CN111494177A - Vision training method considering visual development of both eyes - Google Patents
Vision training method considering visual development of both eyes
- Publication number
- CN111494177A (application number CN202010509023.0A)
- Authority
- CN
- China
- Prior art keywords
- vision
- training
- binocular
- visual
- dominant eye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H5/00—Exercisers for the eyes
- A61H5/005—Exercisers for training the stereoscopic view
Abstract
A vision training method that accounts for the visual development of both eyes comprises two parts: a binocular object-image perception detection module and a vision training module. The binocular object-image perception detection module further comprises a binocular simultaneous-vision brightness-contrast relative-threshold detection module, a binocular object-image displacement detection module, and a binocular object-image size-disparity detection module. The vision training module generates vision training optotypes suited to the patient's current condition according to the results returned by the binocular object-image perception detection module. Fine monocular training performed within the binocular visual field markedly improves visual acuity and training efficiency; by removing the obstacles of binocular rivalry, object-image displacement, and object-image size disparity, the method clears the way for establishing and refining sensory fusion and markedly improves the efficiency with which binocular vision is established and perfected.
Description
Technical Field
The invention relates to the field of disease treatment, and in particular to a vision training method that accounts for the visual development of both eyes.
Background
Amblyopia is a functional eye disorder caused by abnormal visual experience during the visual development period, such as monocular strabismus, anisometropia, moderate-to-high refractive error, or form deprivation; best-corrected visual acuity in one or both eyes is below the normal value, yet there is no organic lesion. With an incidence of 3-5%, amblyopia is one of the most common eye diseases in visually abnormal children, and China has roughly ten million amblyopia patients. Amblyopia is a significant public-health problem, and an efficient rehabilitation method is urgently needed.
Form deprivation and abnormal binocular interaction are the two major causes of amblyopia. Amblyopia arising from high or moderate-to-high refractive error, congenital cataract, congenital ptosis, and similar conditions is chiefly form-deprivation amblyopia, whereas amblyopia caused by anisometropia and strabismus stems mainly from abnormal interaction between the two eyes.
In conventional treatment, the amblyopia patient first wears glasses to correct the refractive error, and surgery is performed to remove form-deprivation factors such as cataract or ptosis. The dominant eye, whose corrected vision is better, is then patched, and the eye with lower corrected vision is used to view and identify various optotypes that stimulate visual activity, thereby promoting the further visual development and acuity of the non-dominant eye. Typical training elements include colored light flashes, figures on rotating black-and-white grating backgrounds, bead threading, needle threading, tracing figures and letters with a fine pen, and figures and letters of various sizes.
In ophthalmology, the dominant eye refers to the eye with higher corrected vision, the fixating eye in strabismus, the eye with the lower refractive error in anisometropia, and the eye that is not suppressed during binocular viewing; the non-dominant eye refers to the eye with lower corrected vision, the deviating eye in strabismus, the eye with the higher refractive error, and the eye that is suppressed during binocular viewing. When the two eyes have equal corrected vision, with no strabismus, no anisometropia, and neither eye suppressed, the sighting-dominant eye is taken as the dominant eye by default, and the other eye as the non-dominant eye.
These conventional treatments for amblyopia share two general technical problems.
The first problem: patching the dominant eye trains only the non-dominant eye, so only the non-dominant eye's visual ability develops, while occlusion therapy actually suppresses the visual development of the dominant eye. For binocular amblyopia patients whose two eyes have different corrected vision — amblyopia present in both eyes, one merely corrected better than the other — this approach cannot rehabilitate both eyes at once; even if both eyes are treated, each must be trained separately, which greatly lengthens training time and prolongs the rehabilitation period. Long periods of heavy visual training lead to poor compliance in amblyopic children, and the relationship between parents and children consequently often becomes strained.
The second problem: besides reduced corrected vision, the great majority of amblyopia patients also show abnormal development of binocular single vision function, and conventional amblyopia treatment cannot address both the improvement of corrected vision and the development of binocular single vision. Most patients must spend additional time training binocular single vision after the corrected vision of both eyes reaches the standard, which objectively multiplies the rehabilitation period several-fold. If rehabilitation of binocular single vision is not well connected to the acuity training, the previously achieved gains in corrected vision are easily undone and vision regresses, so the amblyopia lingers uncured.
In recent years, ophthalmology and neuroscience experts at home and abroad have reached several new points of consensus on amblyopia: amblyopia is essentially a binocular problem rather than a monocular one; traditional occlusion therapy rests on the assumption that amblyopia is a monocular problem; amblyopia is a disorder in which active suppression by the brain prevents binocular single vision from reaching its best state, which in turn impairs other visual functions including acuity; and, provided the efficiency of improving corrected vision is maintained, an amblyopia training method that also develops binocular single vision is the better choice.
Disclosure of Invention
The invention aims to provide a vision training method that accounts for binocular visual development: binocular object-image perception is evaluated first and the binocular object-image perception parameters are measured; vision training optotypes suited to the patient's current condition are then generated from those parameters, markedly improving the efficiency of corrected-vision improvement in both eyes while still supporting binocular visual development.
To achieve this purpose, the invention adopts the following technical scheme:
The invention discloses a vision training method that accounts for binocular visual development, comprising two parts: a binocular object-image perception detection module and a vision training module. The binocular object-image perception detection module further comprises a binocular simultaneous-vision brightness-contrast relative-threshold detection module, a binocular object-image displacement detection module, and a binocular object-image size-disparity detection module. The vision training module generates vision training optotypes suited to the patient's current condition according to the results returned by the binocular object-image perception detection module.
Both the binocular object-image perception detection module and the vision training module rely on a binocular image-separation technique to present different pictures to the two eyes.
The vision training method that accounts for binocular visual development comprises the following steps:
Step 1: the patient is examined by the binocular object-image perception detection module to obtain a detection result comprising the binocular simultaneous-vision brightness-contrast relative threshold, the binocular object-image relative displacement value, and the binocular object-image size-disparity value.
Step 2: the vision training module generates vision training optotypes suited to the patient's current visual stage according to the detection result obtained in Step 1.
Step 3: the patient identifies the vision training optotype and gives a feedback operation; a new vision training optotype is then generated from the original optotype according to that feedback.
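The three steps above form a closed loop: detect, generate, then adapt from feedback. The following is a minimal Python sketch of that loop; all names (`DetectionResult`, `generate_optotype`, `training_round`) and the one-step size adjustment are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass


@dataclass
class DetectionResult:
    """Step 1 outputs of the binocular object-image perception detection."""
    brightness_contrast_threshold: tuple  # ((w, x), (y, z)) per the recording format
    image_displacement: tuple             # (dx, dy) relative displacement in pixels
    image_size_disparity: float           # fractional size difference between the eyes


def generate_optotype(result: DetectionResult, size: int) -> dict:
    """Step 2 (sketch): derive per-eye optotype parameters from the detection result."""
    dx, dy = result.image_displacement
    return {
        "size_px": size,
        # Offset the non-dominant eye's optotype by the measured displacement.
        "non_dominant_offset": (dx, dy),
        # Keep the perceived size difference of the two eyes' optotypes under 2.5%.
        "size_scale_non_dominant": min(1.025, max(0.975, 1.0 + result.image_size_disparity)),
    }


def training_round(result: DetectionResult, size: int, feedback_correct: bool) -> dict:
    """Step 3 (sketch): the feedback operation yields the next ('new') optotype."""
    next_size = max(1, size - 1) if feedback_correct else size + 1
    return generate_optotype(result, next_size)
```

Each new optotype becomes the original optotype for the next round, so repeated calls to `training_round` model the repeated training described below.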
Preferably, the spatial position of the vision training optotype presented to one eye is offset, by the binocular object-image relative displacement value, from the spatial position of the corresponding optotype presented to the other eye.
Preferably, the size difference between the vision training optotypes perceived by the patient at corresponding positions of the two eyes is less than 2.5%.
The optotype currently being trained on is called the original vision training optotype, and the optotype generated by the patient's feedback operation on it is called the new vision training optotype. When the new optotype is used for training, it in turn becomes the original optotype for that round, and the process repeats.
Preferably, when the accuracy of the feedback operations is greater than or equal to 90%, the new optotype is made smaller than or equal to the original optotype.
Preferably, when the accuracy of the feedback operations is less than or equal to 10%, the new optotype is made larger than the original optotype.
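The two accuracy rules above can be sketched as a single sizing function. The 10% step factor and the "otherwise unchanged" branch are illustrative assumptions; the patent only constrains the direction of the size change.

```python
def next_optotype_size(current_size: float, n_correct: int, n_total: int) -> float:
    """Adjust the optotype size from feedback accuracy over a block of trials.

    Implements the preferred rules from the description:
      accuracy >= 90% -> new size <= original size (training becomes harder)
      accuracy <= 10% -> new size >  original size (training becomes easier)
    The 10% step factor is an illustrative assumption, not from the patent.
    """
    accuracy = n_correct / n_total
    if accuracy >= 0.90:
        return current_size * 0.9   # shrink: satisfies "less than or equal"
    if accuracy <= 0.10:
        return current_size * 1.1   # enlarge: satisfies "larger than"
    return current_size             # otherwise unchanged (an assumption)
```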
Preferably, the patient perceives the vision training optotype with both eyes simultaneously; or the optotype flickers before the dominant eye while appearing continuously before the non-dominant eye; or the optotypes before the dominant and non-dominant eyes flicker alternately.
Preferably, the binocular image-separation technique is at least one of: shutter-type 3D glasses, red-green anaglyph 3D glasses, red-blue anaglyph 3D glasses, polarized 3D glasses, a virtual-reality headset, or virtual-reality glasses.
Preferably, when the feedback operation is correct, the new optotype is smaller than or equal to the original optotype.
Preferably, when the feedback operation is wrong, the new optotype is larger than the original optotype.
Contrast refers to the ratio of the maximum luminance to the minimum luminance in an image. The optotype contrast value is the luminance ratio between the optotype figure and its background color: the greater the luminance difference between figure and background, the higher the optotype contrast; the smaller the difference, the lower the contrast.
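The contrast definition above can be written directly as a ratio of luminances. This is a minimal sketch; the zero-luminance guard is an assumption added to keep the ratio finite for a pure-black background.

```python
def contrast_ratio(figure_luminance: float, background_luminance: float) -> float:
    """Optotype contrast as defined in the description: the luminance ratio
    between the optotype figure and its background (brighter over darker)."""
    hi = max(figure_luminance, background_luminance)
    lo = min(figure_luminance, background_luminance)
    return hi / max(lo, 1e-9)  # guard against a 0 cd/m^2 (pure black) background
```

For example, a 200 cd/m² figure on a 1 cd/m² background has a contrast of 200:1, consistent with the (w-x:1, y-z:1) recording format described later.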
When the optotypes presented to the two eyes are equal in brightness and contrast, one of the patient's eyes may be suppressed. In that case, increasing the brightness of the optotype before the non-dominant eye, or reducing the brightness of the optotype before the dominant eye, can eliminate the monocular suppression and produce binocular simultaneous vision; likewise, increasing the contrast of the optotype before the non-dominant eye, or reducing the contrast of the optotype before the dominant eye, can eliminate the suppression and produce binocular simultaneous vision.
Preferably, the contrast of the vision training optotype before the non-dominant eye is greater than or equal to that before the dominant eye.
Preferably, the brightness of the vision training optotype before the non-dominant eye is greater than or equal to that before the dominant eye.
During detection with the binocular simultaneous-vision brightness-contrast relative-threshold detection module, the optotype presented to the dominant eye is called the dominant-eye simultaneous-vision detection optotype, and the optotype presented to the non-dominant eye the non-dominant-eye simultaneous-vision detection optotype; the brightness and contrast of both are adjustable. The binocular simultaneous-vision brightness-contrast relative threshold is the combination of brightness and contrast values of the two detection optotypes at the moment the patient's non-dominant eye switches from the suppressed state to binocular simultaneous vision.
Within the same disease stage of the same patient, varying the brightness or contrast of one eye's optotype during detection yields several groups of binocular simultaneous-vision brightness-contrast relative thresholds.
The optotypes for the two eyes in the binocular object-image displacement detection module carry positioning marks at corresponding positions; the binocular object-image relative displacement value is the displacement between the two optotypes at the moment the patient perceives the positioning mark on one eye's optotype as coinciding with that on the other eye's optotype.
Binocular object-image size disparity, also called aniseikonia in ophthalmology, means that the retinal images of the two eyes differ in size or shape.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
1. Fine monocular training within the binocular visual field markedly improves the efficiency of acuity improvement;
2. Intelligent judgment is performed by computer software logic, closely tied to the patient's current acuity and binocular vision state, reducing the inefficiency of manual intervention;
3. The optotype design clears the obstacles of binocular rivalry, object-image displacement, and object-image size disparity for the establishment and refinement of sensory fusion, markedly improving the efficiency with which binocular vision is established and perfected.
Drawings
FIG. 1 is a schematic view of the binocular simultaneous-vision brightness-contrast relative-threshold detection interface;
FIG. 2 is a schematic view of the binocular object-image displacement detection module interface when the optotypes perceived by the patient are not separated;
FIG. 3 is a schematic view of the binocular object-image displacement detection module interface when the optotypes perceived by the patient have separated;
FIG. 4 is a schematic view of the interface shown on the display screen by the binocular image size-disparity detection module;
FIG. 5 is a schematic view of the optotype as perceived by the patient before deformation, during binocular vertical size-disparity detection;
FIG. 6 is a schematic view of the optotype as perceived by the patient after deformation, during binocular vertical size-disparity detection;
FIG. 7 is a schematic view of the optotype as perceived by the patient before deformation, during binocular horizontal size-disparity detection;
FIG. 8 is a schematic view of the optotype as perceived by the patient after deformation, during binocular horizontal size-disparity detection;
FIG. 9 is a schematic view of the dominant-eye interface of the vision training module;
FIG. 10 is a schematic view of the non-dominant-eye interface of the vision training module.
Detailed Description
The drawings are for illustration only and are not to be construed as limiting this patent; for clarity, some components in the drawings may be omitted, enlarged, or reduced, and contrast or color differences do not represent the size or appearance of actual products.
It will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted. The technical solution of the invention is further described below with reference to the drawings and embodiments.
The implementation tools comprise wireless 3D shutter glasses (NVIDIA 3D Vision Pro, 2nd generation), an ASUS computer, an ASUS LCD monitor (VG278: 27-inch TN panel, 16:9 aspect ratio, 1920 × 1080 resolution, 3D-capable, screen dimensions 597.73 mm × 336.22 mm), a Logitech G502 RGB wired optical mouse, and a computer program written in C++; online and local database management systems were built to retain the various data.
In this embodiment, the implementation method of the present invention includes the following steps:
First, basic information entry:
Relevant information is entered and parameters are set before detection begins. Specifically: after the program starts, the patient's basic information is entered, including name, date of birth, sex, contact information, e-mail, and which eye is non-dominant.
Second, patient preparation:
The patient wears his or her own optical correction glasses with the wireless 3D glasses over them; paired with the system's software and hardware settings, this realizes the binocular image-separation viewing state. The chair height is adjusted so that the midpoint between the patient's eyes faces the center of the display, the line of sight is perpendicular to the display plane, and the viewing distance is 50 cm. For more stable and comfortable head fixation, the subject's head may be fixed with a forehead rest, chin rest, or the like; once the head is fixed, the viewing distance is fixed.
If the patient is using the method for the first time, or has used it only a few times, basic instruction is given first so that the patient learns to click the buttons on the screen with the mouse; a family member may also operate the mouse on the patient's behalf, guided by the patient's verbal or body language.
To clarify the implementation, one point needs explanation: the optotype presented before the eyes refers to the optotype objectively displayed on the screen, whereas the optotype perceived by the patient refers to the appearance the patient subjectively perceives. The two are distinct.
Third, binocular object-image perception detection:
1. Binocular simultaneous-vision brightness-contrast relative-threshold detection:
FIG. 1 is a schematic view of the binocular simultaneous-vision brightness-contrast relative-threshold detection interface. The color values of the background 1, the dominant-eye contrast optotype 2, and the non-dominant-eye contrast optotype 3 can each be set independently; adjusting these color values changes the contrast and brightness of the two eyes' optotypes.
The brightness-contrast combination data are recorded as (w-x:1, y-z:1), where w is the brightness of the dominant-eye simultaneous-vision detection optotype and x:1 its contrast, and y is the brightness of the non-dominant-eye simultaneous-vision detection optotype and z:1 its contrast. For example, a detection result of the binocular simultaneous-vision brightness-contrast relative-threshold detection module is recorded as (100 cd/m²-200:1, 300 cd/m²-400:1), where cd/m² is the luminance unit candela per square meter. The luminance and contrast values can be derived from the current display parameters and the color values of the different parts of the graphical interface on the display.
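Deriving the (w-x:1, y-z:1) record from on-screen gray levels can be sketched with a simple gamma model of the display. The `max_luminance`, `black_luminance`, and `gamma` values below are illustrative assumptions; the real mapping depends on the monitor's calibration, not on this formula.

```python
def gray_level_to_luminance(level: int, max_luminance: float = 300.0,
                            black_luminance: float = 0.3, gamma: float = 2.2) -> float:
    """Approximate on-screen luminance (cd/m^2) of a gray level R=G=B=level.

    Simple gamma model; max_luminance, black_luminance, and gamma are
    illustrative assumptions, not measured display parameters.
    """
    return black_luminance + (max_luminance - black_luminance) * (level / 255.0) ** gamma


def record_threshold(dom_level: int, nondom_level: int, background_level: int = 0):
    """Build the (w-x:1, y-z:1) record from the optotype and background gray levels."""
    bg = gray_level_to_luminance(background_level)
    w = gray_level_to_luminance(dom_level)      # dominant-eye optotype luminance
    y = gray_level_to_luminance(nondom_level)   # non-dominant-eye optotype luminance
    # Contrast is the luminance ratio of optotype figure to background.
    return (w, w / bg), (y, y / bg)
```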
In the initial state, the RGB color values of the background 1 are R =0, G =0, B =0, i.e. pure black; the RGB color values of the dominant eye contrast optotype 2 and the non-dominant eye contrast optotype 3 are R =66, G =66, B =66, i.e. gray.
For brevity, all RGB color-value changes of the optotypes and background in this embodiment keep R = G = B; that is, the R, G, and B values are changed equally.
After the patient puts on the wireless 3D glasses, the dominant eye sees only the dominant-eye contrast optotype 2 and cannot see the non-dominant-eye contrast optotype 3; the non-dominant eye sees only the non-dominant-eye contrast optotype 3 and cannot see the dominant-eye contrast optotype 2. The other visual elements on the interface are visible only to the non-dominant eye.
The dominant-eye contrast optotype 2 is a square and the non-dominant-eye contrast optotype 3 a circle; the side length of optotype 2 and the diameter of optotype 3 are both 139 pixels. With the color value of background 1 unchanged, increasing the RGB color value of the non-dominant-eye contrast optotype 3 increases the brightness and contrast of that optotype.
The patient is asked whether the dominant-eye contrast optotype 2 and the non-dominant-eye contrast optotype 3 can be seen simultaneously. If both are visible at once — i.e. the patient has simultaneous vision with the current interface — button 4 is clicked as correct feedback, and the next detection and evaluation begins. If the patient cannot see the non-dominant-eye contrast optotype 3, button 5 is clicked as wrong feedback, and the brightness and contrast of optotype 3 are increased: the R, G, and B values of its RGB color are each incremented by 1, making it a brighter gray than before.
If the patient keeps giving wrong feedback, the contrast and brightness of the non-dominant-eye contrast optotype 3 keep increasing until correct feedback is given. When correct feedback is given, the RGB color values of the dominant-eye and non-dominant-eye optotypes at that moment are recorded as the binocular simultaneous-vision brightness-contrast relative threshold.
When the patient continuously makes an error feedback, the RGB color values of the non-dominant eye contrast sighting target 3 reach R =255, G =255 and B =255, the error feedback is still made, the RGB color values of the dominant eye contrast sighting target 2 are reduced, namely the R value, the G value and the B value in the RGB color values of the dominant eye contrast sighting target 2 are all reduced by 1 until the patient makes a correct feedback, and the RGB color values of the dominant eye sighting target and the RGB color values of the non-dominant eye sighting target at the moment are recorded as the relative threshold value of the brightness-contrast of the current binocular simultaneous vision; when the RGB color values of the non-dominant eye contrast optotype 3 are R =255, G =255, B =255, and the RGB color values of the dominant eye contrast optotype 2 are as low as R =30, G =30, B =30, the patient still makes an erroneous feedback, and the current binocular simultaneous brightness-contrast relative threshold is null.
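The two-phase adjustment just described — brighten the non-dominant eye's optotype to 255, then dim the dominant eye's down to 30, else record null — is a staircase procedure. A minimal sketch, with the patient's feedback modeled as a callback (`patient_sees_both` is a hypothetical name, not from the patent):

```python
def brightness_contrast_staircase(patient_sees_both):
    """Staircase for the binocular simultaneous-vision relative threshold.

    patient_sees_both(dom, nondom) -> bool models the patient's feedback
    (True = button 4 / correct, False = button 5 / wrong). Gray levels are
    single ints because the embodiment keeps R = G = B. Returns the
    (dom, nondom) gray levels at the first correct feedback, or None when
    nondom has reached 255 and dom has fallen to 30 without success.
    """
    dom, nondom = 66, 66  # initial gray levels from the embodiment
    # Phase 1: brighten the non-dominant-eye optotype one step at a time.
    while nondom <= 255:
        if patient_sees_both(dom, nondom):
            return dom, nondom
        nondom += 1
    nondom = 255
    # Phase 2: dim the dominant-eye optotype one step at a time, down to 30.
    while dom > 30:
        dom -= 1
        if patient_sees_both(dom, nondom):
            return dom, nondom
    return None  # threshold recorded as null
```

For instance, a simulated patient who reports simultaneous vision once the non-dominant optotype is 50 gray levels brighter than the dominant one yields a threshold of (66, 116).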
The relative threshold detection of brightness-contrast is repeatedly carried out 3 times every time the eyes simultaneously, and the average value is taken as the current final record value.
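The staircase described above (brighten the non-dominant optotype by one gray level per false feedback, then dim the dominant optotype once the non-dominant side saturates at 255, down to a floor of 30, with three repetitions averaged) can be sketched as follows. This is an illustrative Python sketch only: the function names, the starting gray levels (dominant 255, non-dominant 0) and the `can_see_both` patient-feedback callback are assumptions not stated in the text.

```python
def find_relative_threshold(can_see_both):
    """Raise the non-dominant-eye optotype gray level by 1 per false
    feedback; once it saturates at 255, lower the dominant-eye level by 1.
    Returns (dominant_gray, non_dominant_gray), or None for the null case.
    Gray levels are uniform RGB values (R = G = B)."""
    dominant, non_dominant = 255, 0        # assumed starting levels
    while not can_see_both(dominant, non_dominant):
        if non_dominant < 255:
            non_dominant += 1              # brighten the non-dominant optotype
        elif dominant > 30:
            dominant -= 1                  # then dim the dominant optotype
        else:
            return None                    # threshold recorded as null
    return dominant, non_dominant

def average_of_three(run):
    """Repeat the detection 3 times and average, as the text specifies."""
    results = [run() for _ in range(3)]
    if any(r is None for r in results):
        return None
    return tuple(sum(values) / 3 for values in zip(*results))
```

For instance, a patient who first reports simultaneous vision at a non-dominant gray level of 120 yields the recorded pair (255, 120).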
2. Detecting the object image displacement of two eyes:
when a patient has strabismus, the object images seen by the two eyes shift apart once binocular fusion is broken, so that images that originally appeared close together become separated.
Fig. 2 is a schematic interface diagram of the two-eye object image displacement detection module when the visual targets sensed by the patient are not separated. The corner vertex of the non-dominant eye displacement sighting mark 8 and the corner vertex of the dominant eye displacement sighting mark 6 are positioned at the same coordinate point, namely the space position of the corner vertex 7. The corner vertex 7 is a positioning mark on the dominant eye displacement optotype 6. Wherein, the RGB color values of the interface background are R =0, G =0 and B = 0; the RGB color value of the non-dominant eye displacement sighting mark 8 is larger than or equal to the RGB color value of the non-dominant eye sighting mark in the relative threshold value of the binocular simultaneous brightness-contrast, and the RGB color value of the dominant eye displacement sighting mark 6 is the RGB color value of the dominant eye sighting mark in the relative threshold value of the binocular simultaneous brightness-contrast.
Fig. 3 is a schematic interface diagram of the two-eye object image displacement detection module when the visual targets sensed by the patient are separated. At this time, the corner vertex 7 on the dominant eye displacement optotype 6 and the corner vertex 9 on the non-dominant eye displacement optotype 8 are not at the same coordinate point, and the corner vertex 9 is a positioning mark on the non-dominant eye displacement optotype 8.
After the patient wears the wireless 3D stereoscopic glasses, the dominant eye can only see the dominant eye displacement sighting mark 6 and can not see the non-dominant eye displacement sighting mark 8; the non-dominant eye can only see the non-dominant eye displacement optotype 8, and cannot see the dominant eye displacement optotype 6. Other visual elements on the interface are visible only to the non-dominant eye.
At this time the position of the dominant eye displacement optotype 6 is fixed, and the patient is instructed to click button 12, button 10, button 14 and button 13 according to the perceived position of the non-dominant eye displacement optotype 8, moving the non-dominant eye displacement optotype 8 up, down, left and right until corner vertex 9 coincides with corner vertex 7. Once corner vertex 9 coincides with corner vertex 7, the patient clicks button 11 to finish the binocular object image displacement detection, and the coordinate displacement value of corner vertex 9 is recorded as the binocular object image relative displacement value.
The coordinate displacement value is recorded as (±h, ±g), where ±h is the horizontal shift value, h being the number of pixels shifted horizontally: a + sign before h indicates a shift to the right, and a - sign before h a shift to the left. Likewise, ±g is the vertical shift value, g being the number of pixels shifted vertically: a + sign before g indicates a shift upward, and a - sign before g a shift downward.
The binocular object image relative displacement detection is likewise repeated 3 times, and the average is taken as the final recorded value of the current binocular object image displacement.
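The recording convention above can be sketched as follows, again as an illustrative assumption: the function names and the screen coordinate convention (x grows rightward, y grows upward) are not specified in the text.

```python
def displacement(fixed_vertex, moved_vertex):
    """Displacement (±h, ±g) of the non-dominant-eye corner vertex relative
    to the dominant-eye corner vertex, in screen pixels. Assumes x grows
    to the right and y grows upward, matching the sign convention above."""
    h = moved_vertex[0] - fixed_vertex[0]   # + right, - left
    g = moved_vertex[1] - fixed_vertex[1]   # + up, - down
    return (h, g)

def mean_displacement(samples):
    """Average of the 3 repeated detections -> final recorded value."""
    hs = [s[0] for s in samples]
    gs = [s[1] for s in samples]
    return (sum(hs) / len(samples), sum(gs) / len(samples))
```

For example, a non-dominant vertex moved to 4 pixels right and 3 pixels below the fixed vertex is recorded as (+4, -3).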
3. Detecting the object image inequality of two eyes:
fig. 4 is a schematic interface diagram presented by a display screen of the binocular image disparity detection module. The dominant eye object image unequal sighting target 15 and the non-dominant eye object image unequal sighting target 16 are E-shaped sighting targets with equal side length. Wherein, the RGB color values of the interface background are R =0, G =0 and B = 0; the RGB color value of the non-dominant eye object unequal sighting target 16 is larger than or equal to the RGB color value of the non-dominant eye sighting target in the relative threshold value of the binocular simultaneous brightness-contrast, and the RGB color value of the dominant eye object unequal sighting target 15 is equal to the RGB color value of the dominant eye sighting target obtained when the binocular simultaneous brightness-contrast relative threshold value is detected.
After a patient wears the wireless 3D stereoscopic glasses, the dominant eye can only see the dominant eye object image unequal sighting mark 15 and cannot see the non-dominant eye object image unequal sighting mark 16; the non-dominant eye can only see the non-dominant eye object image unequal sighting target 16 and cannot see the dominant eye object image unequal sighting target 15. Other visual elements on the interface are visible only to the non-dominant eye.
When the object image inequality detection in the vertical direction of the two eyes is performed, the position and the shape of the dominant eye object image inequality sighting target 15 are kept fixed, the non-dominant eye object image inequality sighting target 16 performs integral displacement, and the integral displacement value is the object image displacement value of the two eyes obtained by the object image displacement detection of the two eyes. At this time, the right side of the dominant eye object image unequal sighting target 15 perceived by the patient is close to the left side of the non-dominant eye object image unequal sighting target 16, and the lower side of the dominant eye object image unequal sighting target 15 perceived by the patient is on the same horizontal line with the lower side of the non-dominant eye object image unequal sighting target 16.
Fig. 5 is a schematic interface diagram, as perceived by the patient, before the optotype is deformed during binocular vertical object image inequality detection. In the patient shown in this embodiment, the non-dominant eye sees an object image magnified in both the horizontal and vertical directions. The patient is instructed to observe the dominant eye object image inequality optotype 15 and the non-dominant eye object image inequality optotype 16, and to click button 20 and button 22 to move the upper side of optotype 16, formed by vertex 18 and vertex 19, up or down until it is flush with the horizontal line through vertex 17, thereby deforming optotype 16 as a whole in the vertical direction. Once the upper side of optotype 16, formed by vertex 18 and vertex 19, is flush with the horizontal line through the dominant eye optotype vertex 17, the patient clicks button 21 to confirm the detection result.
Fig. 6 is an interface schematic diagram of the deformed sighting target sensed by the patient when the binocular object image inequality detection module performs binocular vertical object image inequality detection, at this time, the upper side edge of the non-dominant eye object image inequality sighting target 16 formed by the non-dominant eye sighting target vertex 18 and the non-dominant eye sighting target vertex 19 is flush with the horizontal line where the vertex 17 is located, the pixel value from the vertex 18 to the vertex 24 on the display screen at this time is recorded as f, and the pixel value from the vertex 17 to the vertex 23 on the display screen at this time is recorded as k.
The vertical scaling coefficient of the non-dominant eye optotype is given by j = (f/k) × 100%. It describes the degree of object image inequality of the non-dominant eye optotype in the vertical direction relative to the dominant eye optotype. When j is less than 100%, the object image perceived by the non-dominant eye is magnified in the vertical direction relative to that perceived by the dominant eye; when j is greater than 100%, it is minified in the vertical direction; when j equals 100%, the patient has no object image inequality in the vertical direction.
It should be noted that the above f-value and k-value are the lengths of the line segments objectively presented on the display screen, not the perception of the lengths of the line segments perceived by the patient. The patient shown in this example detected f values less than k values.
Fig. 7 is a schematic interface diagram before the visual target is deformed, which is sensed by the patient when the binocular objective unequal detection module performs binocular horizontal objective unequal value detection. Wherein, the RGB color values of the interface background are R =0, G =0 and B = 0; the RGB color value of the non-dominant eye object unequal optotype 27 is larger than or equal to the RGB color value of the non-dominant eye optotype in the relative threshold value of the brightness-contrast of the simultaneous vision of the two eyes, and the RGB color value of the dominant eye object unequal optotype 31 is equal to the RGB color value of the dominant eye optotype in the relative threshold value of the brightness-contrast of the simultaneous vision of the two eyes.
When the object image inequality detection in the horizontal direction of the two eyes is carried out, the position and the shape of the object image inequality sighting target 31 of the dominant eye are kept fixed, the object image inequality sighting target 27 of the non-dominant eye carries out integral displacement, and the displacement value is the object image displacement value of the two eyes obtained by the object image displacement detection of the two eyes. At this time, the upper side of the dominant eye object image inequality sighting target 31 perceived by the patient is close to the lower side of the non-dominant eye object image inequality sighting target 27, and the right side of the dominant eye object image inequality sighting target 31 perceived by the patient is on the same vertical line with the right side of the non-dominant eye object image inequality sighting target 27.
The patient is instructed to observe the optotypes seen by the two eyes and to click button 25 and button 26 to move the left side of the non-dominant eye object image inequality optotype 27, formed by vertex 33 and vertex 34, left or right until it is flush with the vertical line through vertex 32, thereby deforming optotype 27 as a whole in the horizontal direction. Once the left side of optotype 27, formed by vertex 33 and vertex 34, is flush with the vertical line through vertex 32, the patient clicks button 30 to confirm the detection result.
Fig. 8 is a schematic diagram of an interface of the deformed sighting target sensed by the patient when the binocular eye objective inequality detection module performs binocular eye horizontal objective inequality detection, where the left side of the non-dominant eye objective inequality sighting target 27 formed by the vertex 33 and the vertex 34 is flush with a vertical line where the vertex 32 is located, a pixel value between the vertex 33 and the vertex 28 at this time is recorded as m, and a pixel value between the vertex 32 and the vertex 29 at this time is recorded as n.
The horizontal scaling coefficient of the non-dominant eye optotype is given by p = (m/n) × 100%. It describes the degree of object image inequality of the non-dominant eye optotype in the horizontal direction. When p is less than 100%, the object image perceived by the non-dominant eye is magnified in the horizontal direction relative to that perceived by the dominant eye; when p is greater than 100%, it is minified in the horizontal direction; when p equals 100%, the patient has no object image inequality in the horizontal direction.
It should be noted that the above m and n values are the lengths of the line segments objectively presented on the display screen, not the perception of the lengths of the line segments perceived by the patient. The patient shown in this example detected a value for m that was less than the value for n.
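Both coefficients share the same form and interpretation, which can be sketched as below. The helper names are illustrative assumptions; the on-screen pixel lengths (f, k for the vertical axis; m, n for the horizontal axis) are as defined in the text.

```python
def scaling_coefficient(non_dominant_px, dominant_px):
    """j = (f/k) * 100 for the vertical axis, p = (m/n) * 100 for the
    horizontal axis: the percentage size of the adjusted non-dominant-eye
    optotype relative to the dominant-eye optotype, measured in on-screen
    pixels (not perceived lengths)."""
    return non_dominant_px / dominant_px * 100.0

def interpret(coeff, axis):
    """Reading of the coefficient per the rules above. A coefficient below
    100 means the patient shrank the on-screen target to match, i.e. the
    non-dominant eye perceives a magnified image."""
    if coeff < 100.0:
        return f"non-dominant image magnified ({axis})"
    if coeff > 100.0:
        return f"non-dominant image minified ({axis})"
    return f"no object image inequality ({axis})"
```

The patient of this embodiment, with f smaller than k, therefore has j below 100%, matching the stated magnification of the non-dominant eye's image.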
Fourthly, visual training:
FIG. 9 is a schematic diagram of the dominant eye interface of the vision training module. The central part of the interface contains several squares; the squares and the graphic elements on them, i.e. the vision training optotypes, are to be identified by the patient and thereby serve the training. Squares 38 and 39 and the graphic elements on them are the dominant eye vision training optotypes. Compared with presenting only a single training optotype in the interface, placing several training optotypes close together in the same picture increases the identification difficulty and addresses the difficulty amblyopia patients have in distinguishing and reading crowded targets. Frame 35 is the bounding box of square 38, and its side length is 139 pixels. The optotype currently to be identified is shown in square 38; the prompt dashed frame 41 flashes regularly around it to direct the patient's fixation and identification. Each training optotype has 4 dot placement positions, and the number and placement of dots on the different optotypes are randomly generated by the system to prevent the patient from memorizing the answers. Dots 36 and 37 are placed at the upper-left and upper-right placement positions of square 38.
The interface background of fig. 9 is a random color or a random picture. The RGB color values of the ground color of the block 38, the block 39, and the scoring block 40 all take R =0, G =0, and B =0 initially, i.e., pure black; the RGB color values of the dots and bounding boxes on the square 38, the dots and bounding boxes on the square 39, the text and bounding boxes on the scoring box 40, and the prompt dashed box 41 all equal the dominant eye optotype RGB color values obtained when the relative brightness-contrast thresholds are detected for both eyes simultaneously.
FIG. 10 is a schematic view of a non-dominant eye interface of the vision training module. The background of the interface of fig. 10 is the same as the background of the interface of fig. 9. Blocks 43 and 45 are the non-dominant eye vision training optotypes. Blocks 43 and 38 are a set of the vision training optotypes in corresponding positions for the left and right eyes; the blocks 45 and 39 are a group of visual training sighting marks at the corresponding positions of the left eye and the right eye; the positions of the dots on the visual training sighting marks corresponding to the left eye and the right eye can be the same or different.
The option buttons 47, 48 and 49 are answer options, and only one answer option represents correct feedback for the patient to identify and give feedback.
The RGB color values of the ground color of square 43, square 45, option button 47, option button 48, option button 49 and score box 46 all initially take R=0, G=0, B=0, i.e. pure black. The RGB color values of the dots and bounding boxes on squares 43 and 45, on option buttons 47, 48 and 49, of the text and bounding box on score box 46, and of the prompt dashed frame 50 are all greater than or equal to the non-dominant eye optotype RGB color values obtained in the binocular simultaneous-vision brightness-contrast relative threshold detection. The purpose of this design is to make the contrast of the non-dominant eye training optotypes greater than or equal to that of the dominant eye training optotypes, or their brightness greater than or equal to that of the dominant eye training optotypes, so that the two eyes can see simultaneously.
The block 43 and the block 38 are a set of visual training optotypes at corresponding positions of the left and right eyes, and the spatial position of the block 43 in the interface of fig. 10 is equivalent to the spatial position of the block 38 in the interface of fig. 9 shifted by the object-image shift values of the two eyes; similarly, the other visual training optotypes, the option buttons, the prompt dotted line frames, the score frames, and the interface background in the interface of fig. 10 are all shifted in the same logic. The design mainly aims to overlap the object images seen by two eyes on the premise that the backgrounds of the two eyes are the same, and the binocular fusion is facilitated.
The size of the block 43 in the horizontal direction is equal to the size of the block 38 in the horizontal direction multiplied by p, wherein p is a horizontal direction scaling coefficient of the non-dominant eye optotype detected and obtained by the two-eye object image inequality detection module; the size of the block 43 in the vertical direction is equal to the size of the block 38 in the vertical direction multiplied by j, wherein j is a vertical direction scaling coefficient of the non-dominant eye optotype obtained by the detection of the two-eye object image inequality detection module; the box 43 and its upper box border, box background and dots are all scaled and distorted in size with the same logic. Similarly, the other visual training optotypes, the prompt dotted line frames, the score frames, and the interface background in the interface of fig. 10 are scaled and deformed in size in the same logic. The design is mainly aimed at making the size difference of visual training sighting marks at corresponding positions perceived by two eyes in the visual training module less than 2.5%.
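The layout rule for the non-dominant eye interface (shift every element by the binocular displacement (h, g), then scale it by p% horizontally and j% vertically) can be sketched as follows. The `Rect` type, its field names and the coordinate convention are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float   # left edge, screen pixels
    y: float   # top edge, screen pixels
    w: float   # width, screen pixels
    h: float   # height, screen pixels

def non_dominant_rect(dom: Rect, shift, p: float, j: float) -> Rect:
    """Place a non-dominant-eye element from its dominant-eye counterpart:
    shift by the binocular object image displacement (h, g), then scale by
    the horizontal coefficient p (%) and vertical coefficient j (%)."""
    dx, dy = shift
    return Rect(dom.x + dx, dom.y + dy,
                dom.w * p / 100.0, dom.h * j / 100.0)
```

Applying the same transform to every square, dot, bounding box, option button and prompt frame keeps the two monocular pictures aligned, so the perceived size difference between corresponding optotypes stays within the stated 2.5%.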
When the binocular simultaneous-vision brightness-contrast relative threshold obtained in the threshold detection is null, the patient is deemed to have severe suppression of the non-dominant eye. In that case the dominant eye training optotypes flicker while the non-dominant eye training optotypes remain continuously visible, or the two sets of optotypes flicker alternately, strengthening the presence of the non-dominant eye optotypes so as to remove the suppression. In this embodiment, if the threshold is null, the interface of fig. 9 as a whole flickers on for 1 second and to a black screen for 1 second, so that the suppressed state of the non-dominant eye can be momentarily relieved during the black screen and binocular simultaneous vision can be awakened.
When the binocular simultaneous-vision brightness-contrast relative threshold obtained in the threshold detection is not null, the RGB color values of the non-dominant eye training optotypes are all greater than or equal to the non-dominant eye optotype RGB color values obtained in that detection, and the RGB color values of the dominant eye training optotypes are all equal to the dominant eye optotype RGB color values obtained in that detection, ensuring that the patient perceives the dominant eye and non-dominant eye training optotypes simultaneously.
When training begins, the patient watches the training optotypes on the display screen with both eyes, specifically the squares surrounded by the regularly flashing prompt dashed frames: the dominant eye watches square 38 wrapped by prompt dashed frame 41, and the non-dominant eye watches square 43 wrapped by prompt dashed frame 50. Square 38 and square 43 overlap and merge into a fused square at the visual center. In the fused square, dots 36 and 42 overlap at the upper-left corner, dot 37 lies at the upper-right corner, and dot 44 lies at the lower-right corner; that is, the fused square carries one dot at each of its upper-left, upper-right and lower-right corners, so the correct-feedback option is option button 48. The graphic area of option buttons 47, 48 and 49 is the mouse-click response area: clicking option button 48 is correct feedback; clicking option button 47 or 49 is error feedback; if the patient clicks outside the option buttons, the system pops up a window prompting the patient to click an option button. The option buttons are placed only on the non-dominant eye interface, strengthening the non-dominant eye's competitive advantage in binocular rivalry.
Whether the patient gives correct or incorrect feedback, the prompt dashed frame then shifts to the next training optotype, i.e. wraps squares 39 and 45. At the same time three new option buttons are generated automatically, only one of which is the correct-feedback option button; the option buttons other than the correct-feedback one serve as interference options. Preferably, the graphic style of an interference option is the style of the square inside the prompt dashed frame as seen by the dominant eye alone or by the non-dominant eye alone.
The system awards 1-10 points according to how long the patient takes to give each correct feedback: the shorter the time, the higher the score; the longer the time, the lower the score. If correct feedback takes more than 5 seconds, 0 points are awarded; each wrong feedback deducts 3 points; the total score may be negative. Motivational voice, music and animation can be added to the interface, giving different prompting and motivational visual and auditory elements according to the patient's feedback.
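The scoring rule above can be sketched as follows. The text fixes only the endpoints of the 1-10 band, the 5-second cutoff and the 3-point penalty; the linear time-to-score interpolation and the function name are assumptions.

```python
def score_feedback(correct: bool, seconds: float) -> int:
    """1-10 points for a correct answer by response time, 0 points past
    5 seconds, -3 points for a wrong answer (per the rules above)."""
    if not correct:
        return -3
    if seconds > 5.0:
        return 0
    # Assumed linear mapping: 0 s -> 10 points, 5 s -> 1 point.
    return max(1, round(10 - seconds * 9 / 5))
```

A faster correct answer thus always scores at least as high as a slower one, and the total across rounds can go negative through repeated wrong answers.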
The system settles the score once after every two feedback operations; the score reflects feedback accuracy and speed: the higher the accuracy and the faster the feedback, the higher the score, and conversely the lower. The training optotype currently being identified by the patient is called the original training optotype; according to the score, a new vision training level is generated before the two eyes, and the training optotypes in the new level are called new training optotypes.
In another embodiment of the invention, the system settles the score once per feedback operation made by the patient and generates a new visual training optotype based on whether a correct feedback or an incorrect feedback is made in combination with the correct feedback.
In this example, the logic for generating the dimensions and contrast of the new visual training optotypes in the new visual training level is as follows:
1. if the scores of the two feedbacks exceed 15 points, the size of the new visual training optotype is reduced by 3% in a horizontal-vertical equal ratio on the basis of the size of the original visual training optotype;
2. the scores of the two feedbacks are 10-15 points, and then the size of the new visual training optotype is reduced by 2% in a horizontal-vertical equal ratio on the basis of the size of the original visual training optotype;
3. if the scores of the two feedbacks are 5-9 minutes, the size of the new visual training optotype is kept unchanged, the RGB color value of the dominant eye visual training optotype is increased by 5-10, and the RGB color value of the non-dominant eye visual training optotype is decreased by 5-10;
4. if the scores of the two feedbacks are between 0 and 4 points, the size and the RGB color value of the new visual training optotype are kept unchanged;
5. and if the scores of the two feedbacks are negative numbers, the size of the new visual training sighting target is amplified by 1-3% in a horizontal-vertical equal ratio on the basis of the size of the original visual training sighting target.
In this embodiment, the brightness and contrast adjustment of the binocular vision training optotype is realized by increasing or decreasing the RGB color values of the dots in the vision training optotype, and for example, when the RGB color values of the dots are changed from R =80, G =80, B =80 to R =85, G =85, and B =85, that is, the brightness and contrast of the vision training optotype are increased; and when the RGB color value of the dot is changed from R =80, G =80, B =80 to R =75, G =75, B =75, that is, the brightness and contrast of the visual training optotype are reduced.
The smaller the size of the visual training sighting target is, the greater the training difficulty is, and the more beneficial the overcoming of central inhibition and the improvement of fine eyesight are. The larger the visual training optotype size is, the smaller the training difficulty is. The smaller the brightness of the visual training sighting target is, the greater the training difficulty is; the greater the brightness of the visual training optotype, the less training difficulty. The smaller the contrast of the visual training sighting target is, the greater the training difficulty is, and the more favorable the improvement of the contrast sensitivity is; the greater the contrast of the visual training optotype, the less training difficulty.
As the above-described visual training progresses, the size, brightness, and contrast of the visual training optotypes will gradually approach the patient's resolution limit, i.e., the patient's spatial frequency-contrast sensitivity threshold level. The training is carried out on the basis of the visual training optotype with the spatial frequency-contrast sensitivity threshold level, so that the vision improvement efficiency can be obviously improved, which is a consensus of the ophthalmology community.
After the patient is trained for a period of time by using the method disclosed by the invention, the spatial frequency-contrast sensitivity threshold of the patient is gradually increased. Because of the system algorithm, the size, brightness and contrast of the visual training optotypes generated by the system can be changed along with the change of the visual ability of the patient, so that the visual training can be always kept at a high efficiency level.
According to the feedback mode of the vision training module of the method disclosed by the invention, correct feedback can be made by a binocular matching method. Therefore, the method has the advantages of establishing and perfecting the binocular vision, balancing the binocular vision, overlapping the object images of the two eyes, eliminating the monocular inhibition and reducing the object image inequality, improving the vision improving efficiency and synchronously improving the binocular vision function, and achieving multiple purposes.
The patient performs binocular object image perception detection once every other day, and the parameter settings in the vision training module are corrected according to the latest detection result.
The system sets the duration of each visual training period to be 15-20 minutes, and after the duration is over, the pop-up window prompts the patient to finish training. Training for 2-3 periods every day, and gradually stopping the training until the vision correction and sensory fusion ability of the patient reaches the standard and is stable.
It should be understood that the above-described embodiments are merely examples given to clearly illustrate the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.
Claims (10)
1. A vision training method giving consideration to binocular visual development, comprising two parts, namely a binocular object image perception detection module and a vision training module, wherein the binocular object image perception detection module further comprises a binocular simultaneous-vision brightness-contrast relative threshold detection module, a binocular object image displacement detection module and a binocular object image inequality detection module; the vision training module generates vision training optotypes suitable for training the patient in the current condition according to the detection results of the binocular object image perception detection module;
the binocular object image perception detection module and the vision training module are used for enabling two eyes to present different pictures by means of a binocular vision separating technology;
the vision training method comprises the following implementation steps:
the first step: the patient is examined by the binocular object-image perception detection module to obtain a detection result, which comprises a relative threshold of brightness-contrast for binocular simultaneous vision, a relative displacement value of the binocular object images, and an inequality value of the binocular object images;
the second step: the vision training module generates a vision training optotype suited to the patient's current visual stage according to the detection result obtained in the first step;
the third step: the patient identifies the vision training optotype and gives a feedback operation, and a new vision training optotype is generated on the basis of the original one according to that feedback operation.
2. The vision training method considering binocular visual development according to claim 1, wherein the vision training optotype presented before one eye is offset, relative to the optotype at the corresponding position before the other eye, by the relative displacement value of the binocular object images.
3. The vision training method considering binocular visual development according to claim 1, wherein the difference in size between the vision training optotypes perceived by the patient at corresponding positions before the two eyes is less than 2.5%.
4. The vision training method considering binocular visual development according to claim 1, wherein, when the accuracy of the feedback operation is greater than or equal to 90%, the size of the new vision training optotype is smaller than or equal to the size of the original vision training optotype.
5. The vision training method considering binocular visual development according to claim 1, wherein, when the accuracy of the feedback operation is less than or equal to 10%, the size of the new vision training optotype is larger than the size of the original vision training optotype.
6. The vision training method considering binocular visual development according to claim 1, wherein the patient perceives the vision training optotypes with both eyes simultaneously; or the optotype before the dominant eye flickers while the optotype before the non-dominant eye is displayed continuously; or the optotypes flicker alternately before the dominant eye and the non-dominant eye.
7. The vision training method considering binocular visual development according to claim 1, wherein the binocular image-separation technique is at least one of: shutter-type 3D glasses, red-green anaglyph 3D glasses, red-blue anaglyph 3D glasses, polarized 3D glasses, a virtual reality headset, and virtual reality glasses.
8. The vision training method considering binocular visual development according to claim 1, wherein, when the feedback operation is correct, the size of the new vision training optotype is smaller than or equal to the size of the original vision training optotype.
9. The vision training method considering binocular visual development according to claim 1, wherein, when the feedback operation is incorrect, the size of the new vision training optotype is larger than the size of the original vision training optotype.
10. The vision training method considering binocular visual development according to claim 1, wherein the contrast of the vision training optotype before the non-dominant eye is greater than or equal to the contrast of the optotype before the dominant eye.
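Taken together, claims 2, 4, 5, and 8-10 describe an adaptive presentation loop: matched optotypes are placed before the two eyes, with the non-dominant eye's optotype offset by the measured relative displacement and shown at equal or higher contrast, and the optotype size is then decreased or increased according to feedback accuracy. The Python sketch below is a minimal illustration of that loop, not the patent's implementation; the `Optotype` type, the 0.9 step factor, and all parameter names and units are assumptions, since the claims specify no concrete step sizes or coordinates.

```python
from dataclasses import dataclass

@dataclass
class Optotype:
    x: float         # horizontal position in the eye's view (arbitrary units)
    y: float         # vertical position
    size: float      # angular size (arbitrary units)
    contrast: float  # 0.0 .. 1.0

def next_optotype_size(current_size: float, accuracy: float,
                       step: float = 0.9) -> float:
    """Claims 4/5: shrink when accuracy >= 90%, enlarge when accuracy <= 10%.
    The 0.9 step factor is illustrative; the claims do not specify one."""
    if accuracy >= 0.9:
        return current_size * step   # harder level: smaller optotype
    if accuracy <= 0.1:
        return current_size / step   # easier level: larger optotype
    return current_size              # otherwise hold the current level

def place_binocular_optotypes(x: float, y: float, size: float,
                              dx: float, dy: float,
                              dominant_contrast: float,
                              nondominant_contrast: float):
    """Claims 2/10: offset the non-dominant eye's optotype by the measured
    relative displacement (dx, dy) of the binocular object images, and keep
    its contrast >= the contrast shown to the dominant eye."""
    dominant = Optotype(x, y, size, dominant_contrast)
    nondominant = Optotype(x + dx, y + dy, size,
                           max(nondominant_contrast, dominant_contrast))
    return dominant, nondominant
```

Claim 3's constraint (perceived size difference below 2.5%) is satisfied trivially here by giving both eyes the same nominal size; a fuller implementation would correct for the measured aniseikonia value from the first detection step.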
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010509023.0A CN111494177A (en) | 2020-06-07 | 2020-06-07 | Vision training method considering visual development of both eyes |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111494177A true CN111494177A (en) | 2020-08-07 |
Family
ID=71863657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010509023.0A Pending CN111494177A (en) | 2020-06-07 | 2020-06-07 | Vision training method considering visual development of both eyes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111494177A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101292928A (en) * | 2007-04-25 | 2008-10-29 | 丛繁滋 | Hypometropia, amblyopia therapeutic equipment capable of respectively adjusting relative luminous sight point lightness of right and left eyes |
KR20090040034A (en) * | 2007-10-19 | 2009-04-23 | 이창선 | Fusion training image and training method of it |
CN101947158A (en) * | 2009-12-18 | 2011-01-19 | 中国科学院光电技术研究所 | Binocular self-adapting optical visual perception learning training method and learning training instrument |
CN101947157A (en) * | 2009-12-18 | 2011-01-19 | 中国科学院光电技术研究所 | Eye self-adaptive optical visual perception learning and training method and learning and training instrument |
CN103876886A (en) * | 2014-04-09 | 2014-06-25 | 合肥科飞视觉科技有限公司 | Amblyopia treatment system |
CN105455774A (en) * | 2015-11-17 | 2016-04-06 | 中山大学中山眼科中心 | Psychophysical measurement method for controlling lower aniseikonia on basis of interocular contrast ratio |
CN110314073A (zh) * | 2019-06-28 | 2019-10-11 | 彭磊 | Central optic nerve de-inhibition system and method |
Non-Patent Citations (2)
Title |
---|
SONG HUIQIN: "Aniseikonia after Wearing Corrective Glasses for Refractive Errors (Part 4)", China Glasses Science & Technology Magazine, no. 2009, pages 68 - 71 *
YANG XIAOJUN: "Anisometropia, Aniseikonia and Stereoscopic Vision", China Glasses Science & Technology Magazine, no. 1999, pages 25 - 27 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112120905A (en) * | 2020-09-24 | 2020-12-25 | 杰雯 | Eye movement tracking system and binocular vision-based stereoscopic vision training device |
CN112674709A (en) * | 2020-12-22 | 2021-04-20 | 泉州装备制造研究所 | Amblyopia detection method based on anti-noise |
CN112674709B (en) * | 2020-12-22 | 2022-07-29 | 泉州装备制造研究所 | Amblyopia detection method based on anti-noise |
CN112842251A (en) * | 2021-04-23 | 2021-05-28 | 广东视明科技发展有限公司 | Binocular balance relation evaluation system for binocular competition |
CN112842251B (en) * | 2021-04-23 | 2021-08-10 | 广东视明科技发展有限公司 | Binocular balance relation evaluation system for binocular competition |
CN113662822A (en) * | 2021-07-29 | 2021-11-19 | 广州视景医疗软件有限公司 | Visual target adjusting method based on eye movement, visual training method and device |
CN113662822B (en) * | 2021-07-29 | 2023-09-12 | 广州视景医疗软件有限公司 | Optotype adjusting method based on eye movement, visual training method and visual training device |
CN116098794A (en) * | 2022-12-30 | 2023-05-12 | 广州视景医疗软件有限公司 | De-inhibition visual training method and device |
CN116098794B (en) * | 2022-12-30 | 2024-05-31 | 广州视景医疗软件有限公司 | De-inhibition visual training method and device |
CN116098795A (en) * | 2023-02-16 | 2023-05-12 | 广州视景医疗软件有限公司 | Visual training method and device based on depth parallax |
CN116098795B (en) * | 2023-02-16 | 2024-03-12 | 广州视景医疗软件有限公司 | Visual training device based on depth parallax |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111494177A (en) | Vision training method considering visual development of both eyes | |
Stidwill et al. | Normal binocular vision: Theory, investigation and practical aspects | |
CN110381810B (en) | Screening device and method | |
JP2023071644A (en) | Digital therapeutic corrective spectacles | |
JP2020509790A5 (en) | ||
CN106491323B (en) | For treating the video system and device of amblyopia | |
Apfelbaum et al. | Considering apical scotomas, confusion, and diplopia when prescribing prisms for homonymous hemianopia | |
US10857060B2 (en) | Method and device for improving functional stereopsis in individuals with central vision loss | |
CN112807200B (en) | Strabismus training equipment | |
KR101948778B1 (en) | Cloud Interlocking Visual Enhancement Wearable Device | |
EP4073572A1 (en) | Systems and methods for improving binocular vision | |
CN110850596B (en) | Two-side eye vision function adjusting device and virtual reality head-mounted display equipment | |
CN110269586A (en) | For capturing the device and method in the visual field of the people with dim spot | |
RU2187237C2 (en) | Method for improving sight and/or sight deterioration prophylaxis for video image apparatus users | |
US20220276495A1 (en) | Visual function adjustment method and apparatus | |
CN110812146B (en) | Multi-region visual function adjusting method and device and virtual reality head-mounted display equipment | |
CN215994744U (en) | Training glasses and training system thereof | |
CN110882139B (en) | Visual function adjusting method and device by using graph sequence | |
CN110812145B (en) | Visual function adjusting method and device and virtual reality head-mounted display equipment | |
CN115280219A (en) | System and method for enhancing vision | |
CN115137623A (en) | Training glasses, training system and training method thereof | |
CN113413265A (en) | Visual aid method and system for visual dysfunction person and intelligent AR glasses | |
CN111436901A (en) | Object image inequality measurement method based on multiple control point mode | |
CN113080844B (en) | Visual inspection and visual training device for preferential retina areas | |
US11918287B2 (en) | Method and device for treating / preventing refractive errors as well as for image processing and display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||