CN110604540A - Binocular fusion failure judgment method, fusion function detection method and system - Google Patents


Publication number
CN110604540A
Authority
CN
China
Prior art keywords: annular, sighting, target, testee, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911013107.9A
Other languages
Chinese (zh)
Other versions
CN110604540B (en)
Inventor
陈小荣
王文文
Current Assignee
CHONGQING NENGNENG TECHNOLOGY Co.,Ltd.
Original Assignee
Chongqing Health Pharmaceutical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Health Pharmaceutical Technology Co Ltd
Priority: CN201911013107.9A
Publication of CN110604540A
Application granted
Publication of CN110604540B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; instruments for examining the eyes
    • A61B 3/0016: Operational features thereof
    • A61B 3/0041: Operational features characterised by display arrangements
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/08: Subjective types for testing binocular or stereoscopic vision, e.g. strabismus
    • A61B 3/085: Subjective types for testing strabismus
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection

Abstract

The invention discloses a binocular fusion failure judgment method, a fusion function detection method and a system thereof. A processing terminal randomly selects 2 annular optotypes with different numbers of gaps to generate a test image, and a vision-separating system projects the 2 annular optotypes in the test image to the left eye and the right eye respectively so that the subject's brain combines them into a single image. The subject reports the observed combined image to the processing terminal through an input terminal, and the processing terminal judges whether the reported result is consistent with the correct combined-image result. If consistent, the annular optotypes in the test image are updated and their separation is adjusted by a horizontal displacement before the subject observes again; if inconsistent, only the annular optotypes are updated before the subject observes again. Fusion failure is judged only when the subject's reports are wrong several times in succession, and the processing terminal then determines the subject's fusion range from the relative displacement of the 2 annular optotypes.

Description

Binocular fusion failure judgment method, fusion function detection method and system
Technical Field
The invention relates to the field of data processing systems and methods specially adapted for supervision, and in particular to a binocular fusion failure judgment method, a fusion function detection method and a system thereof.
Background
In fusion function detection and training, 2 optotypes are projected to the left and right eyes of the subject respectively, so that the subject's brain combines the 2 optotype images into one. The distance between the 2 optotypes is then changed until the subject's brain can no longer fuse them and 2 separate optotype images are perceived; the subject's fusion range can be determined from the relative displacement of the 2 optotypes at that point.
Traditional fusion function detection mainly uses two approaches. In the first, 2 similar optotypes with a recognizable difference are projected to the subject's eyes, and fusion failure is judged from the subject's subjective report of seeing 2 images with different features. The problem with this approach is that the subject may not faithfully report what is actually seen, or may not understand how to cooperate with the examination, making the result inaccurate. For example, a subject who wants a good examination result may report seeing only one image after actually seeing 2.
In the second approach, 2 identical pictures, or two optotypes that show a depth feature once combined, are projected to the subject's eyes, and fusion failure is judged from whether the subject subjectively reports, or objectively and correctly identifies, the depth figure in the combined image. This approach presupposes that the subject has depth perception. Depth perception, however, is the grade III function of binocular single vision and is absent in some subjects who nevertheless have fusion function, so the fusion range of subjects without stereoscopic vision cannot be objectively examined this way.
Disclosure of Invention
To solve the above technical problems, the present invention provides a binocular fusion failure judgment method, a fusion function detection method, and a system thereof, which make the images easy for the subject to observe and reduce the influence of the subject's subjective factors on the detection.
The technical scheme is as follows:
In a first aspect, a binocular fusion failure judgment method is provided, including:
step 1, randomly selecting 2 annular optotypes with different numbers of gaps;
step 2, projecting the 2 annular optotypes to the left eye and the right eye of the subject respectively;
step 3, determining the subject's observation result and the combined-image result of superimposing the 2 annular optotypes;
step 4, judging whether the observation result is consistent with the combined-image result;
if consistent, randomly selecting an updating manner to update the 2 annular optotypes, adjusting the distance between the 2 annular optotypes, and returning to step 2;
if not, entering step 5;
step 5, judging whether the observation result and the combined-image result have been inconsistent consecutively;
if not, randomly selecting an updating manner to update the 2 annular optotypes and returning to step 2;
if yes, judging that binocular fusion has failed.
With reference to the first aspect, in a first implementable manner of the first aspect, step 1 includes a step of detecting the simultaneous-perception coincidence site, where the simultaneous-perception coincidence site is used to determine the initial positions of the 2 annular optotypes.
With reference to the first implementable manner of the first aspect, in a second implementable manner of the first aspect, the detection of the simultaneous-perception coincidence site includes:
step S1, selecting 2 optotypes that differ obviously and whose geometric centers are easy to identify to form an inspection image, and placing the large optotype and the small optotype at random positions near the center of the inspection image;
step S2, projecting the large optotype and the small optotype to the left eye and the right eye of the subject respectively;
step S3, moving the small optotype until the subject observes that the center of the small optotype coincides with the center of the large optotype, and recording the relative offset direction, horizontal offset and vertical offset of the large and small optotypes at that moment;
step S4, changing the random positions of the large and small optotypes in the inspection image, and exchanging which eye each of the large and small optotypes is projected to;
step S5, moving the small optotype again until the subject observes that the center of the small optotype coincides with the center of the large optotype, and recording the relative offset direction, horizontal offset and vertical offset of the large and small optotypes at that moment;
and step S6, determining the simultaneous-perception coincidence site of the two eyes from the relative offset directions, horizontal offsets and vertical offsets obtained in the two measurements.
With reference to the first aspect, in a third implementable manner of the first aspect, the observation result and the combined-image result include a gap position and/or a gap number, and in step 4 it is judged from the gap position and/or the gap number whether the combined-image result and the observation result are consistent.
With reference to the first aspect, in a fourth implementable manner of the first aspect, the updating manner includes randomly rotating the 2 annular optotypes by M degrees, where M is an integer multiple of 90 degrees.
With reference to the fourth implementable manner of the first aspect, in a fifth implementable manner of the first aspect, the updating manner includes randomly selecting another annular optotype from the remaining annular optotypes to replace any 1 or 2 of the 2 annular optotypes.
In a second aspect, a binocular fusion function detection method is provided, including:
after a processing terminal acquires start information through an input terminal, randomly selecting 2 annular optotypes with different numbers of gaps to form a test image;
the processing terminal projecting the 2 annular optotypes in the test image to the left eye and the right eye of the subject respectively through a vision-separating system;
the processing terminal acquiring the subject's observation result through the input terminal to perform fusion failure judgment;
and the processing terminal, when fusion is judged to have failed, determining the fusion range of the two eyes from the relative displacement of the 2 annular optotypes in the test image.
With reference to the second aspect, in a first implementable manner of the second aspect, the processing terminal generates the test image as follows:
the processing terminal determines the simultaneous-perception coincidence site of the subject;
randomly selects 2 annular optotypes with different numbers of gaps;
and sets the initial positions of the 2 annular optotypes according to the simultaneous-perception coincidence site to generate the test image.
In a third aspect, a binocular fusion function detection system is provided with:
an input terminal for inputting start information;
a processing terminal for generating a test image according to the start information to perform fusion failure judgment, and, when fusion is judged to have failed, determining the subject's fusion range from the relative displacement of the 2 annular optotypes in the test image;
and a vision-separating system for projecting the 2 annular optotypes in the test image to the left eye and the right eye of the subject respectively.
With reference to the third aspect, in a first implementable manner of the third aspect, the processing terminal generating a test image according to the start information to perform fusion failure judgment includes:
step 1, randomly selecting 2 annular optotypes with different numbers of gaps;
step 2, projecting the 2 annular optotypes to the left eye and the right eye of the subject respectively;
step 3, determining the subject's observation result and the combined-image result of superimposing the 2 annular optotypes;
step 4, judging whether the observation result is consistent with the combined-image result;
if consistent, randomly selecting an updating manner to update the 2 annular optotypes, adjusting the distance between the 2 annular optotypes, and returning to step 2;
if not, entering step 5;
step 5, judging whether the observation result and the combined-image result have been inconsistent consecutively;
if not, randomly selecting an updating manner to update the 2 annular optotypes and returning to step 2;
if yes, judging that binocular fusion has failed.
With reference to the first implementable manner of the third aspect, in a second implementable manner of the third aspect, the updating manner includes optotype replacement, optotype rotation, or optotype replacement combined with optotype rotation;
optotype replacement: the processing terminal randomly selects other annular optotypes to replace the annular optotypes in the test image;
optotype rotation: the processing terminal randomly rotates the 2 annular optotypes in the test image by M degrees;
optotype replacement combined with optotype rotation: the processing terminal randomly selects other annular optotypes to replace the annular optotypes in the test image, then randomly rotates the 2 annular optotypes in the updated test image by M degrees.
With reference to the third aspect and any one of the first and second implementable manners of the third aspect, in a fourth implementable manner of the third aspect, the input terminal is further configured to input simultaneous-perception detection information;
the processing terminal is further configured to generate, according to the simultaneous-perception detection information, an inspection image for simultaneous-perception coincidence site detection;
and the vision-separating system is further configured to project the 2 detection optotypes in the inspection image to the left eye and the right eye of the subject respectively.
With reference to the fourth implementable manner of the third aspect, in a fifth implementable manner of the third aspect, the processing terminal generating an inspection image for simultaneous-perception coincidence site detection includes:
step A1, the processing terminal selecting 2 optotypes that differ obviously and whose geometric centers are easy to identify to form an inspection image, and placing the large optotype and the small optotype at random positions near the center of the inspection image;
step A2, projecting the large optotype and the small optotype to the left eye and the right eye of the subject respectively through the vision-separating system;
step A3, moving the small optotype until the subject observes that the center of the small optotype coincides with the center of the large optotype, and recording the relative offset direction, horizontal offset and vertical offset of the large and small optotypes at that moment;
step A4, changing the random positions of the large and small optotypes in the inspection image, and exchanging which eye each of the large and small optotypes is projected to;
step A5, moving the small optotype again until the subject observes that the center of the small optotype coincides with the center of the large optotype, and recording the relative offset direction, horizontal offset and vertical offset of the large and small optotypes at that moment;
and step A6, determining the simultaneous-perception coincidence site of the two eyes from the relative offset directions, horizontal offsets and vertical offsets obtained in the two measurements.
With reference to the third aspect, in a sixth implementable manner of the third aspect, the vision-separating system is provided with a display and polarized vision-separating glasses, or a display and color-filter vision-separating glasses, or VR glasses, or a synoptophore.
Advantageous effects:
1. The annular optotype used for detection has a simple structure and is easy for the subject to observe.
2. The randomness of the detection images observed by the subject is increased, reducing the influence of subjective factors on the detection and making the detection result more objective and accurate.
3. Fusion failure judgment is performed by the number and positions of the gaps, which are easy for the subject to describe.
4. Human-computer interaction during detection and training is enhanced, making the training process more engaging.
5. The system judges the subject's binocular fusion condition automatically, without other people participating in the detection, so the subject can conveniently self-test; this enables network-based remote self-detection.
Drawings
FIG. 1 is a flowchart of the fusion failure judgment method of the present invention;
FIG. 2 is a flowchart of the simultaneous-perception coincidence site detection method of the present invention;
FIG. 3 is an illustration of 4 annular optotypes of the present invention;
FIG. 4 is a diagram of the combined image of the detection optotypes for the simultaneous-perception coincidence site of the present invention;
FIG. 5 is a schematic view of the combined image of 2 annular optotypes of the present invention;
FIG. 6 is a schematic diagram of the combined image after the annular optotype Pic3 of FIG. 5 is rotated 90 degrees counterclockwise;
FIG. 7 is a schematic diagram of the combined image after the annular optotype Pic3 of FIG. 5 is rotated 90 degrees clockwise;
FIG. 8 is a schematic diagram of the combined image after the annular optotype Pic2 of FIG. 5 is replaced;
FIG. 9 is a flowchart of the detection method of the binocular fusion function detection system;
FIG. 10 is a flowchart of the processing terminal generating a test image;
FIG. 11 is a block diagram of the binocular fusion function detection system.
Detailed Description
The invention is further illustrated by the following embodiments and figures.
In a first embodiment, as shown in fig. 1, the flowchart of the binocular fusion failure judgment method includes:
step 1, randomly selecting 2 annular optotypes with different numbers of gaps;
step 2, projecting the 2 annular optotypes to the left eye and the right eye of the subject respectively;
step 3, determining the subject's observation result and the combined-image result of superimposing the 2 annular optotypes;
step 4, judging whether the observation result is consistent with the combined-image result;
if consistent, randomly selecting an updating manner to update the 2 annular optotypes, adjusting the distance between the 2 annular optotypes, and returning to step 2;
if not, entering step 5;
step 5, judging whether the observation result and the combined-image result have been inconsistent consecutively;
if not, randomly selecting an updating manner to update the 2 annular optotypes and returning to step 2;
if yes, judging that binocular fusion has failed.
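For concreteness, the steps above can be sketched in Python. This is a minimal illustration, not code from the patent: optotypes are modeled as frozensets of gap positions, the correct combined image is modeled as the union of the two gap sets, and `observe` is a hypothetical callback standing in for projecting the optotypes and collecting the subject's report. The starting separation and per-move step are arbitrary placeholder values.

```python
import random

def pick_two(pool, rng):
    """Step 1: randomly select 2 annular optotypes with different numbers of gaps."""
    left, right = rng.sample(pool, 2)
    while len(left) == len(right):
        left, right = rng.sample(pool, 2)
    return left, right

def judge_fusion_failure(observe, pool, max_consecutive_errors=2, step=1.0):
    """Sketch of steps 1-5; returns the optotype separation at which fusion fails."""
    rng = random.Random(0)                           # fixed seed for a reproducible sketch
    left, right = pick_two(pool, rng)
    separation = 10.0                                # arbitrary starting distance
    errors = 0
    while True:
        merged = left | right                        # step 3: correct combined-image result
        reported = observe(left, right, separation)  # steps 2-3: project and observe
        if reported == merged:                       # step 4: consistent
            errors = 0
            left, right = pick_two(pool, rng)        # random update of the optotypes
            separation -= step                       # adjust the distance, back to step 2
        else:                                        # step 5: count consecutive errors
            errors += 1
            if errors >= max_consecutive_errors:
                return separation                    # binocular fusion judged failed
            left, right = pick_two(pool, rng)        # update only, back to step 2
```

With an `observe` stub that reports correctly only while the separation stays above 5, the loop returns 5.0, the separation at which the simulated subject first errs twice in a row.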
Specifically, as shown in the schematic diagram of 4 annular optotypes in fig. 3, each annular optotype is a ring formed by a number of dots, with gaps arranged on the ring, and the number of gaps differs between annular optotypes; the structure of each annular optotype is therefore very simple and convenient to observe.
From these annular optotypes, 2 with different numbers of gaps are randomly selected, such as the annular optotypes Pic2 and Pic3 in fig. 3. The annular optotypes Pic2 and Pic3 are projected to the subject's left and right eyes to obtain the subject's observation result.
As shown in fig. 5, suppose the subject's initial observation result is consistent with the combined image Pic5 in fig. 5. The 2 annular optotypes are then updated and horizontally moved relative to each other to reduce the distance between them, and the subject's observation result and the combined-image result are determined again. These steps are repeated until the subject's observation result is inconsistent with the combined-image result.
Because the annular optotypes are randomly selected, and an updating manner is randomly selected to update them after each judgment before fusion failure is determined, the whole judgment process is highly random; this reduces the influence of the subject's subjective factors on the observation result and makes the detection result more accurate.
In this embodiment, preferably, step 1 includes a step of detecting the simultaneous-perception coincidence site, which is used to determine the initial positions of the 2 annular optotypes. This facilitates evaluating the binocular visual inhibition status of a strabismus patient, especially a strabismus patient with accompanying amblyopia, after strabismus surgery, and helps the doctor better choose the timing and scheme of amblyopia training or strabismus surgery.
Preferably, as shown in fig. 2, the detection of the simultaneous-perception coincidence site comprises the following steps:
s1, selecting 2 visual targets which have obvious difference and are easy to identify the geometric center to form an inspection image, and placing the large visual target and the small visual target at random positions near the center of the inspection image;
step S2, projecting the large visual target and the small visual target to the left eye and the right eye of the testee respectively;
step S3, moving the small sighting target until the testee observes that the center of the small sighting target is overlapped with the center of the large sighting target, and recording the relative offset direction, the horizontal offset and the vertical offset of the large sighting target and the small sighting target at the moment;
step S4, changing the random positions of the large visual target and the small visual target in the inspection image, and interchanging the eye type of the large visual target and the small visual target;
step S5, moving the small sighting target again until the testee observes that the center of the small sighting target is superposed with the center of the large sighting target, and recording the relative offset direction, the horizontal offset and the vertical offset of the large sighting target and the small sighting target at the moment;
and step S6, determining the simultaneous perception coincidence site of the two eyes through the relative offset direction, the horizontal offset and the vertical offset obtained twice.
Specifically, as shown in fig. 4, the optotype TestPic1 is the large optotype and the optotype TestPic2 is the small optotype. The large and small optotypes are projected to the subject's left and right eyes through the vision-separating system, and the optotype TestPic2 is moved until the subject observes an image as shown in TestPic3; at that moment the processing terminal records the relative offset direction, horizontal offset and vertical offset of the large and small optotypes. To detect the subject's simultaneous-perception coincidence site more accurately, the invention restores the large and small optotypes to their initial positions, changes the random positions of the 2 optotypes, projects them to the right eye and the left eye respectively through the vision-separating system, and repeats the process. The subject's simultaneous-perception coincidence site is then determined from the relative offset directions, horizontal offsets and vertical offsets between the large and small optotypes obtained in the two tests.
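The patent states only that the coincidence site is determined from the offsets obtained in the two tests; averaging the two recorded (horizontal, vertical) offset pairs is one plausible reading, sketched here under that assumption:

```python
def coincidence_site(first, second):
    """Estimate the simultaneous-perception coincidence site from the two
    (horizontal_offset, vertical_offset) pairs recorded in steps S3 and S5.
    Averaging is an assumed combination rule, not stated in the patent."""
    (h1, v1), (h2, v2) = first, second
    return ((h1 + h2) / 2.0, (v1 + v2) / 2.0)
```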
In this embodiment, preferably, the observation result and the combined-image result include a gap position and/or a gap number, and in step 4 whether the combined-image result and the observation result are consistent is judged from the gap position and/or the gap number.
Specifically, as shown in fig. 5, the combined image of the superimposed annular optotypes Pic2 and Pic3 is shown as optotype Pic5, with the gap positioned on the right. If the gap position in the observation result reported by the subject is also on the right, the observation result is consistent with the combined-image result; otherwise it is inconsistent. Since the gap position is easy to recognize, judging by gap position makes the combined image easy for the subject to observe and describe.
The principle of judging consistency between the observation result and the combined-image result by the number of gaps is the same as that of judging by gap position, and is not repeated here. Combining the number of gaps with the gap positions increases the number of judgment factors and the randomness of the judgment, making the judgment result more accurate.
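The step 4 comparison described above can be sketched as a small predicate; gap sets such as `{'right'}` are an illustrative representation, not from the patent:

```python
def results_consistent(reported_gaps, merged_gaps, by_position=True, by_count=True):
    """Step 4 comparison: judge consistency by gap positions and/or gap count.
    Gaps are given as sets of positions such as {'right'} or {'down', 'right'}."""
    ok = True
    if by_position:
        ok = ok and set(reported_gaps) == set(merged_gaps)
    if by_count:
        ok = ok and len(reported_gaps) == len(merged_gaps)
    return ok
```

Enabling both checks at once corresponds to the combined judgment the paragraph above recommends.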
In this embodiment, preferably, the updating manner further includes randomly rotating the 2 annular optotypes by M degrees, where M is an integer multiple of 90 degrees.
Specifically, the gap positions of the 2 annular optotypes may be changed by randomly rotating 1 or 2 of them by 0, 90, 180 or 270 degrees, thereby changing the gap positions of the combined image. If the annular optotype Pic3 in fig. 5 is rotated 90 degrees counterclockwise, the combined image of the 2 superimposed annular optotypes becomes that shown as optotype Pic6 in fig. 6, and the gap position changes from right to bottom.
As shown in fig. 7, when the annular optotype Pic3 in fig. 5 is rotated 90 degrees clockwise, the combined image of the annular optotypes Pic2 and Pic3 becomes that shown as optotype Pic7 in fig. 7; the number of gaps changes from 1 to 2, and the gap positions change to bottom and right. Consistency between the observation result and the combined-image result can therefore also be judged by the number of gaps, which is convenient for the subject to observe and express.
This improves the randomness of the detection process and reduces the influence of the subject's subjective factors. Moreover, rotating an annular optotype does not greatly change the form of the combined image, so situations in which the brain cannot combine the images are avoided.
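A gap-position rotation by a multiple of 90 degrees can be sketched as an index shift over the four compass positions; the position names are illustrative, not from the patent:

```python
CLOCKWISE = ['up', 'right', 'down', 'left']  # gap positions in clockwise order

def rotate_gaps(gaps, quarter_turns):
    """Rotate an annular optotype's gap positions clockwise by
    quarter_turns * 90 degrees; a negative value rotates counterclockwise."""
    return {CLOCKWISE[(CLOCKWISE.index(g) + quarter_turns) % 4] for g in gaps}
```

For example, a single right-side gap moves to the bottom after one clockwise quarter turn.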
A second embodiment is substantially the same as the first embodiment, the main difference being that the updating manner includes randomly selecting another annular optotype from the remaining annular optotypes to replace any 1 or 2 of the 2 annular optotypes.
Specifically, as shown in fig. 8, the annular optotype Pic4 selected from fig. 3 replaces the annular optotype Pic2 in fig. 5; the combined image of the annular optotypes Pic4 and Pic3 is shown as optotype Pic8 in fig. 8. The number and positions of the gaps change, producing different observation results and combined-image results.
A third embodiment is substantially the same as the first embodiment, the main difference being that the updating manner combines optotype replacement with optotype rotation.
Specifically, other annular optotypes are selected from the remaining annular optotypes to replace the previously used ones, and the 2 replacement annular optotypes are then rotated. This strengthens the randomness of the combined-image result, reduces the influence of subjective factors on the judgment result, and improves judgment accuracy.
Fig. 9 is a flowchart of the binocular fusion function detection method, which includes:
after the processing terminal acquires start information through the input terminal, randomly selecting 2 annular optotypes with different numbers of gaps to form a test image;
the processing terminal projecting the 2 annular optotypes in the test image to the left eye and the right eye of the subject respectively through the vision-separating system;
the processing terminal acquiring the subject's observation result through the input terminal to perform fusion failure judgment;
and the processing terminal, when fusion is judged to have failed, determining the fusion range of the two eyes from the relative displacement of the 2 annular optotypes in the test image.
Specifically, the testee can input the start information through the input terminal. After receiving it, the processing terminal randomly selects 2 differently numbered annular sighting marks from its memory to form a test image and sends the test image to the vision-separating system, which projects the 2 annular sighting marks to the testee's left and right eyes respectively so that the testee's brain can fuse them into a single image.
The testee inputs the observation result through the input terminal, and the processing terminal performs the binocular fusion-failure judgment according to the fusion-failure judgment method to identify the test image displayed at the moment the testee's binocular fusion fails. From that test image the processing terminal determines the relative displacement of the 2 annular sighting marks and, from the relative displacement, the testee's binocular fusion range.
The relative displacement is the horizontal displacement of the 2 annular sighting marks from their initial positions at the moment fusion fails; it can be determined by setting a fixed displacement for each movement of the sighting marks and multiplying it by the number of movements.
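To make this concrete, here is a minimal sketch; the per-step displacement `STEP_PX`, the function name, and the separate convergent/divergent move counts are illustrative assumptions rather than values given in this disclosure:

```python
# Sketch: derive the horizontal fusion range from the movement history of
# the annular sighting marks at the moment fusion fails. STEP_PX (pixels
# moved per step) is an assumed constant, not a value from the patent.

STEP_PX = 4  # displacement of each sighting-mark movement, in pixels (assumed)

def fusion_range(convergent_moves: int, divergent_moves: int) -> tuple:
    """Return (convergent, divergent) horizontal displacement in pixels,
    given the number of movement steps completed before fusion failed."""
    return (convergent_moves * STEP_PX, divergent_moves * STEP_PX)

# Example: fusion broke after 12 convergent and 5 divergent steps.
print(fusion_range(12, 5))  # (48, 20)
```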
Preferably, as shown in fig. 10, the processing terminal generates the test image as follows:
the processing terminal determines the testee's simultaneous-perception coincidence site;
randomly selects 2 annular sighting marks with different gaps;
and sets the initial positions of the 2 annular sighting marks according to the coincidence site to generate the test image.
Specifically, after determining the testee's simultaneous-perception coincidence site by the coincidence-site detection method, the processing terminal can set the initial positions of the 2 annular sighting marks in the test image according to that site, which weakens the dominant eye's suppression of the amblyopic eye and improves detection accuracy. Moreover, because the test image is formed from randomly selected annular sighting marks, the randomness of the image observed by the testee is increased and the influence of the testee's subjective factors on the detection is reduced.
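The generation steps above can be sketched as follows; the pool of 8 numbered ring sighting marks, the dictionary layout, and the `(x, y)` representation of the coincidence site are illustrative assumptions:

```python
import random

# Sketch of test-image generation: pick 2 different ring sighting marks
# and set their initial positions at the subject's simultaneous-perception
# coincidence site. The pool of 8 numbered rings and the dictionary layout
# are illustrative assumptions, not structures specified in the patent.

OPTOTYPE_POOL = list(range(8))  # ring sighting marks with distinct gap layouts

def make_test_image(coincidence_site):
    """coincidence_site: (x, y) offset where the two eyes' percepts align."""
    left_id, right_id = random.sample(OPTOTYPE_POOL, 2)  # always 2 distinct rings
    # Starting both rings at the coincidence site weakens the dominant
    # eye's suppression of the weaker eye at the start of the test.
    return {"left":  {"id": left_id,  "pos": coincidence_site},
            "right": {"id": right_id, "pos": coincidence_site}}
```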
As shown in the system block diagram of fig. 11, the binocular fusion function detection system comprises:
an input terminal for inputting start information;
a processing terminal for generating a test image according to the start information and performing the fusion-failure judgment, and for determining the testee's fusion range from the relative displacement of the 2 annular sighting marks in the test image when fusion is judged to have failed;
and a vision-separating system for projecting the 2 annular sighting marks in the test image to the left and right eyes of the testee, respectively.
Specifically, the control instruction that starts fusion function detection can be entered through the input terminal, which may be a common input device such as a keyboard or a touch screen. After acquiring the start information sent by the input terminal, the processing terminal generates test images and performs the fusion-failure judgment until the testee's fusion fails, at which point it determines the testee's fusion range from the corresponding test image. The processing terminal may be an intelligent terminal, such as a PC host or a mobile phone, running an application that implements the fusion-failure judgment method described above.
Preferably, the processing terminal generating a test image according to the start information and performing the fusion-failure judgment includes:
step 1, randomly selecting 2 annular sighting marks with different gaps to form a test image;
step 2, projecting the 2 annular sighting marks to the left and right eyes of the testee, respectively;
step 3, determining the testee's observation result and the fused-image result obtained by superimposing the 2 annular sighting marks;
step 4, judging whether the observation result is consistent with the fused-image result;
if consistent, randomly selecting an updating mode to update the 2 annular sighting marks, adjusting the distance between them, and returning to step 2;
if not, entering step 5;
step 5, judging whether the observation result has been inconsistent with the fused-image result over consecutive trials;
if not, randomly selecting an updating mode to update the 2 annular sighting marks and returning to step 2;
if so, judging that binocular fusion has failed.
Specifically, if the testee fuses the images successfully, the observation result entered through the input terminal should match the fused-image result obtained by actually superimposing the 2 annular sighting marks. Comparing the observation result against the fused-image result therefore reveals whether the testee's fusion has failed.
Because errors such as mis-entered observation results may occur during the test, fusion failure is only concluded after repeated judgments show the testee's observation results to be inconsistent with the fused-image results over consecutive trials; this reduces the influence of such errors and improves the accuracy of the judgment.
Moreover, updating the 2 sighting marks with a randomly selected updating mode and adjusting the distance between them increases the randomness of the fused-image result, weakens the influence of subjective factors on the judgment result, and further improves its accuracy.
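A hedged sketch of this judgment loop follows; the consecutive-mismatch threshold `N_CONSECUTIVE`, the trial cap, and the four helper callables are assumptions introduced for illustration, not interfaces defined in this disclosure:

```python
# Sketch of the fusion-failure judgment loop (steps 1-5). N_CONSECUTIVE,
# max_trials and the helper callables (project, get_observation,
# expected_fusion, update_targets) are illustrative assumptions.

N_CONSECUTIVE = 3  # consecutive mismatches treated as "failure" (assumed)

def judge_fusion_failure(project, get_observation, expected_fusion,
                         update_targets, max_trials=50):
    mismatches = 0
    for _ in range(max_trials):
        project()                           # step 2: dichoptic projection
        observed = get_observation()        # step 3: subject's report
        expected = expected_fusion()        # step 3: true superposition
        if observed == expected:            # step 4: consistent -> keep testing
            mismatches = 0
            update_targets(adjust_distance=True)
        else:                               # step 5: count consecutive misses
            mismatches += 1
            if mismatches >= N_CONSECUTIVE:
                return True                 # binocular fusion judged failed
            update_targets(adjust_distance=False)
    return False
```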
Preferably, the updating mode comprises sighting-mark replacement, sighting-mark rotation, or both.
Sighting-mark replacement: the processing terminal randomly selects other annular sighting marks to replace annular sighting marks in the test image;
sighting-mark rotation: the processing terminal randomly rotates the 2 annular sighting marks in the test image by M degrees;
replacement and rotation: the processing terminal randomly selects other annular sighting marks to replace annular sighting marks in the test image, and then randomly rotates the 2 annular sighting marks in the updated test image by M degrees.
Updating the 2 annular sighting marks in the test image increases the randomness of the fused-image result used in subsequent fusion-failure judgments, weakens the influence of subjective factors on the judgment result, and further improves its accuracy.
Specifically, drawing replacement sighting marks only from the remaining annular sighting marks prevents the same sighting mark from being selected again, and randomly rotating the annular sighting marks further increases the randomness of the detection.
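The updating modes can be sketched as follows; the 90-degree rotation step follows the stated constraint that M is an integer multiple of 90, while the pool of 8 numbered rings and the dictionary layout are illustrative assumptions:

```python
import random

# Sketch of the three updating modes (replace / rotate / both). That M is
# an integer multiple of 90 degrees follows the claims; the pool of 8
# numbered rings and the dict layout are illustrative assumptions.

POOL = set(range(8))

def update_pair(pair):
    """pair: list of two dicts like {"id": int, "angle": int}."""
    mode = random.choice(["replace", "rotate", "both"])
    if mode in ("replace", "both"):
        # Draw replacements only from the *remaining* rings so the same
        # sighting mark is never re-selected; replace either 1 or both.
        unused = sorted(POOL - {t["id"] for t in pair})
        k = random.choice([1, 2])
        for target, new_id in zip(random.sample(pair, k),
                                  random.sample(unused, k)):
            target["id"] = new_id
    if mode in ("rotate", "both"):
        for target in pair:
            target["angle"] = (target["angle"] + random.choice([90, 180, 270])) % 360
    return pair
```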
Preferably, the input terminal is further configured to input simultaneous-perception detection information;
the processing terminal is further configured to generate, from the simultaneous-perception detection information, an inspection image for detecting the simultaneous-perception coincidence site;
and the vision-separating system is further configured to project the 2 detection sighting marks in the inspection image to the left and right eyes of the testee, respectively.
Specifically, the processing terminal can also detect the simultaneous-perception coincidence site from the simultaneous-perception detection information entered through the input terminal, so that when generating a test image it can set the initial positions of the 2 annular sighting marks accordingly; this weakens the dominant eye's suppression of the amblyopic eye and improves detection accuracy.
Preferably, the processing terminal generating an inspection image for simultaneous-perception coincidence-site detection includes:
step A1, the processing terminal selects 2 sighting marks that differ obviously in size and whose geometric centers are easy to identify to form an inspection image, placing the large sighting mark and the small sighting mark at random positions near the center of the inspection image;
step A2, projecting the large sighting mark and the small sighting mark to the left and right eyes of the testee, respectively, through the vision-separating system;
step A3, moving the small sighting mark until the testee observes that its center coincides with the center of the large sighting mark, and recording the relative offset direction, horizontal offset, and vertical offset of the two sighting marks at that moment;
step A4, changing the random positions of the large and small sighting marks in the inspection image and swapping which eye sees each of them;
step A5, moving the small sighting mark again until the testee observes that its center coincides with the center of the large sighting mark, and again recording the relative offset direction, horizontal offset, and vertical offset;
and step A6, determining the binocular simultaneous-perception coincidence site from the relative offset directions, horizontal offsets, and vertical offsets recorded in the two passes.
In particular, simultaneous-perception coincidence-site detection helps evaluate binocular suppression in strabismus patients, especially those with concomitant amblyopia, and helps doctors better choose the timing and scheme of amblyopia training or strabismus surgery.
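Step A6 leaves the combination rule open; the sketch below simply averages the two recorded offsets, which is an assumption for illustration (the sign convention would also have to account for the eye swap in step A4):

```python
# Sketch of step A6: combining the two recorded offsets into a single
# simultaneous-perception coincidence site. Averaging the two passes is an
# assumption for illustration -- the patent states only that the site is
# determined from the offsets recorded in the two passes.

def coincidence_site(offset_pass1, offset_pass2):
    """Each offset is (horizontal, vertical), in a common sign convention,
    of the small sighting mark relative to the large one when the testee
    reports that their centers coincide."""
    (h1, v1), (h2, v2) = offset_pass1, offset_pass2
    return ((h1 + h2) / 2.0, (v1 + v2) / 2.0)
```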
In this embodiment, the vision-separating system is preferably provided with VR glasses, a synoptophore, a display with red-blue glasses, or a display with polarized glasses.
When the vision-separating system uses VR glasses, the processing terminal sends the generated test image to the VR glasses, which display the 2 annular sighting marks on the lenses corresponding to the testee's left and right eyes, respectively, to achieve the vision-separating effect.
When the vision-separating system uses a synoptophore, the processing terminal sends the test image to the synoptophore, which projects the 2 annular sighting marks to the testee's left and right eyes, respectively, using its conventional vision-separating principle.
When the vision-separating system uses a display and red-blue glasses, the processing terminal sends the test image to the display, which renders the 2 annular sighting marks in different colors so that, through the red-blue glasses, each of the testee's eyes observes only the sighting mark of the corresponding color.
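The red-blue channel separation can be sketched with a small compositing routine; the boolean-mask representation of each ring and the pure red/blue color choice are illustrative assumptions:

```python
import numpy as np

# Sketch of the red/blue channel separation used with anaglyph glasses:
# the left-eye ring is drawn only into the red channel and the right-eye
# ring only into the blue channel, so each eye sees its own sighting mark
# through the matching filter. The mask representation is an assumption.

def compose_anaglyph(left_mask: np.ndarray, right_mask: np.ndarray) -> np.ndarray:
    """left_mask/right_mask: HxW boolean arrays marking each ring's pixels."""
    h, w = left_mask.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = np.where(left_mask, 255, 0)   # red channel -> left eye
    rgb[..., 2] = np.where(right_mask, 255, 0)  # blue channel -> right eye
    return rgb
```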
Finally, it should be noted that the above description covers only preferred embodiments of the present invention, and that those skilled in the art can make various similar variations without departing from the spirit and scope of the present invention.

Claims (14)

1. A method for determining binocular fusion failure, comprising:
step 1, randomly selecting 2 annular sighting marks with different gaps;
step 2, projecting the 2 annular sighting marks to the left and right eyes of the testee, respectively;
step 3, determining the testee's observation result and the fused-image result obtained by superimposing the 2 annular sighting marks;
step 4, judging whether the observation result is consistent with the fused-image result;
if consistent, randomly selecting an updating mode to update the 2 annular sighting marks, adjusting the distance between them, and returning to step 2;
if not, entering step 5;
step 5, judging whether the observation result has been inconsistent with the fused-image result over consecutive trials;
if not, randomly selecting an updating mode to update the 2 annular sighting marks and returning to step 2;
if so, judging that binocular fusion has failed.
2. The method for determining binocular fusion failure according to claim 1, wherein step 1 includes a step of detecting a simultaneous-perception coincidence site, and the simultaneous-perception coincidence site is used to determine the initial positions of the 2 annular sighting marks.
3. The method for determining binocular fusion failure according to claim 2, wherein the step of detecting the simultaneous-perception coincidence site comprises:
step S1, selecting 2 sighting marks that differ obviously in size and whose geometric centers are easy to identify to form an inspection image, and placing the large sighting mark and the small sighting mark at random positions near the center of the inspection image;
step S2, projecting the large sighting mark and the small sighting mark to the left and right eyes of the testee, respectively;
step S3, moving the small sighting mark until the testee observes that its center coincides with the center of the large sighting mark, and recording the relative offset direction, horizontal offset, and vertical offset of the two sighting marks at that moment;
step S4, changing the random positions of the large and small sighting marks in the inspection image and swapping which eye sees each of them;
step S5, moving the small sighting mark again until the testee observes that its center coincides with the center of the large sighting mark, and again recording the relative offset direction, horizontal offset, and vertical offset;
and step S6, determining the binocular simultaneous-perception coincidence site from the relative offset directions, horizontal offsets, and vertical offsets recorded in the two passes.
4. The method for determining binocular fusion failure according to claim 1, wherein the observation result and the fused-image result include a gap position and/or a gap number, and in step 4 the consistency of the fused-image result with the observation result is judged according to the gap position and/or the gap number.
5. The method for determining binocular fusion failure according to claim 1, wherein the updating mode comprises randomly rotating the 2 annular sighting marks by M degrees, where M is an integer multiple of 90.
6. The method for determining binocular fusion failure according to claim 1 or 5, wherein the updating mode comprises randomly selecting another annular sighting mark from the remaining annular sighting marks to replace either 1 or both of the 2 annular sighting marks.
7. A binocular fusion function detection method, comprising:
after acquiring start information through the input terminal, the processing terminal randomly selects 2 annular sighting marks with different gaps to form a test image;
the processing terminal projects the 2 annular sighting marks in the test image to the left and right eyes of the testee, respectively, through the vision-separating system;
the processing terminal acquires the testee's observation result through the input terminal and performs the fusion-failure judgment;
and when fusion is judged to have failed, the processing terminal determines the binocular fusion range from the relative displacement of the 2 annular sighting marks in the test image.
8. The binocular fusion function detection method according to claim 7, wherein the processing terminal generates the test image as follows:
the processing terminal determines the testee's simultaneous-perception coincidence site;
randomly selects 2 annular sighting marks with different gaps;
and sets the initial positions of the 2 annular sighting marks according to the coincidence site to generate the test image.
9. A binocular fusion function detection system, comprising:
an input terminal for inputting start information;
a processing terminal for generating a test image according to the start information and performing the fusion-failure judgment, and for determining the testee's fusion range from the relative displacement of the 2 annular sighting marks in the test image when fusion is judged to have failed;
and a vision-separating system for projecting the 2 annular sighting marks in the test image to the left and right eyes of the testee, respectively.
10. The binocular fusion function detection system according to claim 9, wherein the processing terminal generating a test image according to the start information and performing the fusion-failure judgment includes:
step 1, randomly selecting 2 annular sighting marks with different gaps to form a test image;
step 2, projecting the 2 annular sighting marks to the left and right eyes of the testee, respectively;
step 3, determining the testee's observation result and the fused-image result obtained by superimposing the 2 annular sighting marks;
step 4, judging whether the observation result is consistent with the fused-image result;
if consistent, randomly selecting an updating mode to update the 2 annular sighting marks, adjusting the distance between them, and returning to step 2;
if not, entering step 5;
step 5, judging whether the observation result has been inconsistent with the fused-image result over consecutive trials;
if not, randomly selecting an updating mode to update the 2 annular sighting marks and returning to step 2;
if so, judging that binocular fusion has failed.
11. The binocular fusion function detection system according to claim 10, wherein the updating mode comprises sighting-mark replacement, sighting-mark rotation, or both;
sighting-mark replacement: the processing terminal randomly selects other annular sighting marks to replace annular sighting marks in the test image;
sighting-mark rotation: the processing terminal randomly rotates the 2 annular sighting marks in the test image by M degrees;
replacement and rotation: the processing terminal randomly selects other annular sighting marks to replace annular sighting marks in the test image, and then randomly rotates the 2 annular sighting marks in the updated test image by M degrees.
12. The binocular fusion function detection system according to any one of claims 9 to 11, wherein:
the input terminal is further configured to input simultaneous-perception detection information;
the processing terminal is further configured to generate, from the simultaneous-perception detection information, an inspection image for detecting the simultaneous-perception coincidence site;
and the vision-separating system is further configured to project the 2 detection sighting marks in the inspection image to the left and right eyes of the testee, respectively.
13. The binocular fusion function detection system according to claim 12, wherein the processing terminal generating an inspection image for simultaneous-perception coincidence-site detection includes:
step A1, the processing terminal selects 2 sighting marks that differ obviously in size and whose geometric centers are easy to identify to form an inspection image, placing the large sighting mark and the small sighting mark at random positions near the center of the inspection image;
step A2, projecting the large sighting mark and the small sighting mark to the left and right eyes of the testee, respectively, through the vision-separating system;
step A3, moving the small sighting mark until the testee observes that its center coincides with the center of the large sighting mark, and recording the relative offset direction, horizontal offset, and vertical offset of the two sighting marks at that moment;
step A4, changing the random positions of the large and small sighting marks in the inspection image and swapping which eye sees each of them;
step A5, moving the small sighting mark again until the testee observes that its center coincides with the center of the large sighting mark, and again recording the relative offset direction, horizontal offset, and vertical offset;
and step A6, determining the binocular simultaneous-perception coincidence site from the relative offset directions, horizontal offsets, and vertical offsets recorded in the two passes.
14. The binocular fusion function detection system according to claim 9, wherein the vision-separating system is provided with a display and polarized vision-separating glasses, or a display and color-filter vision-separating glasses, or VR glasses, or a synoptophore.
CN201911013107.9A 2019-10-23 2019-10-23 Binocular fusion failure judgment method, fusion function detection method and system Active CN110604540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911013107.9A CN110604540B (en) 2019-10-23 2019-10-23 Binocular fusion failure judgment method, fusion function detection method and system


Publications (2)

Publication Number Publication Date
CN110604540A true CN110604540A (en) 2019-12-24
CN110604540B CN110604540B (en) 2021-08-24

Family

ID=68895025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911013107.9A Active CN110604540B (en) 2019-10-23 2019-10-23 Binocular fusion failure judgment method, fusion function detection method and system

Country Status (1)

Country Link
CN (1) CN110604540B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110916991A (en) * 2019-10-22 2020-03-27 温州医科大学附属眼视光医院 Personalized visual training method and training device


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100998492A (en) * 2006-12-31 2007-07-18 温州医学院眼视光研究院 Detector for detecting binocular vision unequal images
CN101530316A (en) * 2009-03-06 2009-09-16 南昌航空大学 Objective quantitative measurement method of monocular diplopia
JP2012044308A (en) * 2010-08-16 2012-03-01 Sharp Corp Three dimensional image output device and three dimensional image display device
US20120249951A1 (en) * 2011-03-31 2012-10-04 Nidek Co., Ltd. Optotype presenting apparatus
CN102727171A (en) * 2011-03-31 2012-10-17 尼德克株式会社 Optotype presenting apparatus
CN102813500A (en) * 2012-08-07 2012-12-12 北京嘉铖视欣数字医疗技术有限公司 Perception correcting and training system on basis of binocular integration
CN202776260U (en) * 2012-08-07 2013-03-13 北京嘉铖视欣数字医疗技术有限公司 Sentience fusion correction and training system based on both eyes
CN203042874U (en) * 2013-01-08 2013-07-10 封利霞 Human eye stereopsis perceptual learning training instrument
CN105455774A (en) * 2015-11-17 2016-04-06 中山大学中山眼科中心 Psychophysical measurement method for controlling lower aniseikonia on basis of interocular contrast ratio
CN105796050A (en) * 2016-05-28 2016-07-27 陈登民 Centesimal visual acuity chart
CN109966130A (en) * 2019-05-13 2019-07-05 广州视景医疗软件有限公司 The method and its system of a kind of eyes comparison degree of balance measurement during visual performance is trained


Also Published As

Publication number Publication date
CN110604540B (en) 2021-08-24

Similar Documents

Publication Publication Date Title
CN110604541B (en) Binocular balance detection system and detection method thereof
CA2914456C (en) System and method for measurement of refractive error of an eye based on subjective distance metering
CN107847122B (en) Vision field measuring method, vision field measuring device, and vision examination optotype
KR101966164B1 (en) System and method for ophthalmolgic test using virtual reality
EP2561799B1 (en) Visual function testing device
EP2209414B1 (en) A method for performing visual acuity testing
IL266461B2 (en) System for integrally measuring clinical parameters of visual function
CN105916432B (en) The method shown for optotype
CN111248851B (en) Visual function self-testing method
US9492075B2 (en) Prism prescription value acquisition system, acquisition method, acquisition apparatus and program for correcting fixation disparity
US8752964B1 (en) System for clinical examination of visual functions using lenticular optics or programmable displays
CA3126955C (en) System and method for digital measurement of stereo vision
CN110604540B (en) Binocular fusion failure judgment method, fusion function detection method and system
Allen et al. Ventral extra-striate cortical areas are required for optimal orientation averaging
CN110604542B (en) Visual suppression determination method and system
JP2014121643A (en) Ophthalmologic apparatus
US20170332947A1 (en) System and methods for diplopia assessment
Giersch et al. A two-stage account of computing and binding occluded and visible contours: Evidence from visual agnosia and effects of lorazepam
US9131838B1 (en) System for clinical examination of visual functions using lenticular optics or programmable displays
JPH11155813A (en) Inspecting device for visual function
CN116725473B (en) Dynamic stereoscopic vision detection device and method
CN109700424B (en) Multi-person simultaneous vision detection and monitoring management method and system
AU744773C (en) Stereo optic disc analyser

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210709

Address after: Room 214-4-1, 115 Long'an Avenue, Tuzhu Town, Shapingba District, Chongqing 400000

Applicant after: CHONGQING NENGNENG TECHNOLOGY Co.,Ltd.

Address before: 400000 5-3, innovation productivity service building, Xiyong science and Technology Third Road, Shapingba District, Chongqing

Applicant before: CHONGQING KANGCUI PHARMACEUTICAL TECHNOLOGY Co.,Ltd.

GR01 Patent grant