CN110897607A - Visual perception distortion data acquisition system and use method thereof - Google Patents
- Publication number
- CN110897607A (application number CN201910871636.6A)
- Authority
- CN
- China
- Prior art keywords
- pattern
- user
- eye
- display module
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/02—Subjective types, i.e. testing apparatus requiring the active assistance of the patient
- A61B3/08—Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing binocular or stereoscopic vision, e.g. strabismus
Landscapes
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Ophthalmology & Optometry (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a visual perception distortion data acquisition system and a method of using it. The system comprises a first display module for displaying a background pattern and a plurality of first patterns arranged in a circumferential array; a second display module for displaying a second pattern that moves as the user drags it; and a coordinate acquisition module for acquiring the coordinates of a designated point of the second pattern in response to a confirmation command input by the user. All patterns displayed by the first display module are seen by one eye through a vision-separating device, and all patterns displayed by the second display module are seen by the other eye. The invention addresses the high demands on the examiner's expertise, long measurement time, complex procedure and large error of existing visual perception distortion data acquisition; it supports adaptive parameter adjustment according to the subject's condition, is simple, convenient, fast, accurate and repeatable, and is suitable for large-scale populations.
Description
Technical Field
The invention relates to the technical field of perceptual eye position data acquisition, and in particular to a visual perception distortion data acquisition system and a method of using it.
Background
Visual perception distortion is a visual perception function that reflects the deviation of eye position and the balance of eyeball movement under binocular viewing conditions, that is, the brain centre's ability to control the position and movement of the eyeballs during binocular vision.
Binocular vision is one of the most advanced perceptual capabilities to emerge in the course of animal evolution from lower to higher forms. When an external object is viewed, the retina of each eye forms a separate visual image; the nerve impulses produced by the two images are transmitted to the visual cortex, where the higher central nervous system analyses the signals from the two eyes and integrates them into a single complete visual image. This process is called binocular vision.
Eye position deviation is a defect of binocular vision. In clinical ophthalmology, eye position refers to the position of the eyeballs; normally the eyes should remain properly aligned whether or not both eyes are viewing simultaneously. If the eye position deviates even during simultaneous binocular viewing, the deviation is called manifest eye position deviation. If the eyes are aligned during simultaneous binocular viewing but deviate when fusion is broken (for example, when one eye is covered), the deviation is called latent eye position deviation.
The standard clinical method for examining eye position deviation is currently the prism and alternate cover test: one eye is covered alternately so that the deviation of the uncovered eye is observed under monocular viewing, and the final result is obtained after several alternations. The method places high demands on the examiner's expertise, takes a long time, involves a complex procedure and has a large error.
Disclosure of Invention
The present invention is directed to overcoming at least one of the above disadvantages of the prior art by providing a visual perception distortion data acquisition system and a method of using it, which can collect a user's visual perception distortion data across the two dimensions of time and space.
The technical scheme adopted by the invention is as follows:
a perceptual-distortion data acquisition system comprising:
the first display module is used for displaying a background pattern and a plurality of first patterns, and the first patterns are in a circumferential array;
the second display module is used for displaying a second pattern, and the second pattern can move along with the dragging of a user;
the coordinate acquisition module is used for acquiring the coordinates of the appointed point of the second pattern according to a confirmation command input by a user;
all patterns displayed by the first display module can be seen by one eye through the visual separation device, and all patterns displayed by the second display module can be seen by the other eye through the visual separation device.
Through the vision-separating device, the user sees the first patterns and the background pattern displayed on the first display module with one eye and the second pattern displayed on the second display module with the other eye. Under this vision-separated binocular condition, the user drags the second pattern once for each first pattern so that the designated point of the dragged second pattern coincides with the designated point of that first pattern, and inputs a confirmation command each time the points appear to coincide. The coordinate acquisition module acquires the coordinates of the designated point of the second pattern on each confirmation, so that the coordinates are obtained over a number of trials. From the displacement deviation and/or angle deviation between each acquired coordinate and the designated-point coordinate of the corresponding first pattern, the degree and/or direction of the user's perceptual eye position offset in time and space can be quantified.
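As a minimal sketch of how such a per-trial deviation could be computed (assuming the acquired and reference designated points are plain (x, y) pairs in screen units; the function name and the screen-centre convention used for the angle are illustrative, not taken from the patent):

```python
import math

def per_trial_deviation(acquired, reference, center=(0.0, 0.0)):
    """Displacement and angle deviation for one drag-and-confirm trial.

    acquired  -- (x, y) of the second pattern's designated point when confirmed
    reference -- (x, y) of the first pattern's designated point
    center    -- screen/fixation centre from which angular direction is measured
    """
    dx = acquired[0] - reference[0]
    dy = acquired[1] - reference[1]
    displacement = math.hypot(dx, dy)  # positional offset in screen units

    # Angle deviation: difference between the directions of the acquired and
    # reference points as seen from the centre, wrapped into [-180, 180) degrees.
    ang_acq = math.degrees(math.atan2(acquired[1] - center[1], acquired[0] - center[0]))
    ang_ref = math.degrees(math.atan2(reference[1] - center[1], reference[0] - center[0]))
    angle_dev = (ang_acq - ang_ref + 180.0) % 360.0 - 180.0
    return displacement, angle_dev

# Example: the user confirmed at (103, -2) while the first pattern's designated point was (100, 0).
print(per_trial_deviation((103.0, -2.0), (100.0, 0.0)))
```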
Further, the first display module is specifically configured to initially display one first pattern, and after the coordinate acquisition module acquires the coordinates of the second pattern once, display the next first pattern, so that the coordinate acquisition module acquires the coordinates of the second pattern next time until all the first patterns are displayed.
Further, the plurality of first patterns are in n circumferential arrays, and n is an integer greater than 1.
Further, the first display module is specifically configured to initially display one circumferential array of the first patterns and, after the coordinate acquisition module has acquired the coordinates of the second pattern for that array, to display the next circumferential array of first patterns so that the coordinate acquisition module can acquire the coordinates of the second pattern again, until all circumferential arrays of the first patterns have been displayed.
Further, the perceptual distortion data acquisition system further includes:
and the track generation module is used for connecting lines according to the coordinates acquired by the coordinate acquisition module to generate a distorted track map, connecting lines according to the coordinates of the appointed points of the second pattern displayed by the first display module to generate a reference track map, and displaying the distorted track map and the reference track map.
According to the distorted track diagram and the reference track diagram, the visual perception distortion degree and the deviation direction of the user can be conveniently and intuitively seen.
Further, the perceptual distortion data acquisition system further includes:
and the vector generating module is used for calculating a deviation vector according to the coordinates acquired by the coordinate acquiring module and the coordinates of the appointed point of the second pattern displayed by the first display module, generating a deviation vector diagram according to the deviation vector and displaying the deviation vector diagram.
According to the deviation vector diagram, the visual perception distortion degree and the deviation direction of the user can be conveniently and intuitively seen.
Further, the perceptual distortion data acquisition system further includes:
the display control module is used for controlling the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern displayed by the first display module according to a display control command input by a user, and/or controlling the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern displayed by the first display module, and/or controlling the binocular disparity parameter and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the background pattern displayed by the third display module.
When the user reports that the first pattern and/or the second pattern cannot be seen, the display parameters can be adjusted through the display control module until the user reports that the pattern can be seen, so that acquisition of the user's visual perception distortion data can continue under conditions in which the user actually sees the first pattern and/or the second pattern. By adjusting the display parameters and repeating the acquisition several times, the collected visual perception distortion data more accurately reflect the brain centre's ability to control eyeball position and to analyse binocular visual signals.
Further, the perceptual distortion data acquisition system further includes:
and the visual separation control module is used for controlling the first pattern to be seen by the appointed eye through the visual separation device and the second pattern to be seen by the other eye through the visual separation device or controlling the second pattern to be seen by the appointed eye through the visual separation device and the first pattern to be seen by the other eye through the visual separation device according to the appointed eye input by the user.
When the user's two eyes are unbalanced, the weaker eye should view the pattern displayed by the first display module and the dominant eye the pattern displayed by the second display module; the vision-separation control module can then be used to direct the first pattern to the user's weaker eye and the second pattern to the user's dominant eye through the vision-separating device. When the user's two eyes are balanced, after data have been acquired with the first eye viewing the first pattern and the second eye viewing the second pattern, the eye assignment can be swapped through the vision-separation control module, so that acquisition can continue with the second eye viewing the first pattern and the first eye viewing the second pattern.
A method of using a system for collecting perceptual distortion data as described above, comprising a pattern dragging step;
the pattern dragging step includes:
the user wears the vision separating device, so that one eye of the user sees the pattern displayed by the first display module through the vision separating device, and the other eye of the user sees the pattern displayed by the second display module through the vision separating device;
after the user wears the visual device, the user aims at a specified point of the first pattern, drags the second pattern, so that the user sees the specified point of the second pattern to be overlapped with the specified point of the first pattern, and inputs a confirmation command.
After putting on the vision-separating device, the user sees the first pattern and the background pattern displayed by the first display module with one eye and the second pattern displayed by the second display module with the other eye. Under this vision-separated binocular condition, the user drags the second pattern toward the designated point of one first pattern until the user considers that the designated point of the second pattern coincides with it, and then inputs a confirmation command. On receiving the confirmation command, the coordinate acquisition module acquires the coordinates of the designated point of the second pattern at that moment; the degree and/or direction of the user's perceptual eye position offset can then be obtained from the displacement deviation and/or angle deviation between the acquired coordinates and the designated-point coordinates of the corresponding first pattern.
Further, the use method of the visual perception distortion data acquisition system further comprises the display changing step of:
the display changing step includes: and inputting a display control command, controlling the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern and/or the second pattern to change, and/or controlling the binocular disparity parameter and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the background pattern to change, and executing the pattern dragging step again.
When the user reports that the first pattern and/or the second pattern cannot be seen, the display parameters can be adjusted through the display control module until the user reports that the pattern can be seen, so that acquisition of the user's visual perception distortion data can continue. By adjusting the display parameters several times and repeating the pattern dragging step several times, the collected visual perception distortion data more accurately reflect the brain centre's ability to control eyeball position and to analyse binocular visual signals.
Compared with the prior art, the invention has the beneficial effects that:
(1) the first display module and the second display module allow the user to align and drag patterns under vision-separated binocular conditions, the background pattern helps the user keep the first pattern visible, and the coordinate positioning method quantifies the user's visual perception distortion state; the acquisition of the user's visual perception distortion data can therefore be completed across the two dimensions of time and space, and the data reflect the visual perception distortion under binocular viewing, thereby probing the brain centre's ability to control eyeball movement and to analyse binocular visual signals;
(2) for amblyopia patients, secondary classification can be performed on the basis of the acquired visual perception distortion data, and, combined with clinical indicators such as visual acuity, this can guide the formulation and evaluation of more targeted, personalised treatment plans;
(3) for strabismus patients, the acquired visual perception distortion data can assist the doctor's diagnosis and the timing of surgery from the level of the brain centres;
(4) visual perception distortion data can be recorded by computer, the display parameters can be adapted to the subject's condition, and the accumulated data facilitate later iteration and optimisation of the system and method through big-data mining and analysis;
(5) the method places low demands on both the examiner and the subject, is simple, convenient, fast and accurate, requires no large equipment, is environmentally friendly, economical and reusable, and is suitable for large-scale populations.
Drawings
Fig. 1 is a block diagram of a perceptual distortion data acquisition system according to an embodiment of the present invention.
Fig. 2 is a diagram illustrating an effect of a perceptual distortion data acquisition system according to an embodiment of the present invention.
Fig. 3 is a diagram illustrating an effect of a perceptual distortion data acquisition system according to another embodiment of the present invention.
Fig. 4 is a block diagram of a perceptual distortion data acquisition system in accordance with an embodiment of the present invention.
Fig. 5 is a block diagram of a perceptual distortion data acquisition system in accordance with an embodiment of the present invention.
FIG. 6 is a diagram illustrating perceptual distortion data collection results, in accordance with an embodiment of the present invention.
Fig. 7 is a flow chart of a method for using a perceptual distortion data acquisition system in accordance with an embodiment of the present invention.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
In one embodiment, as shown in fig. 1, there is provided a perceptual-distortion data acquisition system comprising:
a first display module 10, configured to display a background pattern and a plurality of first patterns, where the plurality of first patterns are in a circumferential array;
a second display module 20 for displaying a second pattern, wherein the second pattern can move along with the dragging of the user;
a coordinate obtaining module 30, configured to obtain coordinates of a designated point of the second pattern according to a confirmation command input by a user;
all the patterns displayed by the first display module 10 can be seen by one eye through the viewing device, and all the patterns displayed by the second display module 20 can be seen by the other eye through the viewing device.
Through the vision-separating device, the user sees the first patterns and the background pattern displayed on the first display module 10 with one eye and the second pattern displayed on the second display module 20 with the other eye. Under this vision-separated binocular condition, the user drags the second pattern once for each first pattern so that the designated point of the dragged second pattern coincides with the designated point of that first pattern, and inputs a confirmation command each time the points appear to coincide. The coordinate acquisition module 30 acquires the coordinates of the designated point of the second pattern on each confirmation, so that the coordinates are obtained over a number of trials; from the displacement deviation and/or angle deviation between each acquired coordinate and the designated-point coordinate of the corresponding first pattern, the degree and/or direction of the user's perceptual eye position offset can be obtained.
Under the vision-separated binocular condition, the user drags the second pattern toward the first patterns of the circumferential array, that is, toward first patterns located in different directions, so that the designated point of the second pattern coincides in turn with the designated point of each of them. For a first pattern located, for example, to the right of the eyes the drag may achieve nearly perfect coincidence, while for a first pattern located to the left it may show a much larger deviation; the acquired designated-point coordinates of the second pattern therefore reflect the degree and/or direction of the user's perceptual eye position offset in space.
The user also drags the second pattern for the plurality of first patterns over a period of time: the 1st drag aligns the second pattern with the designated point of the 1st first pattern and the last drag with that of the last first pattern, which takes a certain amount of time. The 1st drag may achieve nearly perfect coincidence while the last drag may show a much larger deviation, so the acquired designated-point coordinates of the second pattern also reflect the degree and/or direction of the user's perceptual eye position offset in time.
In this embodiment, the degree of perceptual eye position offset is quantified in the form of positioning coordinates. By having the user drag the second pattern multiple times and acquiring the centre coordinates of the second pattern each time, and by integrating the displacement deviations and/or angle deviations computed over these trials, the user's visual perception distortion data in time and space can be acquired more accurately.
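The per-trial deviations can then be summarised spatially (by the direction of each first pattern) and temporally (by trial order). The sketch below shows one possible aggregation; the dictionary keys and the use of a plain mean are illustrative assumptions, not prescribed by the patent:

```python
import statistics

def summarize_deviations(trials):
    """Aggregate per-trial deviations spatially (by direction) and temporally (by order).

    trials -- list of dicts such as
              {"direction_deg": 0, "order": 0, "displacement": 3.6, "angle_dev": -1.1}
    """
    by_direction = {}
    for t in trials:
        by_direction.setdefault(t["direction_deg"], []).append(t["displacement"])

    spatial = {d: statistics.mean(v) for d, v in by_direction.items()}  # offset per direction
    temporal = [t["displacement"] for t in sorted(trials, key=lambda t: t["order"])]  # offset vs. order
    total_displacement = statistics.mean(t["displacement"] for t in trials)
    total_angle_dev = statistics.mean(t["angle_dev"] for t in trials)
    return spatial, temporal, total_displacement, total_angle_dev
```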
The background pattern and the first pattern are seen together by the same eye of the user through the vision-separating device. Under the vision-separated binocular condition, if that eye is deficient the user may find it difficult to see the first pattern with it; the background pattern then helps the user see the first pattern more easily, so that acquisition of the visual perception distortion data can continue.
The background pattern may be a pattern without binocular parallax; fig. 2 shows the display effect with a zero-parallax mesh background, a circular first pattern and a cross-shaped second pattern.
The background pattern may also be a pattern with binocular parallax; fig. 3 shows the display effect with a binocular-parallax background, a circular first pattern and a cross-shaped second pattern.
Preferably, the display range of the first patterns lies within the display range of the background pattern, so that the background pattern covers the whole circumferential array of first patterns and further helps the user view the first patterns through the vision-separating device.
When the user's binocular visual perception distortion data are collected, the first pattern serves as the reference and must be distinguishable from the second pattern, so that the brain centre's analysis of the visual signals from the two eyes can be probed. The first pattern may therefore be any shape, such as a polygon or a petal shape, and the second pattern may be any shape, such as a zigzag or a butterfly shape. To distinguish the first pattern from the second pattern, the two may differ in shape while sharing the same colour, differ in colour while sharing the same shape, or differ in both shape and colour.
Since the shapes of the first pattern and the second pattern are not limited, a point on the second pattern can be designated according to its shape and a point on the first pattern according to its shape. The user then drags the second pattern so that its designated point coincides with the designated point of the first pattern; the coordinate acquisition module 30 acquires the coordinates of the designated point of the second pattern at that moment, and the displacement deviation and/or angle deviation between the acquired coordinates and the designated-point coordinates of the first pattern yields the user's visual perception distortion data.
Preferably, the designated point is a point that makes alignment easy when the user drags the second pattern, such as an end point, a corner point or the centre.
Take the first pattern as a square and the second pattern as a cross as an example. Embodiments thereof include, but are not limited to, the following:
(1) Under the vision-separated binocular condition, the user aims at the centre of the square and drags the cross so that the centre of the cross coincides with the centre of the square, inputting a confirmation command when the two centres appear to coincide; the coordinate acquisition module 30 acquires the centre coordinates of the cross at that moment, and the displacement deviation and/or angle deviation between the centre coordinates of the cross and the centre coordinates of the square gives the user's visual perception distortion data.
(2) Under the vision-separated binocular condition, the user aims at a designated corner point of the square and drags the cross so that the centre of the cross coincides with that corner point, inputting a confirmation command when they appear to coincide; the coordinate acquisition module 30 acquires the centre coordinates of the cross at that moment, and the deviation between the centre coordinates of the cross and the corner-point coordinates of the square gives the user's visual perception distortion data.
(3) Under the vision-separated binocular condition, the user aims at the centre of the square and drags the cross so that a designated end point of the cross coincides with the centre of the square, inputting a confirmation command when they appear to coincide; the coordinate acquisition module 30 acquires the coordinates of that end point of the cross at that moment, and the deviation between the end-point coordinates of the cross and the centre coordinates of the square gives the user's visual perception distortion data.
To achieve the vision-separated binocular condition, that is, so that pattern A displayed by the first display module 10 is seen by one eye and pattern B displayed by the second display module 20 is seen by the other eye through the vision-separating device, implementations include but are not limited to the following: (1) the first display module 10 and the second display module 20 display pattern A and pattern B alternately in time and the vision-separating device is a pair of shutter glasses, so that the alternately displayed patterns are seen by different eyes; (2) the two modules display pattern A and pattern B in light of two different colours and the vision-separating device is a pair of glasses whose two lenses have those colours (for example, pattern A and pattern B are displayed in red and blue and the glasses are red-blue glasses), so that the two patterns are seen by different eyes; (3) the two modules display pattern A and pattern B in light of different polarisation directions and the vision-separating device is a pair of polarised glasses, so that the two patterns are seen by different eyes; (4) the two modules display pattern A and pattern B on different screens and the vision-separating device is a pair of VR glasses, so that the two patterns are seen by different eyes.
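As a minimal sketch of option (2), red-blue colour separation: patterns are supplied as boolean masks and drawn into separate colour channels of a single frame. The channel assignment and mask-based drawing are illustrative assumptions, not the patent's specified implementation:

```python
import numpy as np

def compose_anaglyph_frame(pattern_a_mask, pattern_b_mask):
    """Compose one RGB frame with pattern A drawn in red and pattern B in blue.

    With red-blue glasses, the red channel passes through the red filter and is
    blocked by the blue filter (and vice versa), so each eye sees only one pattern.

    pattern_a_mask, pattern_b_mask -- boolean arrays of shape (H, W), True where drawn
    """
    h, w = pattern_a_mask.shape
    frame = np.zeros((h, w, 3), dtype=np.uint8)
    frame[..., 0] = np.where(pattern_a_mask, 255, 0)  # red channel: first display module's content
    frame[..., 2] = np.where(pattern_b_mask, 255, 0)  # blue channel: second display module's content
    return frame

# Example: a circular first pattern (A) and a cross-shaped second pattern (B) on a 200x200 canvas.
yy, xx = np.mgrid[0:200, 0:200]
circle = (xx - 100) ** 2 + (yy - 100) ** 2 < 40 ** 2
cross = (np.abs(xx - 120) < 3) | (np.abs(yy - 90) < 3)
frame = compose_anaglyph_frame(circle, cross)
```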
In one embodiment, the plurality of first patterns are in n circumferential arrays, n being an integer greater than 1.
For the first patterns of the n circumferential arrays, the user drags the second pattern multiple times, inputting a confirmation command each time the designated point of the second pattern appears to coincide with the designated point of a first pattern; the coordinate acquisition module 30 acquires the centre coordinates of the second pattern each time according to the confirmation commands, so that the user's degree of visual perception distortion can be acquired more accurately.
In an embodiment, the first display module 10 is specifically configured to initially display one first pattern, and after the coordinate acquisition module 30 acquires the coordinates of the second pattern once, display the next first pattern, so that the coordinate acquisition module 30 acquires the coordinates of the next second pattern until all the first patterns are displayed.
A specific implementation may proceed as follows (a loop of this kind is sketched below). The first display module 10 displays the 1st first pattern; the user drags the second pattern so that its designated point is aligned with the designated point of this first pattern and inputs a confirmation command when the two appear aligned; the coordinate acquisition module 30 acquires the coordinates of the designated point of the second pattern at that moment. The first display module 10 then displays the 2nd first pattern; the user drags the second pattern again, confirms, and the coordinates are acquired again. This continues until the first display module 10 has displayed all the first patterns. The coordinate acquisition module 30 thus obtains the designated-point coordinates of the second pattern for every trial; the displacement deviation and/or angle deviation between each acquired coordinate and the designated-point coordinate of the corresponding first pattern can then be calculated, and the user's degree and/or direction of perceptual eye position offset evaluated by integrating the deviations calculated over all trials.
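A minimal sketch of this one-target-at-a-time loop, assuming a display/input layer that provides `show_first_pattern`, `wait_for_confirmation` and `current_second_pattern_point` callables (these names are illustrative, not from the patent):

```python
def run_sequential_acquisition(targets, show_first_pattern, wait_for_confirmation,
                               current_second_pattern_point):
    """Show one first pattern at a time and record the confirmed second-pattern point.

    targets -- list of (x, y) designated points of the first patterns, in display order
    """
    acquired = []
    for order, target in enumerate(targets):
        show_first_pattern(target)               # display the next first pattern only
        wait_for_confirmation()                  # blocks until the user inputs "confirm"
        point = current_second_pattern_point()   # designated point of the dragged second pattern
        acquired.append({"order": order, "target": target, "acquired": point})
    return acquired
```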
In another embodiment, the first display module 10 is specifically configured to initially display one circumferential array of the first patterns and, after the coordinate acquisition module 30 has acquired the coordinates of the second pattern for that array, to display the next circumferential array of first patterns so that the coordinate acquisition module 30 can acquire the coordinates of the second pattern again, until all circumferential arrays of the first patterns have been displayed.
A specific implementation may proceed as follows. The first display module 10 displays the first patterns of the 1st circumferential array; for this array the user drags the second pattern several times so that its designated point coincides in turn with the designated point of each first pattern of the array, inputting a confirmation command each time coincidence is judged, and the coordinate acquisition module 30 acquires the corresponding designated-point coordinates of the second pattern. The first display module 10 then displays the first patterns of the 2nd circumferential array and the procedure is repeated, and so on, until the first patterns of all circumferential arrays have been displayed. The coordinate acquisition module 30 thus obtains the designated-point coordinates of the second pattern for every circumferential array; the displacement deviations and/or angle deviations corresponding to the arrays can be calculated, and the user's degree and/or direction of perceptual eye position offset evaluated by integrating the deviations calculated over all arrays.
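The reference designated points of the first patterns for the n circumferential arrays could be laid out as in the sketch below, under the assumption that each array has an equal number of evenly spaced directions around a common centre (the radii and counts are illustrative):

```python
import math

def circumferential_targets(center, radii, points_per_ring):
    """Designated points of the first patterns for several circumferential arrays.

    center          -- (x, y) common centre of the arrays
    radii           -- one radius per circumferential array, e.g. [80, 160, 240]
    points_per_ring -- number of first patterns (directions) in each array
    """
    targets = []
    for ring, radius in enumerate(radii):
        for k in range(points_per_ring):
            theta = 2.0 * math.pi * k / points_per_ring
            targets.append({
                "ring": ring,
                "direction_deg": math.degrees(theta),
                "point": (center[0] + radius * math.cos(theta),
                          center[1] + radius * math.sin(theta)),
            })
    return targets

# Example: 3 arrays (small, middle, large circle) with 8 directions each.
targets = circumferential_targets((0.0, 0.0), [80, 160, 240], 8)
```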
As shown in fig. 4, in one embodiment, the perceptual-distortion data acquisition system further comprises:
a track generation module 40, configured to connect the coordinates acquired by the coordinate acquisition module 30 into a distorted track map, to connect the coordinates of the designated points of the first patterns displayed by the first display module 10 into a reference track map, and to display the distorted track map and the reference track map.
According to the distorted track diagram and the reference track diagram, the visual perception distortion degree and the deviation direction of the user can be conveniently and intuitively seen.
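A minimal plotting sketch of the two track maps, assuming matplotlib is available and that the acquired and reference points are given as lists of (x, y) pairs in the same display order (closing the loop back to the first point is an illustrative choice):

```python
import matplotlib.pyplot as plt

def plot_track_maps(reference_points, acquired_points):
    """Draw the reference track and the distorted track on one set of axes."""
    ref = reference_points + reference_points[:1]  # close the polygon for readability
    acq = acquired_points + acquired_points[:1]
    plt.plot([p[0] for p in ref], [p[1] for p in ref], "o--", label="reference track")
    plt.plot([p[0] for p in acq], [p[1] for p in acq], "s-", label="distorted track")
    plt.gca().set_aspect("equal")
    plt.legend()
    plt.show()
```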
As shown in fig. 5, in one embodiment, the perceptual-distortion data acquisition system further comprises:
a vector generation module 50, configured to calculate deviation vectors from the coordinates acquired by the coordinate acquisition module 30 and the coordinates of the designated points of the first patterns displayed by the first display module 10, to generate a deviation vector diagram from the deviation vectors, and to display the deviation vector diagram.
According to the deviation vector diagram, the visual perception distortion degree and the deviation direction of the user can be conveniently and intuitively seen.
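A sketch of the deviation vector diagram, assuming each deviation vector is simply the acquired point minus the corresponding reference point; matplotlib's quiver is used here only as one convenient way to draw the arrows:

```python
import matplotlib.pyplot as plt

def plot_deviation_vectors(reference_points, acquired_points):
    """Draw one arrow per trial from the reference point towards the acquired point."""
    xs = [r[0] for r in reference_points]
    ys = [r[1] for r in reference_points]
    us = [a[0] - r[0] for a, r in zip(acquired_points, reference_points)]  # x component of deviation
    vs = [a[1] - r[1] for a, r in zip(acquired_points, reference_points)]  # y component of deviation
    plt.quiver(xs, ys, us, vs, angles="xy", scale_units="xy", scale=1.0)
    plt.gca().set_aspect("equal")
    plt.show()
```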
Fig. 6 shows the data acquisition results of a preferred embodiment: the displacement deviations and angle deviations corresponding to 3 circumferential arrays (small circle, middle circle, large circle), together with the total displacement deviation and total angle deviation obtained by integrating the deviation data of the 3 arrays. The lower left also shows the distorted track map and reference track map corresponding to the 3 circumferential arrays; the lower right shows the deviation vector map and reference track map for the 3 circumferential arrays.
As shown in fig. 4 and 5, in an embodiment, the perceptual-distortion data acquisition system further includes:
a display control module 61, configured to control, according to a display control command input by the user, the size and/or flicker frequency and/or spatial frequency and/or jitter frequency and/or contrast of the first pattern displayed by the first display module 10, and/or the size and/or flicker frequency and/or spatial frequency and/or jitter frequency and/or contrast of the second pattern displayed by the second display module 20, and/or the binocular disparity parameter and/or flicker frequency and/or spatial frequency and/or jitter frequency and/or contrast of the background pattern displayed by the first display module 10.
When the user reports that the first pattern and/or the second pattern cannot be seen, the size and/or flicker frequency and/or spatial frequency and/or jitter frequency and/or contrast of the pattern can be adjusted through the display control module 61 until the user reports that it can be seen, so that acquisition of the user's perceptual eye position data can continue under conditions in which the user actually sees the first pattern and/or the second pattern.
During acquisition of the visual perception distortion data, the size and/or flicker frequency and/or spatial frequency and/or jitter frequency and/or contrast of the first pattern and/or the second pattern can be adjusted and the acquisition repeated several times, so that the collected perceptual eye position data more accurately reflect the brain centre's ability to control eyeball position and to analyse binocular visual signals.
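One way to represent the adjustable display parameters is a small settings object, as sketched below; the field names mirror the parameters listed above, while the default values and units are only illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PatternDisplayParams:
    size: float = 1.0              # relative size of the pattern
    flicker_hz: float = 0.0        # flicker frequency, 0 means static
    spatial_freq_cpd: float = 2.0  # spatial frequency of the pattern texture (cycles/degree)
    jitter_hz: float = 0.0         # positional jitter frequency
    contrast: float = 1.0          # contrast in [0, 1]

@dataclass
class BackgroundDisplayParams(PatternDisplayParams):
    binocular_disparity: float = 0.0  # disparity parameter of the background pattern

def apply_control_command(params, **changes):
    """Apply a display control command, e.g. apply_control_command(p, size=1.5, contrast=0.8)."""
    for name, value in changes.items():
        setattr(params, name, value)
    return params

# Example: the user reports the first pattern is hard to see, so enlarge it and maximise contrast.
first_pattern = apply_control_command(PatternDisplayParams(), size=1.5, contrast=1.0)
```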
In one embodiment, the perceptual-distortion data acquisition system further comprises:
a vision-separation control module 62, configured, according to a designated eye input by the user, to control either that the first pattern is seen by the designated eye and the second pattern by the other eye through the vision-separating device, or that the second pattern is seen by the designated eye and the first pattern by the other eye through the vision-separating device.
When the user's two eyes are unbalanced, the user should view the pattern displayed by the first display module 10 with the weaker eye and the pattern displayed by the second display module 20 with the dominant eye; the vision-separation control module 62 is used to direct the first pattern to the user's weaker eye and the second pattern to the user's dominant eye. When the user's two eyes are balanced, after acquisition of the visual perception distortion data has been completed with the first eye viewing the first pattern and the second eye viewing the second pattern, the eye assignment can be swapped through the vision-separation control module 62, so that acquisition can continue with the second eye viewing the first pattern and the first eye viewing the second pattern.
In a specific implementation, the designated eye input by the user may be defined as either the dominant eye or the weaker eye. If the designated eye is defined as the weaker eye and a user's left eye is known to be the weaker eye, then the left eye is input, and the vision-separation control module 62 controls the pattern displayed by the first display module 10 to be seen by the user's left eye and the pattern displayed by the second display module 20 to be seen by the right eye. If the designated eye is defined as the dominant eye and a user's right eye is known to be the dominant eye, then the right eye is input, and the vision-separation control module 62 controls the pattern displayed by the second display module 20 to be seen by the user's right eye and the pattern displayed by the first display module 10 to be seen by the left eye.
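A small sketch of this eye-assignment logic, assuming the designated eye and the meaning of the designation are passed in explicitly; the convention that the first display module goes to the weaker eye follows the description above:

```python
def assign_channels(designated_eye, designation_means_dominant):
    """Map the two display modules to the left/right channels of the vision-separating device.

    designated_eye             -- "left" or "right", as input by the user
    designation_means_dominant -- True if the designated eye is the dominant eye,
                                  False if it is the weaker eye
    """
    other = "right" if designated_eye == "left" else "left"
    if designation_means_dominant:
        # The dominant (designated) eye views the second display module, the weaker eye the first.
        return {"first_display": other, "second_display": designated_eye}
    # The weaker (designated) eye views the first display module, the other eye the second.
    return {"first_display": designated_eye, "second_display": other}

print(assign_channels("left", designation_means_dominant=False))
# -> {'first_display': 'left', 'second_display': 'right'}
```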
In one embodiment, there is also provided a method of using the above-mentioned perceptual distortion data acquisition system, comprising a pattern dragging step;
the pattern dragging step includes:
the user wears a vision-separating device so that one eye sees the pattern displayed by the first display module 10 through the vision-separating device and the other eye sees the pattern displayed by the second display module 20 through the vision-separating device;
after putting on the vision-separating device, the user aims at a designated point of the first pattern and drags the second pattern until, as the user sees it, the designated point of the second pattern coincides with the designated point of the first pattern, and then inputs a confirmation command.
After putting on the vision-separating device, the user sees the first pattern and the background pattern displayed on the first display module 10 with one eye and the second pattern displayed on the second display module 20 with the other eye. Under this vision-separated binocular condition, the user drags the second pattern toward the designated point of one first pattern until the user considers that the designated point of the second pattern coincides with it, and then inputs a confirmation command. On receiving the confirmation command, the coordinate acquisition module 30 acquires the coordinates of the designated point of the second pattern at that moment, and the degree and/or direction of the user's perceptual eye position offset in time and space is obtained from the displacement deviation and/or angle deviation between the acquired coordinates and the designated-point coordinates of the corresponding first pattern.
The designated point of the first pattern may be a corner point, an end point, a center point, etc. of the first pattern that facilitates alignment when the user drags the second pattern. Similarly, the designated point of the second pattern may be a corner point, an end point, a central point, etc. of the second pattern, which is beneficial for the user to align when dragging the second pattern.
In one embodiment, the method of using a perceptual-distortion data acquisition system further comprises a display alteration step of:
the display changing step includes: and inputting a display control command, controlling the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern and/or the second pattern to change, and/or controlling the binocular disparity parameter and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the background pattern to change, and executing the pattern dragging step again.
When the user feeds back that the first pattern and/or the second pattern cannot be seen, the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern and/or the second pattern can be adjusted by the display control module 61, and/or the binocular disparity parameter and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the background pattern can be adjusted until the user feeds back that the first pattern and/or the second pattern can be seen, so that the visual perception distortion data of the user can be collected continuously according to the situation that the user sees the first pattern and/or the second pattern.
When the perception eye position data is collected, the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern and/or the second pattern can be adjusted for multiple times, and the pattern dragging step can be repeated for multiple times, so that the collected perception distortion data can more accurately reflect the control capability of the brain center on the eyeball position and the analysis capability on binocular vision signals.
It should be understood that the pattern dragging step is performed by the person whose perceptual eye position data are being collected (the subject), while the display changing step may be performed either by the examiner or by the subject. Both the examiner and the subject can be regarded as users of the system.
Fig. 7 is a flow chart of a method of using a perceptual distortion data acquisition system in accordance with a preferred embodiment. The use method of the visual perception distortion data acquisition system comprises the following steps:
S11: the user puts on the vision-separating device;
S12: aiming in turn at the centre of each first pattern, the user drags the second pattern until the centre of the second pattern appears to coincide with the centre of that first pattern, and inputs a confirmation command;
S13: the size and/or flicker frequency and/or spatial frequency and/or jitter frequency and/or contrast of the first pattern and/or the second pattern is changed, and step S12 is executed again (a sketch of this loop follows the steps).
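A compact sketch of this S12/S13 loop, assuming the acquisition and parameter-adjustment helpers are supplied by the caller (every name here is illustrative):

```python
def run_session(targets, acquire_trial, parameter_schedule):
    """Run the S12/S13 loop: one pass of drags per display setting, then change the setting.

    targets            -- designated points of the first patterns
    acquire_trial      -- callable(target, params) -> (x, y) confirmed second-pattern point
    parameter_schedule -- list of display-parameter settings, one per pass (the S13 changes)
    """
    results = []
    for params in parameter_schedule:        # S13: a new display setting for each pass
        for target in targets:               # S12: one drag-and-confirm per first pattern
            acquired = acquire_trial(target, params)
            results.append({"params": params, "target": target, "acquired": acquired})
    return results
```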
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the technical solutions of the present invention, and are not intended to limit the specific embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention claims should be included in the protection scope of the present invention claims.
Claims (10)
1. A system for perceptual distortion data acquisition, comprising:
the first display module is used for displaying a background pattern and a plurality of first patterns, and the first patterns are in a circumferential array;
the second display module is used for displaying a second pattern, and the second pattern can move along with the dragging of a user;
the coordinate acquisition module is used for acquiring the coordinates of the appointed point of the second pattern according to a confirmation command input by a user;
all patterns displayed by the first display module can be seen by one eye through the visual separation device, and all patterns displayed by the second display module can be seen by the other eye through the visual separation device.
2. A system as claimed in claim 1, wherein the first display module is specifically configured to initially display one of the first patterns, and after the coordinate obtaining module obtains the coordinates of the second pattern once, display the next first pattern, so that the coordinate obtaining module obtains the coordinates of the second pattern next time until all the first patterns are displayed.
3. A perceptual distortion data acquisition system as defined in claim 1, wherein the plurality of first patterns are in n circumferential arrays, n being an integer greater than 1.
4. A system as claimed in claim 3, wherein the first display module is specifically configured to initially display one circumferential array of the first patterns and, after the coordinate acquisition module has acquired the coordinates of the second pattern for that array, to display the next circumferential array of first patterns so that the coordinate acquisition module acquires the coordinates of the second pattern again, until all circumferential arrays of the first patterns have been displayed.
5. A perceptual distortion data acquisition system as defined in claim 1, further comprising:
and the track generation module is used for connecting lines according to the coordinates acquired by the coordinate acquisition module to generate a distorted track map, connecting lines according to the coordinates of the appointed points of the second pattern displayed by the first display module to generate a reference track map, and displaying the distorted track map and the reference track map.
6. A perceptual distortion data acquisition system as defined in claim 1, further comprising:
and the vector generating module is used for calculating a deviation vector according to the coordinates acquired by the coordinate acquiring module and the coordinates of the appointed point of the second pattern displayed by the first display module, generating a deviation vector diagram according to the deviation vector and displaying the deviation vector diagram.
7. A perceptual distortion data acquisition system as defined in claim 1, further comprising:
the display control module is used for controlling the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern displayed by the first display module according to a display control command input by a user, and/or controlling the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern displayed by the first display module, and/or controlling the binocular disparity parameter and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the background pattern displayed by the third display module.
8. A perceptual distortion data acquisition system as defined in claim 1, further comprising:
and the visual separation control module is used for controlling the first pattern to be seen by the appointed eye through the visual separation device and the second pattern to be seen by the other eye through the visual separation device or controlling the second pattern to be seen by the appointed eye through the visual separation device and the first pattern to be seen by the other eye through the visual separation device according to the appointed eye input by the user.
9. Use of a system for perceptual distortion data acquisition as defined in any one of claims 1 to 8, comprising a pattern dragging step;
the pattern dragging step includes:
the user wears the vision separating device, so that one eye of the user sees the pattern displayed by the first display module through the vision separating device, and the other eye of the user sees the pattern displayed by the second display module through the vision separating device;
after putting on the vision-separating device, the user aims at a designated point of the first pattern and drags the second pattern until, as the user sees it, the designated point of the second pattern coincides with the designated point of the first pattern, and then inputs a confirmation command.
10. A method for using a perceptual distortion data collection system as defined in claim 9, further comprising a display alteration step of:
the display changing step includes: and inputting a display control command, controlling the size and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the first pattern and/or the second pattern to change, and/or controlling the binocular disparity parameter and/or the flicker frequency and/or the spatial frequency and/or the jitter frequency and/or the contrast of the background pattern to change, and executing the pattern dragging step again.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910871636.6A CN110897607A (en) | 2019-09-16 | 2019-09-16 | Visual perception distortion data acquisition system and use method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110897607A (en) | 2020-03-24 |
Family
ID=69814617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910871636.6A Pending CN110897607A (en) | 2019-09-16 | 2019-09-16 | Visual perception distortion data acquisition system and use method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110897607A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113367649A (en) * | 2021-06-29 | 2021-09-10 | 广州市诺以德医疗科技发展有限公司 | Binocular balance data acquisition system and use method thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110051090A1 (en) * | 2009-08-31 | 2011-03-03 | Canon Kabushiki Kaisha | Ophthalmologic photographing apparatus |
CN102813500A (en) * | 2012-08-07 | 2012-12-12 | 北京嘉铖视欣数字医疗技术有限公司 | Perception correcting and training system on basis of binocular integration |
CN105816150A (en) * | 2016-01-28 | 2016-08-03 | 孙汉军 | Detecting and training system for binocular fusion function |
CN109431444A (en) * | 2018-12-12 | 2019-03-08 | 广州视景医疗软件有限公司 | Eye position deviation check method and eye position deviation topographic map check system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200324 |