CN111202663B - Vision training learning system based on VR technique

Info

Publication number
CN111202663B
Authority
CN
China
Prior art keywords
training
data
module
display screen
user
Prior art date
2019-12-31
Legal status
Active
Application number
CN201911411287.6A
Other languages
Chinese (zh)
Other versions
CN111202663A (en)
Inventor
郑雅羽
林斯霞
寇喜超
朱威
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
2019-12-31
Filing date
2019-12-31
Publication date
2022-12-27
Application filed by Zhejiang University of Technology ZJUT
Priority to CN201911411287.6A
Publication of CN111202663A
Application granted
Publication of CN111202663B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 5/00: Exercisers for the eyes
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/02: Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Abstract

The invention relates to a vision training learning system based on VR technology that works with VR glasses and a VR handle and comprises an eyeball tracking module, a training module and a data module. The VR glasses present separate views to the two eyes; after logging into the system, the user selects the training module to train, the eyeball tracking module begins tracking the eye movement trajectory while the training module records training data, and both are sent to the data module in real time, where analysis of the data controls the training difficulty, the modules cooperating to deliver visual training to the user. The invention overcomes the vision damage to the dominant eye, and the adverse psychological effects, caused by covering and suppressing the dominant eye; the patient is not disturbed during training and can train more attentively; logging in by eye removes the need to type a password manually and generates a dedicated database; and the user can undergo visual training while learning, solving the prior art's failure to involve learning in the visual training process.

Description

Vision training learning system based on VR technique
Technical Field
The invention relates to the technical field of electric digital data processing, and in particular to a vision training learning system based on VR technology that can help amblyopia patients overcome learning difficulties.
Background
Amblyopia refers to the condition in which the eye shows no obvious organic lesion, yet corrected distance vision is 0.8 or below and cannot be improved further; its causes are mainly functional. Roughly 2 to 3 of every 100 children are amblyopic, and amblyopia seriously harms the patient's daily life, study, work and mental state, with particularly profound effects on children.
The existing technology aims to improve the vision of the patient's amblyopic eye and to restore binocular vision function, which includes eye movement control, simultaneous vision, fusion and stereopsis. If binocular vision function is impaired, the patient cannot control eye movements such as following, saccades and fixation, visual memory suffers, hand-eye coordination is difficult, and the patient lacks the three grades of visual function of a normal person. Amblyopia treatment is a long process; the patients are mainly children at the knowledge-intake stage, so amblyopia makes learning laborious, and the resulting poor academic performance in turn causes psychological harm.
The prior art provides some solutions to this, but still does not incorporate patient learning aids into amblyopia treatment.
The patent publication CN108478401A, "amblyopia training and rehabilitation system and method based on VR technology", plays images through VR glasses, suppresses the good eye by covering or fogging the left or right screen, and then trains the eyeball of the amblyopic eye by playing films with preset image motion patterns; although it uses VR technology to separate the two eyes for amblyopia training, the training method is limited and does not involve learning.
The patent with publication number CN1579319 (CN100348147C), "diagnosis and treatment instrument for infantile amblyopia and strabismus", uses multimedia technology to integrate diagnosis, treatment and medical records; the treatment functions include CAM, red-light flicker, fine vision training, saccade movement, following movement, fusion and stereopsis training. This patent considers both improving the monocular vision of the amblyopic eye and restoring the patient's binocular vision function, but it does not involve VR technology and includes no learning content in the training process.
Existing visual training basically separates the two eyes with 3D glasses or red-blue glasses; VR has not been genuinely applied to amblyopia treatment.
Disclosure of Invention
The invention solves the problems in the prior art and provides an optimized vision training learning system based on VR technology.
The technical scheme adopted by the invention is a vision training learning system based on VR technology that works with VR glasses and a VR handle;
the system comprises:
the eyeball tracking module is used for logging in a user and continuously tracking the movement track of the eyeballs;
the training module is used for outputting visual training to the user and acquiring training data fed back by the user;
and the data module is used for storing data required by the system, acquiring information transmitted by the eyeball tracking module, analyzing and acquiring training data of the training module and setting output content of the training module.
Preferably, the eye tracking module comprises:
the eyeball identification unit is used for capturing eyeballs, acquiring identification data and submitting the identification data to the data module for matching when a user wears VR glasses and watches a display screen;
and the eyeball tracking unit is used for tracking the eyeball motion track after the user passes the identification and sending the eyeball motion track to the data module.
Preferably, the training module comprises:
the monocular vision enhancement unit is used for presenting preset data on the left display screen or the right display screen and performing visual training on a single eye, comprising training content output and feedback information acquisition;
and the binocular vision training unit is used for presenting preset data on the left display screen and the right display screen and performing vision training on two eyes.
Preferably, the monocular vision enhancing unit includes a visual stimulation module, a saccade module, and a follow-up module.
Preferably, the data module comprises:
the data storage unit is used for downloading and storing, from the cloud server, the electronic textbooks and extracurricular books corresponding to the user;
the data setting unit is used for displaying the data stored by the data storage unit on the display screens of the VR glasses in a preset mode; the contents displayed on the left and right display screens of the VR glasses differ;
and the data analysis unit is used for analyzing the data of the eyeball tracking module and the training module in real time so as to adaptively control the training difficulty and generate a region of interest map.
Preferably, the data storage unit is further connected with a collection unit with which the user operates the VR glasses to enter live-action mode, captures pictures through the camera and handle of the VR glasses, and stores them in the data storage unit.
Preferably, the acquisition unit preprocesses each captured picture, the preprocessing comprising the following steps:
step A.1: taking a background image for the current picture, differencing it with the current picture, and keeping the foreground image;
step A.2: performing image correction on the foreground image, the correction comprising tilt correction, shadow removal and exposure adjustment.
Preferably, if monocular vision enhancement training is performed, the data setting unit makes the display screen on the side of the user's amblyopic eye show the training content while the other display screen shows a background, so that all training operations are completed by the amblyopic eye; if the user selects training material from the data storage unit, the scaling of the electronic textbooks and extracurricular books shown on the amblyopic eye's display screen is set according to the user's vision, while the other display screen continues to show the background;
if binocular vision enhancement training is performed, the setting by the data setting unit comprises the following steps:
step B.1: identifying the graphic parts in all document images in the data storage unit using a trained two-class neural network model that distinguishes text from graphics in a picture, recording the top-left and bottom-right coordinate points of each graphic with the top-left corner of the picture as the origin, and storing the coordinate values at the output address of the data setting unit for output to the left and right display screens;
step B.2: copying all document images in the data storage unit and, in the copies, covering the graphic parts located in step B.1 with white pixel blocks and saving the result;
step B.3: horizontally segmenting the images whose graphic parts are covered by white pixel blocks, splitting each image into lines using the blank gaps that the spaces between text lines produce in the horizontal projection, recording the top-left and bottom-right coordinates of each line with the top-left corner of the image as the origin, and storing the coordinate values of all odd lines and of all even lines separately at the output address of the data setting unit for output to the left and right display screens respectively.
Preferably, when binocular vision enhancement training is performed, the training module obtains the configuration of the data setting unit, and the training comprises the following steps:
step C.1: when the user opens the learning material, reading the coordinate values of step B.1 and step B.3 from the data setting unit through their address values;
step C.2: displaying the copied document image content corresponding to the odd-line coordinate values of step B.3 on one display screen of the VR glasses at a preset scale, and the copied document image content corresponding to the even-line coordinate values on the other display screen at the preset scale;
step C.3: placing the original document image content corresponding to the graphic coordinate values identified in step B.1 beside the copied line content on both the left and the right display screen, offset by a distance greater than 0.
Preferably, the analysis performed by the data analysis unit comprises analysis of the data fed back by the training module, analysis of the data fed back by the eyeball tracking module, and generation of a region-of-interest map;
the analysis of the data fed back by the training module comprises the following steps:
step D.1.1: recording the time of each click during training, and increasing the rotation speed used by the training module if click accuracy has risen and reaction time has shortened compared with the previous session; repeating step D.1.1 until the training module reaches the maximum rotation speed, then proceeding to the next step;
step D.1.2: if the user can still operate normally at the maximum rotation speed, increasing the spatial frequency of the CAM grating while keeping the maximum speed;
step D.1.3: if the user cannot operate normally after the spatial frequency is increased, keeping the current CAM grating spatial frequency and reducing the rotation speed until a speed suited to the user is found, then returning to step D.1.1;
in the analysis of the data fed back by the eyeball tracking module, the eyeball tracking module sends the eye movement trajectory to the data analysis unit in real time, the data analysis unit compares the received trajectory data with a preset trajectory, and if the deviation between the user's eye movement trajectory and the preset trajectory exceeds a threshold, a light spot is flashed on the display screen of the VR glasses to remind the user;
the generation of the region-of-interest map comprises the following steps:
step D.2.1: the training module sends the eye movement trajectory recorded by the eyeball tracking module to the data analysis unit;
step D.2.2: the data analysis unit calculates, from the eye movement trajectory data, the area of the display screen where the trajectory repeats most;
step D.2.3: taking that area as the user's region of interest.
The invention provides an optimized vision training learning system based on VR technology. Working with VR glasses and a VR handle, the system integrates an eyeball tracking module for logging in the user and continuously tracking the eye movement trajectory, a training module for delivering visual training to the user and acquiring the training data the user feeds back, and a data module for storing the data the system needs, receiving and analyzing the information sent by the eyeball tracking module, acquiring the training module's training data and setting the training module's output content. The VR glasses present separate views to the two eyes; the user logs into the system through the eyeball tracking module and selects the training module to train, the eyeball tracking module begins tracking the eye movement trajectory while the training module records training data, both are sent to the data module in real time, and the data module controls the training difficulty by analyzing the data, the modules cooperating to deliver visual training to the user.
The invention separates the two eyes' views by VR technology, overcoming the vision damage to the dominant eye caused by the traditional method of covering and suppressing it, and avoiding the adverse psychological effects of wearing an eye patch; thanks to the immersion of VR and the noise reduction of the earphones it carries, the patient sees only the content of the VR display screens during training, is not disturbed by other people, and can train more attentively; logging into the system through the eyeball tracking module removes the burden of typing a password manually and generates a database belonging to the user; and through the cooperation of the eyeball tracking module, the data module and the training module, the user undergoes visual training while learning, solving the prior art's failure to involve learning in the visual training process.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a schematic diagram of an example configuration of the system of the present invention;
in fig. 1 and 2, arrows indicate the direction of data transmission.
Detailed Description
The present invention is described in further detail with reference to the following examples, but the scope of the present invention is not limited thereto.
The invention relates to a vision training learning system based on VR technology, which is matched with VR glasses and a VR handle;
the eyeball tracking module is used for logging in a user and continuously tracking the movement track of the eyeballs;
the training module is used for outputting visual training to the user and acquiring training data fed back by the user;
and the data module is used for storing data required by the system, acquiring information transmitted by the eyeball tracking module, analyzing and acquiring training data of the training module and setting the output content of the training module.
In the invention, the VR all-in-one headset is the most convenient VR device for the user: it has a powerful built-in chip and an ultra-high-definition screen, is free of the constraints of external equipment and can be used on its own, and adopts 5G for bidirectional network transmission, reducing latency and increasing communication capacity.
According to the invention, the VR all-in-one headset comprises the VR glasses and the VR handle. Inside the VR glasses are a left display screen, a right display screen, an infrared LED lamp, the eyeball tracking module, the data module and the training module; outside the VR glasses are a front camera and earphones. The eyeball tracking module is connected with the data module and the training module, and the front camera is connected with the data module. The earphones are placed outside the VR glasses to reduce the interference of external noise on training, and the infrared LED lamp beside the eyeball tracking module overcomes the limitation that eyeball identification technology is somewhat constrained in dark environments.
According to the invention, the VR handle of the all-in-one headset acts as a simulated laser pointer that can be aimed in any 3D direction and is controlled through its keys: a confirmation key and a switch key are arranged on the handle, and a volume key on the side of the handle adjusts the volume; the switch key switches the handle between a writable and a non-writable laser pointer. When the handle is switched to the writable laser pointer, it can be used to mark up and work through problems on the image.
In the invention, after the VR all-in-one headset starts, two display modes are available, a live-action mode and a virtual mode, with the virtual mode entered by default; entering the virtual mode opens the training module, while the user can select the corresponding control with the handle to enter the live-action mode, take a picture with the handle's confirmation key, and store it in the data module.
The system comprises the eyeball tracking module for logging in the user and continuously tracking the eye movement trajectory, the training module for outputting visual training to the user and acquiring the training data fed back by the user, and the data module for storing the data required by the system, acquiring and analyzing the information transmitted by the eyeball tracking module, acquiring the training data of the training module and setting the output content of the training module. The VR glasses present separate views to the two eyes, the user logs into the system through the eyeball tracking module and selects the training module to train, the eyeball tracking module starts tracking the eye movement trajectory while the training module records training data, both are sent to the data module in real time, and the data module controls the training difficulty of the training module by analyzing the data, the modules cooperating to deliver visual training to the user.
The eye tracking module comprises:
the eyeball identification unit is used for capturing eyeballs, acquiring identification data and submitting the identification data to the data module for matching when a user wears VR glasses and watches a display screen;
and the eyeball tracking unit is used for tracking the eyeball motion track after the user passes the identification and sending the eyeball motion track to the data module.
In the invention, for each training session the user only needs to put on the VR glasses and look at the display screen in front; the eyeball identification unit captures the eyes and logs the user in by matching against the existing information in the data module, after which the eyeball tracking unit tracks the eyes.
In the invention, during initialization the user's personal information must be entered at a client such as a computer or mobile phone, including name, region, education level and left- and right-eye vision, and an image of the user's eyes is collected; the vision entries of the personal information must be updated after each eye examination.
The training module comprises:
the monocular vision enhancement unit is used for presenting preset data on the left display screen or the right display screen and performing visual training on a single eye, comprising training content output and feedback information acquisition;
the monocular vision enhancing unit includes a visual stimulation module, a saccade module, and a follow-up module.
And the binocular vision training unit is used for presenting preset data on the left display screen and the right display screen and performing vision training on two eyes.
The data module includes:
the data storage unit is used for downloading and storing, from the cloud server, the electronic textbooks and extracurricular books corresponding to the user;
the data storage unit is further connected with a collection unit and used for enabling a user to operate the VR glasses to enter a live-action mode, obtain pictures through a camera and a handle of the VR glasses and store the pictures to the data storage module.
The acquisition unit preprocesses each captured picture; the preprocessing comprises the following steps, sketched in code after this list:
step A.1: taking a background image for the current picture, differencing it with the current picture, and keeping the foreground image;
step A.2: performing image correction on the foreground image, the correction comprising tilt correction, shadow removal and exposure adjustment.
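As a concrete illustration of steps A.1 and A.2, the following minimal Python sketch uses OpenCV. The patent fixes only the steps, not the algorithms, so the differencing threshold, the rotation-based tilt correction and the normalization-based exposure adjustment are assumptions, and shadow removal is omitted for brevity.

```python
# Minimal sketch of the step A.1/A.2 preprocessing (assumed algorithms).
import cv2
import numpy as np

def preprocess_capture(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Step A.1: difference against the background; step A.2: correct the foreground."""
    # Step A.1: difference the current picture with its background image and
    # keep only the foreground.
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
    foreground = cv2.bitwise_and(frame, frame, mask=mask)

    # Step A.2 (assumed): tilt correction from the minimum-area rectangle of
    # the foreground pixels, then a simple exposure stretch.
    coords = cv2.findNonZero(mask)
    if coords is not None:
        angle = cv2.minAreaRect(coords)[-1]
        if angle > 45:   # assuming the OpenCV >= 4.5 angle convention (0, 90]
            angle -= 90
        h, w = foreground.shape[:2]
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        foreground = cv2.warpAffine(foreground, rot, (w, h))
    # Exposure adjustment: stretch intensities to the full 0..255 range.
    return cv2.normalize(foreground, None, 0, 255, cv2.NORM_MINMAX)
```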
The data setting unit is used for displaying the data stored by the data storage unit on the display screens of the VR glasses in a preset mode; the contents displayed on the left and right display screens of the VR glasses differ;
and the data analysis unit is used for analyzing the data of the eyeball tracking module and the training module in real time so as to adaptively control the training difficulty and generate a region of interest map.
In the invention, a server-side data store holds the electronic textbooks and extracurricular books of all subjects, all levels and all regions, generally stored as pictures; because this corpus is large, the full data is kept in the server's database, while the data storage unit of the VR glasses stores only the small subset of electronic textbooks and extracurricular books matching the user's region and education level, and when content needs updating the VR device downloads it from the cloud.
In the invention, the acquisition unit is mainly used with the exercise module of the data storage unit: the user selects the corresponding control to put the VR glasses into live-action mode, the front camera of the VR glasses shows its real-time video on the display screens, the user places the day's homework, or anything else to be displayed in the VR glasses, in front of the camera, adjusts the camera to a suitable angle and presses the confirmation key of the handle, and the current video frame is captured and stored in the exercise module of the data storage unit.
If monocular vision enhancement training is performed, the data setting unit makes the display screen on the side of the user's amblyopic eye show the training content while the other display screen shows a background, so that all training operations are completed by the amblyopic eye; if the user selects training material from the data storage unit, the scaling of the electronic textbooks and extracurricular books shown on the amblyopic eye's display screen is set according to the user's vision, while the other display screen continues to show the background;
if binocular vision enhancement training is performed, the setting by the data setting unit comprises the following steps:
step B.1: identifying the graphic parts in all document images in the data storage unit using a trained two-class neural network model that distinguishes text from graphics in a picture, recording the top-left and bottom-right coordinate points of each graphic with the top-left corner of the picture as the origin, and storing the coordinate values at the output address of the data setting unit for output to the left and right display screens;
step B.2: copying all document images in the data storage unit and, in the copies, covering the graphic parts located in step B.1 with white pixel blocks and saving the result;
step B.3: horizontally segmenting the images whose graphic parts are covered by white pixel blocks, splitting each image into lines using the blank gaps that the spaces between text lines produce in the horizontal projection, recording the top-left and bottom-right coordinates of each line with the top-left corner of the image as the origin, and storing the coordinate values of all odd lines and of all even lines separately at the output address of the data setting unit for output to the left and right display screens respectively; the projection-based splitting is sketched in code below.
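The line splitting of step B.3 can be illustrated with the following Python sketch, which assumes a grayscale page whose figures were already masked white in step B.2; the ink threshold is an assumption.

```python
# Sketch of step B.3: rows whose horizontal projection contains no ink
# separate the text lines; odd and even lines are then routed to the two
# display screens.
import numpy as np

def split_lines(doc: np.ndarray, ink_threshold: int = 200):
    """doc: grayscale page with figures masked white (step B.2).
    Returns (odd_boxes, even_boxes), each a list of (x0, y0, x1, y1)
    with the page's top-left corner as the origin."""
    ink = doc < ink_threshold                 # True where a pixel is "ink"
    profile = ink.sum(axis=1)                 # horizontal projection per row
    boxes, top = [], None
    for y, count in enumerate(profile):
        if count > 0 and top is None:         # a text line starts
            top = y
        elif count == 0 and top is not None:  # a blank gap ends the line
            cols = np.where(ink[top:y].any(axis=0))[0]
            boxes.append((int(cols[0]), top, int(cols[-1]), y - 1))
            top = None
    if top is not None:                       # line touching the bottom edge
        cols = np.where(ink[top:].any(axis=0))[0]
        boxes.append((int(cols[0]), top, int(cols[-1]), doc.shape[0] - 1))
    return boxes[0::2], boxes[1::2]           # odd lines, even lines
```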
When binocular vision enhancement training is performed, the training module obtains the configuration of the data setting unit, and the training comprises the following steps (a compositing sketch follows the list):
step C.1: when the user opens the learning material, reading the coordinate values of step B.1 and step B.3 from the data setting unit through their address values;
step C.2: displaying the copied document image content corresponding to the odd-line coordinate values of step B.3 on one display screen of the VR glasses at a preset scale, and the copied document image content corresponding to the even-line coordinate values on the other display screen at the preset scale;
step C.3: placing the original document image content corresponding to the graphic coordinate values identified in step B.1 beside the copied line content on both the left and the right display screen, offset by a distance greater than 0.
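Steps C.1 to C.3 amount to compositing two screen buffers from the recorded boxes. The sketch below shows one possible arrangement; the scale factor, the offset value, the white background and the BGR color images are assumptions rather than values fixed by the patent.

```python
# Sketch of steps C.1-C.3: odd lines to one eye's buffer, even lines to the
# other, and each identified figure drawn offset (offset > 0) on both.
import numpy as np
import cv2

def compose_screens(original, copy, odd, even, figures, scale=1.0, offset=12):
    """original/copy: BGR page images from steps B.1/B.2; odd/even/figures:
    (x0, y0, x1, y1) boxes. Returns (left_screen, right_screen)."""
    h, w = copy.shape[:2]
    left = np.full((h, w, 3), 255, np.uint8)    # white backgrounds
    right = np.full((h, w, 3), 255, np.uint8)

    def paste(dst, src, box, dx=0):
        # Assumes scale <= 1 so the scaled patch fits inside the buffer.
        x0, y0, x1, y1 = box
        patch = cv2.resize(src[y0:y1 + 1, x0:x1 + 1], None, fx=scale, fy=scale)
        ph, pw = patch.shape[:2]
        x = max(0, min(x0 + dx, w - pw))        # clamp inside the buffer
        y = max(0, min(y0, h - ph))
        dst[y:y + ph, x:x + pw] = patch

    for box in odd:                    # step C.2: odd lines on one screen
        paste(left, copy, box)
    for box in even:                   # even lines on the other screen
        paste(right, copy, box)
    for box in figures:                # step C.3: figures on both screens,
        paste(left, original, box, dx=offset)    # shifted in opposite
        paste(right, original, box, dx=-offset)  # directions by offset > 0
    return left, right
```

Shifting the figure in opposite directions on the two screens is one reading of the offset in step C.3; the patent requires only that the offset be greater than 0.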
In the invention, when the monocular vision enhancement module is selected, training starts and training data is recorded, the eyeball tracking module tracks the eye movement trajectory, the training data and trajectory data are sent to the data analysis module in real time, and the analysis results control the training difficulty of the monocular vision enhancement module; when the binocular vision training module is selected, the user enters the data storage module to choose learning material, which is displayed on the left and right display screens as configured in the data setting module, so that the complete learning material can be seen only when the left and right eyes work together; meanwhile the eyeball tracking module sends the eye movement trajectory to the data analysis module. Through the cooperation of the modules, the amblyopic eye is first trained by the monocular vision enhancement module of the training module; after the amblyopic eye's vision has improved to a certain degree, the user enters the binocular vision training module and trains binocular vision while learning.
In the invention, for monocular vision enhancement training the display screen of the amblyopic eye shows the detailed training content while the display screen of the dominant eye shows only the background, and all training operations are completed by the amblyopic eye; if the user enters the data storage module to choose learning material, the scaling of the document image shown on the amblyopic eye's display screen is set according to the user's vision, and the dominant eye's display screen still shows the background. Monocular vision enhancement training follows the MFBF (monocular fixation in a binocular field) principle: the non-amblyopic eye sees the peripheral view rather than the details, while the amblyopic eye sees the central, detailed view during training. In MFBF training the dominant eye is only suppressed centrally and temporarily while its peripheral region keeps working, so the amblyopic eye's vision improves while simultaneous-vision ability is strengthened.
In the present invention, the visual stimulation module, the saccade module and the following module of the monocular vision enhancing unit are configured as in the following examples.
In the invention, the visual stimulation module is arranged as follows:
if the user is training for the first time, the CAM rotation speed and grating spatial frequency are set to the lowest level; otherwise the session starts from the rotation speed and grating spatial frequency recorded at the end of the previous training;
before each training session the user is given one minute to adapt to the rotating grating image; after that minute, N red and/or blue and/or green points appear at random on the rotating grating, and the user points at a point with the VR handle and clicks it by pressing the handle's confirmation key; clicking all the points once counts as one round, and six rounds form one cycle;
the user is required to click the point that is flickering; once clicked, it changes to another of red, blue or green, and after all the points have been clicked the display screen refreshes and generates N new random points;
the red, blue and green points thus appear six times in one cycle, their colors changing each time without repeating;
the time of each click is recorded and transmitted to the data module for data analysis.
In the invention, CAM refers to the grating therapy principle, also called visual biostimulation: black-and-white grating plates of different frequencies serve as the stimulation source, and as the plates rotate, the amblyopic eye is stimulated in every orientation by gratings of different spatial frequencies and contrasts, so that most of the cortical cell receptors serving the amblyopic eye are stimulated, improving vision. The grating uses high-contrast black and white to generate an image of alternating black and white stripes, available in N different spatial frequencies (N > 1); following medical principle, the grating rotates at 1 to 6 revolutions per minute, the screen refreshes at 3 to 18 frames per second, and the rotation is less than 2 degrees per frame.
In the invention, the minimum CAM rotation setting corresponds to 3 frames per second and the maximum to 18 frames per second, with the speed varied uniformly over M levels; the faster the rotation, the better the treatment effect.
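The stated parameters can be related in a small sketch. Mapping both the 1 to 6 rpm rotation and the 3 to 18 fps refresh linearly across the M levels is an assumption (M = 6 is purely illustrative); under that assumption the per-frame rotation stays at the stated bound of about 2 degrees.

```python
# Sketch relating the CAM parameters: 1-6 rpm rendered at 3-18 fps, varied
# uniformly over M levels (M = 6 here is an assumption).
def cam_level(level: int, m: int = 6):
    """Map a difficulty level in [1, m] to (rpm, fps, degrees per frame)."""
    assert 1 <= level <= m
    rpm = 1 + (6 - 1) * (level - 1) / (m - 1)   # uniform 1..6 rpm
    fps = 3 + (18 - 3) * (level - 1) / (m - 1)  # uniform 3..18 fps
    deg_per_frame = rpm * 360 / 60 / fps        # rotation advanced per frame
    return rpm, fps, deg_per_frame

for lvl in range(1, 7):
    print(lvl, cam_level(lvl))                  # deg_per_frame stays at 2.0
```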
In the invention, the saccade module is set as follows:
N random red and/or blue and/or green points appear on the rotating grating image, each point appearing on the display screen at least once; the user is reminded which color of point to click on the display screen; if a point of the reminded color is clicked, the point disappears, and if no point is clicked or the clicked point is not of the required color, the point flickers and an error sound effect reminds the user.
In the invention, the following module is set as follows:
a red, blue or green point appears on a blank background picture; the point's color appears at random, cycling among the three colors, and the point moves along a trajectory set by the system, which the user's eyes are required to track.
In the present invention, eye movements are divided into three types: following, saccades and fixation. Following means the eye smoothly tracks a moving object; a saccade is the jump of fixation from one place to another; fixation is the eye holding its focus on one point. In the flicker stimulation above, a flickering red, blue or green point must be fixated before it can be clicked, which exercises the fixation ability of eye movement; in the saccade training above, the user must scan the points on the grating image to know which have been clicked and which have not, and since the points appear at random all over the image, the user must look up, down, left and right, which also trains peripheral perception; in the following training, the user must track the moving point, which exercises the following ability of the eyeball, and when the user fails to follow the moving dot, the point flickers to stimulate the cone cells of the eye.
In the invention, whether the target watched is far or near, the lines of sight always intersect on the target and the image is formed at the fovea of the macula. The cone cells of the macula mainly capture color and comprise three types, blue, green and red cones, which are most sensitive to blue, green and red light respectively. Flicker raises visual excitation and conduction efficiency: through clicks on the flickering red, blue and green points, the flashing red, blue and green light stimulates the amblyopic eye, the macular cone cells produce thermal and biochemical effects that improve local blood circulation and metabolism and strengthen the vitality of the cone cells, and vision improves.
In the invention, the times at which the user clicks the red, blue and green points are recorded separately so that the data can show whether the user is insensitive to any color, that is, whether color vision is abnormal; click times are recorded over many trials to reduce the contingency of the data. If the recorded data show a large difference among the times for clicking the three colors, the user is informed in time and advised to go to a hospital for examination.
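One way to flag such a difference is to compare per-color mean reaction times, as in the sketch below; the half-second threshold and the data layout are assumptions.

```python
# Sketch of the color-sensitivity check: compare mean click times per color
# over many trials and flag colors that lag far behind the fastest one.
from statistics import mean

def slow_colors(click_times: dict[str, list[float]],
                threshold: float = 0.5) -> list[str]:
    """click_times maps 'red'/'blue'/'green' to reaction times in seconds.
    Returns the colors whose mean reaction time exceeds the fastest color's
    mean by more than threshold, suggesting an examination is advisable."""
    means = {color: mean(times) for color, times in click_times.items() if times}
    fastest = min(means.values())
    return [color for color, m in means.items() if m - fastest > threshold]

# Example: slow_colors({'red': [0.8, 0.9], 'blue': [0.7, 0.8],
#                       'green': [1.6, 1.7]}) returns ['green'].
```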
In the invention, the scaling of the left display screen and that of the right display screen in step C.2 are generally the same.
In the invention, the offset distance in step C.3 should not be too large and needs to be adjusted according to the scaling.
In the invention, the user can see the whole picture only by viewing with both eyes at the same time, which exercises the user's binocular coordination, and the offset of the graphics exercises the user's fusion ability: fusion not only merges the two retinal images into one, but also, by reflex, keeps them merged into a single perceptual impression when they deviate from the normal position, and the displacement of the retinal images that can still trigger this fusion reflex is called the fusion range.
The analysis method of the data analysis unit comprises analysis of the data fed back by the training module, analysis of the data fed back by the eyeball tracking module, and generation of a region-of-interest map;
the analysis of the data fed back by the training module comprises the following steps:
step D.1.1: recording the time of each click during training, and increasing the rotation speed used by the training module if click accuracy has risen and reaction time has shortened compared with the previous session; repeating step D.1.1 until the training module reaches the maximum rotation speed, then proceeding to the next step;
step D.1.2: if the user can still operate normally at the maximum rotation speed, increasing the spatial frequency of the CAM grating while keeping the maximum speed;
step D.1.3: if the user cannot operate normally after the spatial frequency is increased, keeping the current CAM grating spatial frequency and reducing the rotation speed until a speed suited to the user is found, then returning to step D.1.1;
in the analysis of the data fed back by the eyeball tracking module, the eyeball tracking module sends the eye movement trajectory to the data analysis unit in real time, the data analysis unit compares the received trajectory data with a preset trajectory, and if the deviation between the user's eye movement trajectory and the preset trajectory exceeds a threshold, a light spot is flashed on the display screen of the VR glasses to remind the user.
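The deviation check reduces to a distance comparison between the recorded and preset trajectories sampled at the same instants; in the sketch below, the 50-pixel threshold and the equal-length sampling are assumptions.

```python
# Sketch of the trajectory comparison behind the flashing reminder spot.
import math

def deviates(gaze, preset, threshold=50.0) -> bool:
    """gaze, preset: equal-length lists of (x, y) screen points sampled at
    the same instants. True if any sample drifts past the threshold, in
    which case the system would flash the reminder light spot."""
    return any(math.dist(g, p) > threshold for g, p in zip(gaze, preset))
```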
The generation of the region-of-interest map comprises the following steps (a binning sketch follows the list):
step D.2.1: the training module sends the eye movement trajectory recorded by the eyeball tracking module to the data analysis unit;
step D.2.2: the data analysis unit calculates, from the eye movement trajectory data, the area of the display screen where the trajectory repeats most;
step D.2.3: taking that area as the user's region of interest.
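A simple way to realize steps D.2.1 to D.2.3 is to histogram the gaze samples on a coarse grid and take the densest cell as the region of interest; the 16 by 16 grid below is an assumption.

```python
# Sketch of region-of-interest extraction: bin the gaze points on a grid
# and return the screen cell visited most often.
import numpy as np

def region_of_interest(points, screen_w, screen_h, grid=16):
    """points: iterable of (x, y) gaze samples in screen pixels. Returns the
    (x0, y0, x1, y1) cell of a grid x grid partition with the most samples."""
    heat = np.zeros((grid, grid), dtype=int)
    for x, y in points:
        gx = min(int(x * grid / screen_w), grid - 1)
        gy = min(int(y * grid / screen_h), grid - 1)
        heat[gy, gx] += 1
    gy, gx = np.unravel_index(heat.argmax(), heat.shape)
    return (gx * screen_w // grid, gy * screen_h // grid,
            (gx + 1) * screen_w // grid, (gy + 1) * screen_h // grid)
```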
In the invention, for the control of the CAM grating's spatial frequency and rotation speed, the time of each click during training is recorded; if the clicks grow more accurate and the times grow shorter, the rotation speed is raised by one level; if the patient can still adapt at the maximum rotation speed, the spatial frequency is raised by one level at that speed; and if the patient cannot adapt to the increased spatial frequency at the maximum speed, the current spatial frequency is kept and the rotation speed is reduced until a value suited to the patient is found.
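This control loop can be summarized as a small state machine. In the sketch below, the level counts and the boolean performance signals are assumptions, since the patent leaves the gear granularity and the notion of operating normally unspecified.

```python
# Sketch of the step D.1.1-D.1.3 adaptation: raise speed while performance
# improves, raise spatial frequency once top speed is handled, and back the
# speed off when the user struggles.
class CamController:
    def __init__(self, max_speed: int = 6, max_freq: int = 4):
        self.speed, self.freq = 1, 1
        self.max_speed, self.max_freq = max_speed, max_freq

    def update(self, accuracy_up: bool, reaction_time_down: bool,
               operating_normally: bool) -> None:
        if not operating_normally:
            # Step D.1.3: keep the frequency, step the speed down.
            self.speed = max(1, self.speed - 1)
        elif self.speed < self.max_speed:
            # Step D.1.1: better accuracy and faster reactions -> speed up.
            if accuracy_up and reaction_time_down:
                self.speed += 1
        elif self.freq < self.max_freq:
            # Step D.1.2: at top speed, raise the grating spatial frequency.
            self.freq += 1
```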
In the invention, if the patient in training selects the exercise module to work through problems, the repetition rate of the eye movement trajectory can serve as a reference for the difficulty of the questions.
According to the invention, the system working with the VR glasses and the VR handle integrates the eyeball tracking module for logging in the user and continuously tracking the eye movement trajectory, the training module for outputting visual training to the user and obtaining the training data fed back by the user, and the data module for storing the data required by the system, obtaining and analyzing the information transmitted by the eyeball tracking module, obtaining the training data of the training module and setting the output content of the training module. The VR glasses present separate views to the two eyes, the user logs into the system through the eyeball tracking module and selects the training module to train, the eyeball tracking module begins tracking the eye movement trajectory while the training module records training data, both are sent to the data module in real time, and the data module controls the training difficulty of the training module by analyzing the data, the modules cooperating to deliver visual training to the user.
The invention separates the two eyes' views by VR technology, overcoming the vision damage to the dominant eye caused by the traditional method of covering and suppressing it, and avoiding the adverse psychological effects of wearing an eye patch; thanks to the immersion of VR technology and the noise reduction of the earphones it carries, the patient sees only the content of the VR glasses' display screens during training, is not disturbed by other people, and can train more attentively; logging into the system through the eyeball tracking module removes the burden of typing a password manually and generates a database belonging to the user; and through the cooperation of the eyeball tracking module, the data module and the training module, the user undergoes visual training while learning, solving the prior art's failure to involve learning in the visual training process.

Claims (7)

1. A vision training learning system based on VR technology, characterized in that: the system works with VR glasses and a VR handle;
the system comprises:
the eyeball tracking module is used for logging in a user and continuously tracking the movement track of eyeballs;
the training module is used for outputting visual training to the user and acquiring training data fed back by the user; the training module comprises:
the monocular vision enhancement unit is used for presenting preset data on the left display screen or the right display screen and performing visual training on a single eye, comprising training content output and feedback information acquisition; the monocular vision enhancement unit comprises a visual stimulation module, a saccade module and a following module;
the binocular vision training unit is used for presenting preset data on the left display screen and the right display screen and performing vision training on two eyes;
the data module is used for storing data required by the system, acquiring information transmitted by the eyeball tracking module, analyzing and acquiring training data of the training module and setting output content of the training module; the data module comprises a data storage unit, a data setting unit and a data analysis unit;
when the monocular vision enhancement unit is selected, training starts and training data is recorded, the eyeball tracking module tracks the eye movement trajectory, the training data and the eye movement trajectory data are sent to the data analysis unit in real time, and the training difficulty of the monocular vision enhancement unit is controlled according to the data analysis results; when the binocular vision training unit is selected, the user enters the data storage unit to choose learning materials, the materials are displayed on the left and right display screens as configured in the data setting unit, and the complete learning materials can be seen only when the left and right eyes work together; meanwhile the eyeball tracking module sends the eye movement trajectory to the data analysis unit; the amblyopic eye is trained by the monocular vision enhancement unit of the training module, and after the vision of the amblyopic eye improves, the user enters the binocular vision training unit and trains binocular vision in the course of learning;
if binocular vision enhancement training is performed, the setting by the data setting unit comprises the following steps:
step B.1: identifying the graphic parts in all document images in the data storage unit using a trained two-class neural network model that distinguishes text from graphics in a picture, recording the top-left and bottom-right coordinate points of each graphic with the top-left corner of the picture as the origin, and storing the coordinate values at the output address of the data setting unit for output to the left and right display screens;
step B.2: copying all document images in the data storage unit and, in the copies, covering the graphic parts located in step B.1 with white pixel blocks and saving the result;
step B.3: horizontally segmenting the images whose graphic parts are covered by white pixel blocks, splitting each image into lines using the blank gaps that the spaces between text lines produce in the horizontal projection, recording the top-left and bottom-right coordinates of each line with the top-left corner of the image as the origin, and storing the coordinate values of all odd lines and of all even lines separately at the output address of the data setting unit for output to the left and right display screens respectively;
when binocular vision enhancement training is performed, the training module obtains the configuration of the data setting unit, and the training comprises the following steps:
step C.1: when the user opens the learning material, reading the coordinate values of step B.1 and step B.3 from the data setting unit through their address values;
step C.2: displaying the copied document image content corresponding to the odd-line coordinate values of step B.3 on one display screen of the VR glasses at a preset scale, and the copied document image content corresponding to the even-line coordinate values on the other display screen at a preset scale;
step C.3: placing the original document image content corresponding to the graphic coordinate values identified in step B.1 beside the copied document image content on both the left and the right display screen, offset by a distance greater than 0.
2. The VR technology based vision training learning system of claim 1, wherein: the eye tracking module comprises:
the eyeball identification unit is used for capturing eyeballs, acquiring identification data and submitting the identification data to the data module for matching when a user wears VR glasses and watches a display screen;
and the eyeball tracking unit is used for tracking the eyeball motion track after the user passes the identification and sending the eyeball motion track to the data module.
3. The VR technology based vision training learning system of claim 1, wherein: the data module comprises:
the data storage unit is used for downloading and storing, from the cloud server, the electronic textbooks and extracurricular books corresponding to the user;
the data setting unit is used for displaying the data stored by the data storage unit on a display screen of the VR glasses in a preset mode; the display contents of the left display screen and the right display screen of the VR glasses are different;
and the data analysis unit is used for analyzing the data of the eyeball tracking module and the training module in real time so as to adaptively control the training difficulty and generate the region of interest map.
4. The VR technology based vision training learning system of claim 3, wherein: the data storage unit is further connected with a collecting unit, and the collecting unit is used for enabling a user to operate the VR glasses to enter a live-action mode, obtain pictures through a camera and a handle of the VR glasses and store the pictures to the data storage unit.
5. The VR technology based vision training learning system of claim 4, wherein: the acquisition unit preprocesses each acquired image, the preprocessing comprising the following steps:
step A.1: taking a background image for the current picture, differencing it with the current picture, and keeping the foreground image;
step A.2: performing image correction on the foreground image, the correction comprising tilt correction, shadow removal and exposure adjustment.
6. The VR technology based vision training learning system of claim 1, wherein:
if monocular vision enhancement training is performed, the data setting unit makes the display screen corresponding to the user's amblyopic eye show the training content while the other display screen shows a background, so that all training operations are completed by the amblyopic eye; if the user selects training material from the data storage unit, the scaling of the electronic textbooks and extracurricular books shown on the display screen corresponding to the amblyopic eye is set according to the user's vision, and the other display screen continues to show the background.
7. The VR technology based vision training learning system of claim 1, wherein: the analysis method of the data analysis unit comprises the steps of analyzing data fed back by the training module, analyzing data fed back by the eyeball tracking module and generating a region-of-interest map;
the analysis of the data fed back by the training module comprises the following steps:
step D.1.1: recording the time of each click during training, and increasing the rotation speed used by the training module if click accuracy has risen and reaction time has shortened compared with the previous training; repeating step D.1.1 until the training module reaches the maximum rotation speed, then proceeding to the next step;
step D.1.2: if the user can still operate normally at the maximum rotation speed, increasing the spatial frequency of the CAM grating while keeping the maximum speed;
step D.1.3: if the user cannot operate normally after the spatial frequency is increased, keeping the current CAM grating spatial frequency and reducing the rotation speed until a speed suited to the user is found, then returning to step D.1.1;
in the analysis of the data fed back by the eyeball tracking module, the eyeball tracking module sends the eye movement trajectory to the data analysis unit in real time, the data analysis unit compares the received trajectory data with a preset trajectory, and if the deviation between the user's eye movement trajectory and the preset trajectory exceeds a threshold, a light spot is flashed on the display screen of the VR glasses to remind the user;
the generating of the region of interest map comprises the following steps:
step D.2.1: the training module sends the eyeball motion track recorded by the eyeball tracking module to the data analysis unit;
step D.2.2: the data analysis unit calculates, from the eye movement trajectory data, the area of the display screen where the trajectory repeats most;
step D.2.3: and taking the current area as the area of interest of the user.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911411287.6A 2019-12-31 2019-12-31 Vision training learning system based on VR technique

Publications (2)

Publication Number Publication Date
CN111202663A (en) 2020-05-29
CN111202663B (en) 2022-12-27

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant