CN111202663A - Vision training learning system based on VR technique - Google Patents

Vision training learning system based on VR technique

Info

Publication number
CN111202663A
CN111202663A (application CN201911411287.6A)
Authority
CN
China
Prior art keywords
training
data
module
display screen
user
Prior art date
Legal status
Pending
Application number
CN201911411287.6A
Other languages
Chinese (zh)
Inventor
郑雅羽
林斯霞
寇喜超
朱威
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201911411287.6A
Publication of CN111202663A
Legal status: Pending

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H — PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 5/00 — Exercisers for the eyes
    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 — Electrically-operated educational appliances
    • G09B 5/02 — Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Abstract

The invention relates to a vision training and learning system based on VR technology. The system works with VR glasses and a VR handle and comprises an eyeball tracking module, a training module and a data module. The VR glasses present separate views to the two eyes; after the user logs in and selects a training module, the eyeball tracking module tracks the eye movement trajectory while the training module records training data. Both trajectory data and training data are sent to the data module in real time, and the data module analyzes them to control the training difficulty, so that the modules cooperate to provide visual training to the user. The invention avoids the vision damage to the dominant eye, and the adverse psychological effects, caused by covering or suppressing the dominant eye; the patient is not disturbed during training and can concentrate better; logging in by eye recognition removes the need to type a password and builds a dedicated per-user database; and the user can perform visual training while learning, solving the prior-art problem that visual training does not involve learning.

Description

Vision training learning system based on VR technique
Technical Field
The invention relates to the technical field of electric digital data processing, and in particular to a vision training and learning system based on VR technology that can help amblyopia patients overcome learning difficulties.
Background
In April 1996, the national conference of the Chinese Ophthalmological Society's group for the prevention and treatment of amblyopia and strabismus in children adopted a definition of amblyopia: any eye with no obvious organic lesion, distance vision of 0.8 or less that cannot be corrected, and mainly functional causes is classified as amblyopic. The incidence of amblyopia in China is about 2.8 percent, i.e. roughly 2 to 3 amblyopia patients per 100 children. Amblyopia seriously harms patients' life, study, work and psychology, and its influence on children is especially profound.
The existing technology aims to improve the vision of a patient's amblyopic eye and restore binocular visual function, which includes eye-movement control, simultaneous vision, fusion and stereoscopic vision. If binocular visual function is impaired, the patient cannot control eye movements such as following, saccades and fixation, visual memory suffers, hand-eye coordination is difficult, and the patient lacks the three grades of visual function of a normal person. Amblyopia treatment is a long process; the patients are mainly children at the knowledge-intake stage, so amblyopia makes learning laborious, and poor learning performance in turn causes psychological harm.
The prior art offers some solutions, but none incorporates learning aids into amblyopia treatment.
The patent CN108478401A, "Amblyopia training and rehabilitation system and method based on VR technology", plays images through VR glasses, suppresses the dominant eye by covering or fogging the left or right screen, and trains the amblyopic eye with films played in a set motion pattern. Although it uses VR technology to separate the two eyes, the training method is limited and does not involve learning.
The patent CN1579319 (CN100348147C), "Diagnosis and treatment instrument for infantile amblyopia and strabismus", uses multimedia technology to integrate diagnosis, treatment and medical records; its treatment functions include CAM, red-light flicker, fine-vision training, saccade and following movements, and fusion and stereoscopic vision training. It improves the monocular vision of the amblyopic eye and restores binocular function, but it does not involve VR technology and has no learning content in the training process.
Existing visual training mostly separates the two eyes with 3D glasses or red-blue glasses; VR has not been genuinely applied to amblyopia treatment.
Disclosure of Invention
The invention addresses the problems in the prior art and provides an optimized vision training and learning system based on VR technology.
The technical scheme adopted by the invention is a vision training and learning system based on VR technology, used together with VR glasses and a VR handle;
the system comprises:
the eyeball tracking module is used for logging in a user and continuously tracking the movement track of the eyeballs;
the training module is used for outputting visual training to the user and acquiring training data fed back by the user;
and the data module is used for storing data required by the system, acquiring information transmitted by the eyeball tracking module, analyzing and acquiring training data of the training module and setting output content of the training module.
Preferably, the eye tracking module comprises:
the eyeball identification unit is used for capturing eyeballs, acquiring identification data and submitting the identification data to the data module for matching when a user wears VR glasses and watches a display screen;
and the eyeball tracking unit is used for tracking the eyeball motion track after the user passes the identification and sending the eyeball motion track to the data module.
Preferably, the training module comprises:
the monocular vision enhancement unit is used for presenting preset data on the left display screen or the right display screen and carrying out visual training on a monocular, and comprises training content output and feedback information acquisition;
and the binocular vision training unit is used for presenting preset data on the left display screen and the right display screen and performing vision training on two eyes.
Preferably, the monocular vision enhancing unit includes a visual stimulation module, a saccade module, and a follow-up module.
Preferably, the data module comprises:
the data storage unit is used for downloading and storing the electronic version textbook and the out-of-class book of the corresponding user from the cloud server;
the data setting unit is used for displaying the data stored by the data storage module on a display screen of the VR glasses in a preset mode; the left and right display screens of the VR glasses show different content;
and the data analysis unit is used for analyzing the data of the eyeball tracking module and the training module in real time so as to adaptively control the training difficulty and generate the region of interest map.
Preferably, the data storage unit is further connected with a collection unit for a user to operate the VR glasses to enter a live-action mode, obtain pictures through a camera and a handle of the VR glasses, and store the pictures in the data storage module.
Preferably, the acquisition unit acquires the picture and then performs preprocessing, wherein the preprocessing comprises the following steps:
step A.1: taking a background image of the current picture, making a difference with the current picture, and reserving a foreground image;
step A.2: and carrying out image correction on the foreground image, wherein the image correction comprises inclination correction, shadow removal and exposure adjustment.
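The background-subtraction step A.1 can be sketched as follows; this is an illustrative example only, with the pixel representation (grayscale nested lists) and the difference threshold chosen as assumptions, not values from the patent.

```python
def extract_foreground(current, background, threshold=30):
    """Keep pixels of `current` that differ from `background` by more
    than `threshold`; everything else becomes white (255)."""
    h, w = len(current), len(current[0])
    fg = [[255] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if abs(current[y][x] - background[y][x]) > threshold:
                fg[y][x] = current[y][x]
    return fg

# A dark glyph on a light page survives; the uniform page is removed.
current = [[200, 40], [200, 200]]
background = [[200, 200], [200, 200]]
print(extract_foreground(current, background))  # → [[255, 40], [255, 255]]
```

Step A.2 (tilt correction, shadow removal, exposure adjustment) would then operate on the retained foreground image.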
Preferably, for monocular vision enhancement training, the data setting unit makes the display screen corresponding to the user's amblyopic eye show the training content while the other screen shows only a background, so that all training operations are completed by the amblyopic eye; if the user selects training material from the data storage unit, the zoom ratio of the electronic textbook or extracurricular book shown on that screen is set according to the user's vision, while the other screen continues to show the background;
if the binocular vision enhancement training is performed, the setting of the data setting unit comprises the following steps:
step B.1: identifying the graph parts in all document images in the data storage unit by using a trained two-classification neural network model for distinguishing characters and graphs in the pictures, recording the coordinate points of the upper left corner and the lower right corner of each graph by taking the upper left corner of the picture as an origin, storing the coordinate values into the output address of the data setting unit, and outputting the coordinate values as a left display screen and a right display screen;
step B.2: copying all document images in the data storage unit, and covering and storing the graph part corresponding to the step B.1 in the copied document images by using white pixel blocks;
step B.3: horizontally segment the image whose graphics were covered by white pixel blocks: using the blank gaps between character lines, which produce gaps in the horizontal projection, divide the image into lines; record the upper-left and lower-right coordinates of each line with the upper-left corner of the image as the origin; and store the coordinate values of all odd lines and even lines respectively into the output address of the data setting module, for output to the left and right display screens.
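Step B.3's horizontal-projection line segmentation can be sketched as below. The 0/1 pixel convention (1 = dark text pixel) and the function names are assumptions for illustration; a real system would operate on binarized document images.

```python
def segment_lines(binary):
    """Split a binary page into text-line boxes (x0, y0, x1, y1),
    origin at the upper-left corner, using the horizontal projection."""
    width = len(binary[0])
    profile = [sum(row) for row in binary]  # dark-pixel count per row
    boxes, start = [], None
    for y, count in enumerate(profile):
        if count > 0 and start is None:
            start = y                        # a text line begins
        elif count == 0 and start is not None:
            boxes.append((0, start, width - 1, y - 1))
            start = None
    if start is not None:
        boxes.append((0, start, width - 1, len(binary) - 1))
    return boxes

def split_odd_even(boxes):
    """Odd lines (1st, 3rd, ...) for one screen, even for the other."""
    return boxes[0::2], boxes[1::2]

page = [
    [0, 1, 1, 0],   # line 1
    [0, 0, 0, 0],   # inter-line gap
    [1, 1, 0, 0],   # line 2
]
print(split_odd_even(segment_lines(page)))  # → ([(0, 0, 3, 0)], [(0, 2, 3, 2)])
```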
Preferably, when performing binocular vision enhancement training, the training module acquires the configuration of the data setting unit, and the training includes the following steps:
step C.1: when the user opens the learning material, reading the coordinate values of the step B.1 and the step B.3 from the data setting unit through the address value;
step C.2: b.3, displaying the copied document image content corresponding to the odd row coordinate value on a left display screen or a right display screen of VR glasses according to a preset scaling, and displaying the copied document image content corresponding to the even row coordinate value on the left display screen or the right display screen of the VR glasses according to the preset scaling;
step C.3: and B.1, the original document image content corresponding to the graphic document coordinate value identified in the step B.1 is arranged beside the copy document image content corresponding to the odd line coordinate value or the even line coordinate value of the left display screen and the right display screen in an offset manner, and the offset distance is greater than 0.
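Steps C.2 and C.3 can be sketched as the assembly below. Data shapes and names are assumptions; a real system would blit image regions onto the two displays rather than collect tuples.

```python
def layout_screens(odd_boxes, even_boxes, graphic_boxes, offset=10):
    """Odd text lines go to one screen, even lines to the other (C.2);
    the original graphics are placed beside the text on BOTH screens,
    shifted sideways by `offset` > 0 (C.3)."""
    assert offset > 0, "step C.3 requires an offset greater than 0"
    left = [("text", b) for b in odd_boxes]
    right = [("text", b) for b in even_boxes]
    for (x0, y0, x1, y1) in graphic_boxes:
        shifted = (x0 + offset, y0, x1 + offset, y1)
        left.append(("graphic", shifted))
        right.append(("graphic", shifted))
    return left, right
```

Because each eye receives only half of the text lines, the complete material is perceived only when both eyes work together, which is the point of the binocular training.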
Preferably, the analysis method of the data analysis unit comprises data analysis fed back by the training module, data analysis fed back by the eyeball tracking module and generation of a region-of-interest map;
the data analysis of the training module feedback comprises the following steps:
step D.1.1: record the time of each training click; if, compared with the previous round, the click accuracy rises and the reaction time shortens, increase the rotation speed of the CAM grating; repeat step D.1.1 until the maximum rotation speed is reached, then go to the next step;
step D.1.2: if the user still operates normally at the highest rotation speed, increase the spatial frequency of the CAM grating while keeping the highest speed;
step D.1.3: if the user cannot operate normally after the spatial frequency is increased, keep the current spatial frequency and reduce the rotation speed until a value suited to the user is found, then return to step D.1.1;
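The adaptive loop D.1.1-D.1.3 can be sketched as a small controller; the step sizes and the boolean `coping` signal are assumptions, and only the 6-rpm upper bound comes from the CAM description later in the document.

```python
MAX_SPEED = 6  # revolutions per minute (upper bound of the CAM grating)

def adapt(state, accuracy, reaction, prev_accuracy, prev_reaction, coping):
    """state = {'speed': rpm, 'freq': spatial-frequency level}."""
    improved = accuracy > prev_accuracy and reaction < prev_reaction
    if state["speed"] < MAX_SPEED:
        if improved:
            state["speed"] += 1   # D.1.1: raise rotation speed
    elif coping:
        state["freq"] += 1        # D.1.2: raise spatial frequency at top speed
    else:
        state["speed"] -= 1       # D.1.3: keep frequency, lower speed,
    return state                  # then re-enter D.1.1
```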
in the data analysis fed back by the eyeball tracking module, the eyeball tracking module sends the eye movement trajectory to the data analysis unit in real time; the unit compares the received trajectory with a preset trajectory, and if the deviation exceeds a threshold, a light spot flickers on the display screen of the VR glasses to remind the user;
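The gaze-deviation check above can be sketched as a point-wise comparison; Euclidean distance and the pixel threshold are assumptions, since the patent does not specify the distance metric or threshold value.

```python
import math

def needs_cue(gaze, preset, threshold=50.0):
    """gaze/preset: equal-length lists of (x, y) screen points.
    Returns True when any sample strays beyond `threshold`, which
    would trigger the flickering light spot on the display."""
    for (gx, gy), (px, py) in zip(gaze, preset):
        if math.hypot(gx - px, gy - py) > threshold:
            return True
    return False
```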
the generating of the region of interest map comprises the following steps:
step D.2.1: the training module sends the eyeball motion track recorded by the eyeball tracking module to the data analysis unit;
step D.2.2: the data analysis unit calculates an area with the highest repetition rate of the eyeball motion track data in the display screen according to the eyeball motion track data;
step D.2.3: and taking the current area as the area of interest of the user.
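Steps D.2.1-D.2.3 can be sketched by binning gaze samples into a coarse grid over the display and taking the most-visited cell; the grid-cell size is an assumption.

```python
from collections import Counter

def region_of_interest(points, cell=100):
    """points: (x, y) gaze samples. Returns the densest cell's
    upper-left corner and its hit count."""
    hits = Counter((x // cell * cell, y // cell * cell) for x, y in points)
    corner, count = hits.most_common(1)[0]
    return corner, count

# Two of three samples fall in the cell anchored at (0, 0).
print(region_of_interest([(10, 10), (20, 30), (150, 150)]))  # → ((0, 0), 2)
```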
The invention provides an optimized vision training and learning system based on VR technology. Working with VR glasses and a VR handle, the system integrates an eyeball tracking module for logging in the user and continuously tracking eye movement, a training module for delivering visual training and collecting the user's feedback data, and a data module for storing system data, receiving and analyzing information from the eyeball tracking module, collecting the training module's data and setting its output. The VR glasses present separate views to the two eyes; the user logs in through the eyeball tracking module and selects a training module; the eyeball tracking module then tracks the eye movement trajectory while the training module records training data; both are sent to the data module in real time, which analyzes them to control the training difficulty, so that the modules cooperate to provide visual training to the user.
By separating the two eyes with VR technology, the invention avoids the vision damage to the dominant eye, and the adverse psychological effect of wearing an eye patch, caused by covering or suppressing the dominant eye in traditional methods. Thanks to the immersion of VR and the noise-reducing earphones it carries, the patient sees only the content of the VR display during training, is not disturbed by others, and can train more attentively. Logging in through the eyeball tracking module removes the inconvenience of typing a password and builds a dedicated database for each user. Through the cooperation of the eyeball tracking, data and training modules, the user performs visual training while learning, solving the prior-art problem that visual training does not involve learning.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a schematic diagram of an example configuration of the system of the present invention;
in fig. 1 and 2, arrows indicate the direction of data transmission.
Detailed Description
The present invention is described in further detail with reference to the following examples, but the scope of the present invention is not limited thereto.
The invention relates to a vision training learning system based on VR technology, which is matched with VR glasses and a VR handle;
the eyeball tracking module is used for logging in a user and continuously tracking the movement track of the eyeballs;
the training module is used for outputting visual training to the user and acquiring training data fed back by the user;
and the data module is used for storing data required by the system, acquiring information transmitted by the eyeball tracking module, analyzing and acquiring training data of the training module and setting output content of the training module.
In the invention, the VR all-in-one headset is the most convenient VR device for users: it has a powerful built-in chip and an ultra-high-definition screen, works standalone without external equipment, and uses 5G for bidirectional network transmission, reducing latency and increasing communication capacity.
According to the invention, the VR all-in-one headset comprises VR glasses and a VR handle. Inside the glasses are the left and right display screens, an infrared LED lamp, and the eyeball tracking, data and training modules; outside are a front camera and earphones. The eyeball tracking module is connected to the data and training modules, and the front camera is connected to the data module. The earphones reduce the interference of external noise on training, and the infrared LED lamp beside the eyeball tracking module overcomes the limitation of eyeball recognition in dark environments.
In the invention, the VR handle can point like a laser pen in 3D and is operated by keys: a confirmation key and a switch key on the face, and a volume key on the side for adjusting volume. The switch key toggles the handle between a writable and a non-writable laser pen; in writable mode, the user can mark up and work out calculations on the image.
In the invention, after the headset starts it offers two display modes, a live-action mode and a virtual mode. It enters the virtual mode, and thus the training module, by default; the user can select the corresponding control with the handle to enter the live-action mode, take photos with the handle's confirmation key, and store them in the data module.
The system comprises an eyeball tracking module for logging in the user and continuously tracking eye movement, a training module for delivering visual training and collecting the user's feedback data, and a data module for storing system data, receiving and analyzing information from the eyeball tracking module, collecting the training module's data and setting its output. The VR glasses present separate views to the two eyes; the user logs in through the eyeball tracking module and selects a training module; the eyeball tracking module then tracks the eye movement trajectory while the training module records training data; both are sent to the data module in real time, which analyzes them to control the training difficulty, so that the modules cooperate to provide visual training to the user.
The eye tracking module comprises:
the eyeball identification unit is used for capturing eyeballs, acquiring identification data and submitting the identification data to the data module for matching when a user wears VR glasses and watches a display screen;
and the eyeball tracking unit is used for tracking the eyeball motion track after the user passes the identification and sending the eyeball motion track to the data module.
In the invention, for each training session the user only needs to put on the VR glasses and look at the display screen; the eyeball recognition unit captures the eyeballs and matches them against the existing information in the data module to log in, after which the eyeball tracking unit tracks the eyes.
In the invention, during initialization the user's personal information, including name, region, education level and left- and right-eye vision, is entered at a client such as a computer or mobile phone, and the user's eye image is collected; the vision record must be updated after each eye examination.
The training module comprises:
the monocular vision enhancement unit is used for presenting preset data on the left display screen or the right display screen and carrying out visual training on a monocular, and comprises training content output and feedback information acquisition;
the monocular vision enhancing unit includes a visual stimulation module, a saccade module, and a follow-up module.
And the binocular vision training unit is used for presenting preset data on the left display screen and the right display screen and performing vision training on two eyes.
The data module includes:
the data storage unit is used for downloading and storing the electronic version textbook and the out-of-class book of the corresponding user from the cloud server;
the data storage unit is further connected with a collection unit and used for enabling a user to operate the VR glasses to enter a live-action mode, obtain pictures through a camera and a handle of the VR glasses and store the pictures to the data storage module.
The acquisition unit preprocesses each captured picture; the preprocessing comprises the following steps:
step A.1: taking a background image of the current picture, making a difference with the current picture, and reserving a foreground image;
step A.2: and carrying out image correction on the foreground image, wherein the image correction comprises inclination correction, shadow removal and exposure adjustment.
The data setting unit is used for displaying the data stored by the data storage module on a display screen of the VR glasses in a preset mode; the left and right display screens of the VR glasses show different content;
and the data analysis unit is used for analyzing the data of the eyeball tracking module and the training module in real time so as to adaptively control the training difficulty and generate the region of interest map.
In the invention, the server-side data storage module holds the electronic textbooks and extracurricular books of every subject, grade and region, generally stored as pictures; because this corpus is large, the complete data is kept in the server database, while the data storage module of the VR glasses keeps only the few textbooks and books matching the user's region and education level, downloading updates from the cloud when needed.
In the invention, the acquisition unit mainly serves the exercise section of the data storage module. The user selects the corresponding control to switch the VR glasses into live-action mode; the front camera then shows its live video on the display screens. The user places the day's schoolwork, or anything else to be displayed in the glasses, in front of the camera, adjusts the angle, presses the handle's confirmation key to capture the current video frame, and stores it as a picture in the exercise section of the data storage module.
For monocular vision enhancement training, the data setting unit makes the display screen corresponding to the user's amblyopic eye show the training content while the other screen shows only a background, so that all training operations are completed by the amblyopic eye; if the user selects training material from the data storage unit, the zoom ratio of the electronic textbook or extracurricular book shown on that screen is set according to the user's vision, while the other screen continues to show the background;
if the binocular vision enhancement training is performed, the setting of the data setting unit comprises the following steps:
step B.1: identifying the graph parts in all document images in the data storage unit by using a trained two-classification neural network model for distinguishing characters and graphs in the pictures, recording the coordinate points of the upper left corner and the lower right corner of each graph by taking the upper left corner of the picture as an origin, storing the coordinate values into the output address of the data setting unit, and outputting the coordinate values as a left display screen and a right display screen;
step B.2: copying all document images in the data storage unit, and covering and storing the graph part corresponding to the step B.1 in the copied document images by using white pixel blocks;
step B.3: horizontally segment the image whose graphics were covered by white pixel blocks: using the blank gaps between character lines, which produce gaps in the horizontal projection, divide the image into lines; record the upper-left and lower-right coordinates of each line with the upper-left corner of the image as the origin; and store the coordinate values of all odd lines and even lines respectively into the output address of the data setting module, for output to the left and right display screens.
For binocular vision enhancement training, the training module obtains the configuration of the data setting unit, and the training comprises the following steps:
step C.1: when the user opens the learning material, reading the coordinate values of the step B.1 and the step B.3 from the data setting unit through the address value;
step C.2: b.3, displaying the copied document image content corresponding to the odd row coordinate value on a left display screen or a right display screen of VR glasses according to a preset scaling, and displaying the copied document image content corresponding to the even row coordinate value on the left display screen or the right display screen of the VR glasses according to the preset scaling;
step C.3: and B.1, the original document image content corresponding to the graphic document coordinate value identified in the step B.1 is arranged beside the copy document image content corresponding to the odd line coordinate value or the even line coordinate value of the left display screen and the right display screen in an offset manner, and the offset distance is greater than 0.
In the invention, when the monocular vision enhancement module is selected, training begins and training data is recorded; the eyeball tracking module tracks the eye movement trajectory, the training data and trajectory data are sent to the data analysis module in real time, and the analysis result controls the difficulty of the monocular vision enhancement module. When the binocular vision training module is selected, the user enters the data storage module to choose learning material, which is shown on the left and right screens as configured by the data setting module, so that the complete material can be seen only with both eyes together; meanwhile the eyeball tracking module sends the eye movement trajectory to the data analysis module. Through this cooperation, the amblyopic eye is first trained by the monocular vision enhancement module, and once its vision has improved somewhat the user moves to the binocular vision training module and trains binocular vision while learning.
In the invention, for monocular vision enhancement training the display screen for the amblyopic eye shows the detailed training content, the screen for the dominant eye shows only the background, and all training operations are completed by the amblyopic eye; if the user enters the data storage module to select learning material, the zoom ratio of the document image shown to the amblyopic eye is set according to the user's vision, while the dominant eye's screen still shows the background. Monocular vision enhancement follows the MFBF (monocular fixation in a binocular field) principle: the non-amblyopic eye sees the peripheral view rather than details, while the amblyopic eye sees the central, detailed view. In MFBF training the dominant eye is suppressed only temporarily and only centrally, its peripheral region keeps working, so the amblyopic eye's vision improves while simultaneous vision is also strengthened.
In the present invention, the visual stimulation module, the saccade module, and the follow-up module of the monocular vision enhancement unit are configured as in the following examples.
In the invention, the visual stimulation module is set as follows:
if the training is carried out for the first time, the CAM rotation speed and grating spatial frequency are set to the lowest level; otherwise, training resumes with the rotation speed and grating spatial frequency recorded at the end of the previous session;
before each session the user is given one minute to adapt to the rotating grating pattern; after that minute, N red, blue and/or green points appear at random on the rotating pattern, and the user points at them with the VR handle and clicks by pressing the handle's confirmation key; six screen refreshes make up one cycle, a refresh being complete once the user has clicked all of its points;
the user is required to click the point that is flickering; once clicked, the point changes to another of the red/blue/green colours, and after all points have been clicked the display refreshes and generates N new random points;
the red, blue and green points thus appear six times in one cycle, the colours changing each time without a colour change repeating;
the time of each of the user's clicks is recorded and transmitted to the data module for analysis.
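The refresh-and-click bookkeeping described above can be sketched as follows; the class name, the way timestamps are supplied, and the per-refresh point count are illustrative assumptions, not details fixed by the invention:

```python
class VisualStimulationCycle:
    """One cycle = six screen refreshes; a refresh completes once every
    point shown on it has been clicked.  Click times are logged so the
    data module can later analyse reaction speed per colour."""

    REFRESHES_PER_CYCLE = 6

    def __init__(self, n_points):
        self.n_points = n_points          # N points per refresh
        self.refresh = 0                  # completed refreshes so far
        self.clicked_in_refresh = 0
        self.log = []                     # (refresh index, colour, timestamp)

    @property
    def finished(self):
        return self.refresh >= self.REFRESHES_PER_CYCLE

    def click(self, colour, timestamp):
        """Record one successful click on the flickering point."""
        if self.finished:
            raise RuntimeError("cycle already complete")
        self.log.append((self.refresh, colour, timestamp))
        self.clicked_in_refresh += 1
        if self.clicked_in_refresh == self.n_points:
            self.refresh += 1             # all N clicked -> screen refreshes
            self.clicked_in_refresh = 0
```

The full click log, including timestamps, is what would be forwarded to the data module for per-colour reaction-time analysis.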
In the invention, the CAM follows the grating-therapy principle, also called visual biostimulation: black-and-white grating plates of different frequencies serve as the stimulation source, and because the plates rotate, the amblyopic eye is stimulated in all orientations by gratings of different spatial frequencies and contrasts, so that most of the cortical cell receptors serving the amblyopic eye are stimulated and vision improves. The grating uses high-contrast alternating black and white stripes, generated at N different spatial frequencies (N > 1). Following medical practice, the grating rotates at 1 to 6 revolutions per minute, the screen refreshes at 3 to 18 frames per second, and the rotation per frame is less than 2 degrees.
In the invention, the minimum CAM refresh rate is accordingly set to 3 frames per second and the maximum to 18 frames per second, with the speed varied uniformly over M levels; the faster the rotation, the stronger the therapeutic effect.
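The figures above are mutually consistent: at just under 2 degrees per frame, 3 frames per second corresponds to about 1 revolution per minute and 18 frames per second to about 6. A sketch of the uniform M-level speed mapping (function names are illustrative):

```python
def cam_fps_for_level(level, m_levels, min_fps=3, max_fps=18):
    """Map a difficulty level 0..m_levels-1 uniformly onto the CAM
    refresh range of 3..18 frames per second."""
    if m_levels < 2:
        return float(min_fps)
    return min_fps + level * (max_fps - min_fps) / (m_levels - 1)

def cam_rpm(fps, deg_per_frame=2):
    """Revolutions per minute implied by a refresh rate, at the stated
    upper bound of 2 degrees of rotation per frame."""
    return fps * deg_per_frame * 60 / 360
```

With these defaults the lowest level yields 1 rpm and the highest 6 rpm, matching the 1-6 revolutions-per-minute range given for the grating.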
In the invention, the glance module is set as follows:
n random red, blue and/or green points appear on the rotating grating pattern, each point appearing on the display screen at least once; the user is prompted which colour of point to click; if a point of the prompted colour is clicked it disappears, while if no point is clicked, or a point of the wrong colour is clicked, the point flashes and an error sound effect alerts the user.
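A minimal sketch of the saccade-module behaviour just described; the screen size, the random generator, and the guarantee that every colour is represented are assumptions drawn from "each point appearing at least once":

```python
import random

COLOURS = ("red", "blue", "green")

def spawn_points(n, screen=(1920, 1080), rng=None):
    """Scatter n coloured points at random positions, guaranteeing
    each colour appears at least once."""
    rng = rng or random.Random(0)
    pos = lambda: (rng.randrange(screen[0]), rng.randrange(screen[1]))
    points = [(c, pos()) for c in COLOURS]            # one of each colour
    points += [(rng.choice(COLOURS), pos()) for _ in range(n - len(COLOURS))]
    rng.shuffle(points)
    return points

def click_feedback(clicked_colour, prompted_colour):
    """Point disappears on a correct click; otherwise flash + error sound."""
    return "disappear" if clicked_colour == prompted_colour else "flash_and_error"
```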
In the invention, the following module is set as follows:
a red, blue or green point appears on a blank background image, the colours appearing at random while cycling through all three; the point moves along a trajectory set by the system, and the user's eyes are required to track the moving point.
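The patent does not specify the "system-set movement track"; as one plausible example, a smooth Lissajous-style path in normalised screen coordinates, with the three-colour cycle as described:

```python
import math

def point_position(t, period=8.0, amplitude=0.4):
    """Position (x, y) of the moving point in normalised [0, 1] screen
    coordinates at time t - one assumed example of a smooth,
    system-defined trajectory (period and amplitude are arbitrary)."""
    phase = 2 * math.pi * t / period
    return (0.5 + amplitude * math.cos(phase),
            0.5 + amplitude * math.sin(2 * phase) / 2)

def point_colour(cycle_index):
    """Colours cycle red -> blue -> green, as described above."""
    return ("red", "blue", "green")[cycle_index % 3]
```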
In the present invention, eye movements fall into three types: following, saccades, and fixation. Following means the eye smoothly tracks a moving object; a saccade is the jump of the eye's fixation from one place to another; fixation is the eye holding focus on one point. In flicker stimulation, clicking a flickering red, blue or green point requires fixating on it first, which exercises the fixation ability of eye movement. In the saccade training above, the user must scan across the grating pattern to know which points have and have not been clicked; because the points appear at random all over the pattern, the user must look everywhere, up, down, left and right, which also trains peripheral perception. In the follow-up training, the user must track the movement of the point, exercising the eye's following ability; when the user cannot keep up with the moving point, the point flickers to stimulate the cone cells of the eye.
In the invention, whether a far or near target is viewed, the lines of sight always intersect on the target and the image forms at the fovea of the macula. The macular cone cells mainly capture colour and comprise three kinds, blue, green and red cones, which are most sensitive to blue, green and red light respectively. Flicker raises visual excitation and conduction efficiency: by clicking the flickering red, blue and green points, the flashing red, blue and green light stimulates the amblyopic eye, and the macular cone cells produce thermal and biochemical effects that improve local blood circulation and metabolism and enhance cone-cell vitality, thereby improving vision.
In the invention, the times the user takes to click the red, blue and green points are recorded separately, in order to observe from the data whether the user is insensitive to any particular colour and whether colour vision is abnormal; click times are recorded over many sessions to reduce chance effects in the data. If the recorded data show a large difference between the times for the three colours, the user is informed promptly and advised to go to a hospital for examination.
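The per-colour screening just described can be sketched as below; the 1.5x ratio used to decide that a difference is "large" is an assumed threshold, since the patent leaves the criterion open:

```python
from statistics import mean

def colour_sensitivity(click_log, ratio=1.5):
    """click_log: (colour, reaction_seconds) pairs gathered over many
    sessions.  Returns per-colour mean reaction times and the colours
    whose mean exceeds the fastest colour's mean by more than `ratio`
    (an assumed cut-off) - candidates for a colour-vision check."""
    per_colour = {}
    for colour, rt in click_log:
        per_colour.setdefault(colour, []).append(rt)
    means = {c: mean(v) for c, v in per_colour.items()}
    fastest = min(means.values())
    flagged = sorted(c for c, m in means.items() if m > ratio * fastest)
    return means, flagged
```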
In the present invention, the scaling of the left and right display screens in step C.2 is generally the same.
In the invention, the offset distance in step C.3 should not be too large, and needs to be adjusted according to the scaling.
In the invention, the user can see the whole picture only by using both eyes at the same time, which exercises the coordination of the two eyes; shifting the graphics exercises the user's fusion ability. Fusion not only merges the two retinal images, but also, by reflex, keeps them merged into a single percept when they deviate from their normal positions; the range of retinal image displacement over which the fusion reflex can still operate is called the fusion range.
The analysis method of the data analysis unit comprises the steps of analyzing data fed back by the training module, analyzing data fed back by the eyeball tracking module and generating a region-of-interest map;
the data analysis of the training module feedback comprises the following steps:
step D.1.1: recording the time of each training click; if, compared with the previous session, click accuracy has risen and reaction time has shortened, increasing the rotation speed of the training module; repeating step D.1.1 until the training module reaches the maximum rotation speed, then carrying out the next step;
step D.1.2: if the user can still operate normally at the highest rotation speed, increasing the spatial frequency of the CAM grating while keeping the highest speed;
step D.1.3: if the user cannot operate normally after the spatial frequency is increased, keeping the current CAM grating spatial frequency and reducing the rotation speed until a speed suitable for the user is found, then returning to step D.1.1;
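Steps D.1.1-D.1.3 amount to a small state machine over (speed level, frequency level); a sketch, with the level counts and the boolean inputs as illustrative assumptions:

```python
class CamDifficulty:
    """Steps D.1.1-D.1.3: raise rotation speed while clicks get faster
    and more accurate; at top speed raise spatial frequency; when the
    user can no longer operate normally, keep the frequency and step
    the speed back down.  Level counts are illustrative."""

    def __init__(self, n_speeds=6, n_freqs=4):
        self.speed = 0                    # 0 .. n_speeds-1
        self.freq = 0                     # 0 .. n_freqs-1
        self.n_speeds, self.n_freqs = n_speeds, n_freqs

    def update(self, improved, can_operate):
        if not can_operate:                           # D.1.3: back off speed
            self.speed = max(self.speed - 1, 0)
        elif self.speed < self.n_speeds - 1:          # D.1.1: speed first
            if improved:
                self.speed += 1
        elif improved and self.freq < self.n_freqs - 1:
            self.freq += 1                            # D.1.2: then frequency
```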
in the data analysis fed back by the eyeball tracking module, the eyeball tracking module sends the eye-movement trajectory to the data analysis unit in real time; the data analysis unit compares the received trajectory data with a preset trajectory, and if the deviation between the user's eye-movement trajectory and the preset trajectory exceeds a threshold, a light spot is flashed on the display screen of the VR glasses to remind the user;
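The deviation test itself is a simple distance check; the pixel threshold below is an assumed value, since the patent only says "larger than a threshold":

```python
import math

def should_flash_prompt(gaze_xy, preset_xy, threshold_px=60):
    """Flash a light spot on the VR display when the tracked gaze point
    strays more than `threshold_px` (an assumed value) from the point
    the preset trajectory expects at this instant."""
    return math.dist(gaze_xy, preset_xy) > threshold_px
```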
the generating of the region of interest map comprises the following steps:
step D.2.1: the training module sends the eyeball motion track recorded by the eyeball tracking module to the data analysis unit;
step D.2.2: the data analysis unit calculates an area with the highest repetition rate of the eyeball motion track data in the display screen according to the eyeball motion track data;
step D.2.3: and taking the current area as the area of interest of the user.
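One simple way to realise steps D.2.1-D.2.3 is to bucket gaze samples into a grid over the screen and pick the most-visited cell; the screen and grid sizes are assumptions:

```python
from collections import Counter

def region_of_interest(gaze_points, screen=(1920, 1080), grid=(8, 8)):
    """Bucket (x, y) gaze samples into a grid over the display and
    return the (col, row) of the cell hit most often - one reading of
    'the area with the highest repetition rate of the eye-movement
    trajectory data'."""
    cw, ch = screen[0] / grid[0], screen[1] / grid[1]
    cells = Counter(
        (min(int(x // cw), grid[0] - 1), min(int(y // ch), grid[1] - 1))
        for x, y in gaze_points
    )
    return cells.most_common(1)[0][0]
```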
In the invention, for control of the CAM grating's spatial frequency and rotation speed, the time of each training click is recorded; if click accuracy rises and click times shorten, the rotation speed is raised by one level. If the patient still adapts at the maximum speed, the spatial frequency is raised one level at that maximum speed; if the patient cannot adapt to the increased spatial frequency at the highest speed, the current spatial frequency is kept and the speed is reduced until a value suitable for the patient is found.
In the invention, if the trained patient selects the practice module to do exercises, the repetition rate of the eye-movement trajectory can be used as a reference for the difficulty level of the questions.
The invention, working together with a system of VR glasses and a VR handle, integrates an eyeball tracking module for logging the user in and continuously tracking eye movement, a training module for presenting visual training to the user and collecting the training data the user feeds back, and a data module for storing the data the system needs, receiving and analysing the information from the eyeball tracking module, collecting the training module's data, and setting the training module's output content. The two eyes view separate images through the VR glasses; the user logs into the system via the eyeball tracking module and selects the training module to train; the eyeball tracking module then tracks the eye-movement trajectory while the training module records training data; both are sent to the data module in real time, and the data module controls the training difficulty through its analysis of the data. Through the cooperation of these modules, visual training is delivered to the user.
The invention separates the two eyes' views by VR technology, overcoming the drawback of the traditional method, in which covering and suppressing the dominant eye can damage its vision, and avoiding the adverse psychological effects of wearing an eye patch. Exploiting the immersion of VR and the noise-cancelling earphones it carries, the patient sees only the content on the VR display during training, undisturbed by other users, and can therefore train with full attention. Logging into the system through the eyeball tracking module removes the inconvenience of typing a password and creates a database belonging to each user. Finally, through the cooperation of the eyeball tracking module, data module and training module, the user performs visual training while studying, solving the problem that prior-art visual training does not involve learning.

Claims (10)

1. A vision training learning system based on VR technology, characterized in that: the system is used in cooperation with VR glasses and a VR handle;
the system comprises:
the eyeball tracking module is used for logging in a user and continuously tracking the movement track of the eyeballs;
the training module is used for outputting visual training to the user and acquiring training data fed back by the user;
and the data module is used for storing data required by the system, acquiring information transmitted by the eyeball tracking module, analyzing and acquiring training data of the training module and setting output content of the training module.
2. The VR technology based vision training learning system of claim 1, wherein: the eye tracking module comprises:
the eyeball identification unit is used for capturing eyeballs, acquiring identification data and submitting the identification data to the data module for matching when a user wears VR glasses and watches a display screen;
and the eyeball tracking unit is used for tracking the eyeball motion track after the user passes the identification and sending the eyeball motion track to the data module.
3. The VR technology based vision training learning system of claim 1, wherein: the training module comprises:
the monocular vision enhancement unit is used for presenting preset data on the left or right display screen and performing visual training on a single eye, including training content output and feedback information acquisition;
and the binocular vision training unit is used for presenting preset data on the left display screen and the right display screen and performing vision training on two eyes.
4. The VR technology based vision training learning system of claim 3, wherein: the monocular vision enhancing unit includes a visual stimulation module, a saccade module, and a follow-up module.
5. The VR technology based vision training learning system of claim 1, wherein: the data module includes:
the data storage unit is used for downloading the electronic textbooks and extracurricular books of the corresponding user from the cloud server and storing them;
the data setting unit is used for displaying the data stored by the data storage unit on the display screens of the VR glasses in a preset mode; the display contents of the left and right display screens of the VR glasses are different;
and the data analysis unit is used for analyzing the data of the eyeball tracking module and the training module in real time so as to adaptively control the training difficulty and generate the region of interest map.
6. The VR technology based vision training learning system of claim 5, wherein: the data storage unit is further connected with an acquisition unit, used for allowing the user to operate the VR glasses to enter a live-action mode, obtain pictures through the camera of the VR glasses and the handle, and store the pictures in the data storage unit.
7. The VR technology based vision training learning system of claim 6, wherein: the acquisition unit preprocesses each acquired image, the preprocessing comprising the following steps:
step A.1: taking a background image of the current scene, subtracting it from the current picture, and retaining the foreground image;
step A.2: performing image correction on the foreground image, the correction comprising tilt correction, shadow removal and exposure adjustment.
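A minimal sketch of the step A.1 background subtraction on grey-level images; the difference threshold is an assumed value, and step A.2's corrections (deskew, shadow removal, exposure) would follow as separate passes:

```python
import numpy as np

def extract_foreground(picture, background, threshold=30):
    """Step A.1 sketch: difference the captured picture against a shot
    of the empty background and keep only the pixels that changed,
    writing white elsewhere.  2-D grey-level uint8 arrays are assumed;
    the threshold is an illustrative choice."""
    diff = np.abs(picture.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > threshold, picture, 255).astype(np.uint8)
```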
8. The VR technology based vision training learning system of claim 5, wherein:
if monocular vision enhancement training is carried out, the data setting unit sets the left or right display screen corresponding to the user's amblyopic eye to display the training content while the other screen displays a background, and all training operations are completed by the amblyopic eye; if the user selects training data from the data storage unit, the scaling of the electronic textbook or extracurricular book displayed on the amblyopic eye's screen is set according to the user's vision, and the other screen continues to display the background;
if the binocular vision enhancement training is performed, the setting of the data setting unit comprises the following steps:
step B.1: identifying the graphic portions of all document images in the data storage unit using a trained binary-classification neural network model that distinguishes text from graphics in pictures; taking the upper-left corner of the picture as the origin, recording the upper-left and lower-right corner coordinates of each graphic, storing the coordinate values at the output address of the data setting unit, and outputting them to the left and right display screens;
step B.2: copying all document images in the data storage unit, and in the copies covering the graphic portions found in step B.1 with white pixel blocks and saving the result;
step B.3: horizontally segmenting the images whose graphics have been covered with white pixel blocks, splitting them into text lines using the gaps in the horizontal projection produced by the blank space between lines of characters; taking the upper-left corner of the image as the origin, recording the upper-left and lower-right coordinates of each line, and storing the coordinate values of the odd lines and the even lines respectively at the output address of the data setting unit, to be output to the left and right display screens.
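The projection-profile segmentation of step B.3 can be sketched as below; the binary-image representation is an assumption (any binarisation of the white-covered copies would do):

```python
import numpy as np

def split_text_rows(binary_page):
    """Step B.3 sketch: binary_page is a 2-D array with 1 where there
    is ink and 0 for background.  Text lines are the runs of pixel rows
    whose horizontal projection (ink count per row) is non-zero; the
    blank gaps between lines separate them.  Returns (top, bottom)
    pixel ranges, one per text line."""
    projection = binary_page.sum(axis=1)
    rows, start = [], None
    for y, ink in enumerate(projection):
        if ink and start is None:
            start = y                         # a text line begins
        elif not ink and start is not None:
            rows.append((start, y))           # blank gap ends the line
            start = None
    if start is not None:
        rows.append((start, len(projection)))
    return rows
```

The odd and even entries of the returned list (`rows[0::2]`, `rows[1::2]`) would then be routed to the two displays.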
9. The VR technology-based vision training learning system of claim 8, wherein: when binocular vision enhancement training is performed, the training module obtains the configuration of the data setting unit, and the training comprises the following steps:
step C.1: when the user opens the learning material, reading the coordinate values of the step B.1 and the step B.3 from the data setting unit through the address value;
step C.2: displaying the copied document image content corresponding to the odd-line coordinate values of step B.3 on the left or right display screen of the VR glasses at a preset scale, and displaying the content corresponding to the even-line coordinate values on the other display screen at the same preset scale;
step C.3: placing the original document image content corresponding to the graphic coordinate values identified in step B.1 beside the copied odd-line or even-line document content on the left and right display screens, offset by a distance greater than 0.
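Steps C.2-C.3 can be sketched as a routing function; the offset value, its opposite directions on the two displays, and the dictionary layout are illustrative assumptions:

```python
def assign_screens(line_rows, figure_boxes, scale=1.0, offset=12):
    """Steps C.2-C.3 sketch: odd text lines go to one display and even
    lines to the other at the same preset scale; each identified figure
    is shown beside the text on both displays, shifted by a small
    positive offset (the pixel value and the opposite shift directions
    are assumed choices, tuned to the scale as the description requires)."""
    shift = lambda boxes, dx: [((x0 + dx, y0), (x1 + dx, y1))
                               for (x0, y0), (x1, y1) in boxes]
    left = {"lines": line_rows[0::2], "scale": scale,
            "figures": shift(figure_boxes, +offset)}
    right = {"lines": line_rows[1::2], "scale": scale,
             "figures": shift(figure_boxes, -offset)}
    return left, right
```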
10. The VR technology based vision training learning system of claim 5, wherein: the analysis method of the data analysis unit comprises the steps of analyzing data fed back by the training module, analyzing data fed back by the eyeball tracking module and generating a region-of-interest map;
the data analysis of the training module feedback comprises the following steps:
step D.1.1: recording the time of each training click; if, compared with the previous session, click accuracy has risen and reaction time has shortened, increasing the rotation speed of the training module; repeating step D.1.1 until the training module reaches the maximum rotation speed, then carrying out the next step;
step D.1.2: if the user can still operate normally at the highest rotation speed, increasing the spatial frequency of the CAM grating while keeping the highest speed;
step D.1.3: if the user cannot operate normally after the spatial frequency is increased, keeping the current CAM grating spatial frequency and reducing the rotation speed until a speed suitable for the user is found, then returning to step D.1.1;
in the data analysis fed back by the eyeball tracking module, the eyeball tracking module sends the eye-movement trajectory to the data analysis unit in real time; the data analysis unit compares the received trajectory data with a preset trajectory, and if the deviation between the user's eye-movement trajectory and the preset trajectory exceeds a threshold, a light spot is flashed on the display screen of the VR glasses to remind the user;
the generating of the region of interest map comprises the following steps:
step D.2.1: the training module sends the eyeball motion track recorded by the eyeball tracking module to the data analysis unit;
step D.2.2: the data analysis unit calculates an area with the highest repetition rate of the eyeball motion track data in the display screen according to the eyeball motion track data;
step D.2.3: and taking the current area as the area of interest of the user.
CN201911411287.6A 2019-12-31 2019-12-31 Vision training learning system based on VR technique Pending CN111202663A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911411287.6A CN111202663A (en) 2019-12-31 2019-12-31 Vision training learning system based on VR technique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911411287.6A CN111202663A (en) 2019-12-31 2019-12-31 Vision training learning system based on VR technique

Publications (1)

Publication Number Publication Date
CN111202663A true CN111202663A (en) 2020-05-29

Family

ID=70783332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911411287.6A Pending CN111202663A (en) 2019-12-31 2019-12-31 Vision training learning system based on VR technique

Country Status (1)

Country Link
CN (1) CN111202663A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966983A (en) * 2021-04-12 2021-06-15 广东视明科技发展有限公司 Visual function processing timeliness capability evaluation system and method based on VR space

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2595341Y (en) * 2002-12-03 2003-12-31 程康 Sight function traning atlas for two eyes
CN201987843U (en) * 2010-11-22 2011-09-28 杭州华泰医疗科技有限公司 Virtual eye guard
CN204446276U (en) * 2015-01-08 2015-07-08 陈美琴 Dual-channel type amblyopia therapeutic equipment
CN105748268A (en) * 2016-02-18 2016-07-13 杭州睩客科技有限公司 Three-dimensional image system for treating eye diseases
CN205649486U (en) * 2016-01-28 2016-10-19 孙汉军 Eyes fuse detection training system of function
CN106406509A (en) * 2016-05-16 2017-02-15 上海青研科技有限公司 Head-mounted eye control virtual reality device
CN107105333A (en) * 2017-04-26 2017-08-29 电子科技大学 A kind of VR net casts exchange method and device based on Eye Tracking Technique
CN108205203A (en) * 2018-02-02 2018-06-26 刘程 A kind of e-book VR glasses
CN108478399A (en) * 2018-02-01 2018-09-04 上海青研科技有限公司 A kind of amblyopia training instrument
CN108733202A (en) * 2017-04-18 2018-11-02 北京传送科技有限公司 A kind of data compression method and its device based on eyeball tracking
CN108830943A (en) * 2018-06-29 2018-11-16 歌尔科技有限公司 A kind of image processing method and virtual reality device
CN109662873A (en) * 2018-12-12 2019-04-23 广州视景医疗软件有限公司 The method and its system of eye movement training based on VR
CN209004408U (en) * 2017-11-22 2019-06-21 湖北医达医疗器械有限公司 A kind of following strabismus surgery instrument
CN110292515A (en) * 2019-07-31 2019-10-01 北京浩瞳科技有限公司 A kind of method and system of visual function training

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966983A (en) * 2021-04-12 2021-06-15 广东视明科技发展有限公司 Visual function processing timeliness capability evaluation system and method based on VR space
CN112966983B (en) * 2021-04-12 2021-09-21 广东视明科技发展有限公司 Visual function processing timeliness capability evaluation system and method based on VR space

Similar Documents

Publication Publication Date Title
CN104603673B (en) Head-mounted system and the method for being calculated using head-mounted system and rendering digital image stream
CN205903239U (en) Visual acuity test and trainer based on virtual reality
CN108427503B (en) Human eye tracking method and human eye tracking device
JP4421903B2 (en) Eye training device
CN107209851A (en) The real-time vision feedback positioned relative to the user of video camera and display
CN105992965A (en) Stereoscopic display responsive to focal-point shift
CN104306102A (en) Head wearing type vision auxiliary system for patients with vision disorder
WO2003079272A1 (en) Materials and methods for simulating focal shifts in viewers using large depth of focus displays
US10376439B2 (en) Audio-feedback computerized system and method for operator-controlled eye exercise
US20070146631A1 (en) System and method for analysis and visualization of metamorphopsia through three dimensional scene regeneration and testing of vision thereby
Fornos et al. Simulation of artificial vision, III: do the spatial or temporal characteristics of stimulus pixelization really matter?
CN104865701B (en) Head Mounted Display Apparatus
CN106249407A (en) Prevention and the system of myopia correction
CN110292515A (en) A kind of method and system of visual function training
CN201929941U (en) Hemispheric stimulating vision function diagnosis and treatment instrument
CN108478399B (en) Amblyopia training instrument
CN107307981B (en) Control method of head-mounted display device
CN111202663A (en) Vision training learning system based on VR technique
CN105943327A (en) Vision-exercising health caring system with anti-dizziness device
Kollenberg et al. Visual search in the (un) real world: how head-mounted displays affect eye movements, head movements and target detection
CN107260506B (en) 3D vision training system, intelligent terminal and head-mounted device based on eye movement
CN107291233B (en) Wear visual optimization system, intelligent terminal and head-mounted device of 3D display device
CN107065198B (en) Wear the vision optimization method of display equipment
Grove The psychophysics of binocular vision
Boyle et al. Challenges in digital imaging for artificial human vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination