CN110897841A - Visual training method, visual training device, and storage medium - Google Patents

Visual training method, visual training device, and storage medium

Info

Publication number
CN110897841A
CN110897841A
Authority
CN
China
Prior art keywords
training
image
trained
person
training image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910859432.0A
Other languages
Chinese (zh)
Inventor
郭君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Muxin Education Co Ltd
Original Assignee
Muxin Education Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Muxin Education Co Ltd filed Critical Muxin Education Co Ltd
Priority to CN201910859432.0A priority Critical patent/CN110897841A/en
Publication of CN110897841A publication Critical patent/CN110897841A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00 - Exercisers for the eyes
    • A61H5/005 - Exercisers for training the stereoscopic view
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2205/00 - Devices for specific parts of the body
    • A61H2205/02 - Head
    • A61H2205/022 - Face
    • A61H2205/024 - Eyes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Rehabilitation Therapy (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Ophthalmology & Optometry (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Rehabilitation Tools (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a visual training method, a visual training device, and a storage medium. The visual training method trains with a virtual reality display device that has a first display screen and a second display screen for displaying content to the two eyes of a person to be trained, respectively. A first training image is displayed on the first display screen and a second training image is displayed on the second display screen; both training images have background images, and the first training image and/or the second training image also contain lecture manuscript content. According to the visual training method, visual training device, and storage medium provided by the embodiments of the invention, displaying the first training image and the second training image to the person to be trained makes the speech training more effective.

Description

Visual training method, visual training device, and storage medium
Technical Field
The invention belongs to the technical field of visual training, and particularly relates to a visual training method, a visual training device and a storage medium.
Background
Speech plays an important role in people's study and work, so many people often undertake speech training. In the existing training mode, the trainee practices alone in a room; however, the scene during an actual speech differs greatly from the scene during training, so the trainee easily becomes nervous during the actual speech and cannot perform well.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a visual training method, a visual training device and a storage medium capable of improving the speech training effect.
The invention provides a visual training method, which is trained by using a virtual reality display device, wherein the virtual reality display device is provided with a first display screen and a second display screen which are used for displaying contents to two eyes of a person to be trained respectively, a first training image is displayed on the first display screen, a second training image is displayed on the second display screen, the first training image and the second training image are both provided with background images, and the first training image and/or the second training image are also provided with lecture manuscript contents.
As a further improvement of the above embodiment, the first training image and the second training image each have scene content from a speaker's perspective, and the visual training method further includes a step of making the scene content change in association with the speech performance of the person to be trained.
As a further improvement of the above embodiment, the scene content includes a virtual audience, and the visual training method further includes a step of making the virtual audience clap or make sounds in association with the speech performance of the person to be trained.
As a further improvement of the above embodiment, the first training image is a weakened image compared to the second training image, or the first training image and the second training image are alternately weakened images.
As a further improvement of the above embodiment, the first training image is a blurred image, the second training image is a clear image, and the second training image has the lecture contents.
In another aspect, an embodiment of the present invention provides a visual training apparatus, which includes a first display screen and a second display screen for displaying content to both eyes of a person to be trained, respectively, where the first display screen displays a first training image, the second display screen displays a second training image, both the first training image and the second training image have background images, and the first training image and/or the second training image also have lecture manuscript content.
As a further improvement of the above embodiment, the first training image and the second training image each have scene content from a speaker's perspective, and the visual training apparatus further makes the scene content change in association with the speech performance of the person to be trained.
As a further improvement of the above embodiment, the scene content includes a virtual audience, and the visual training apparatus further makes the virtual audience clap or make sounds in association with the speech performance of the person to be trained.
As a further improvement of the above embodiment, the first training image is a weakened image compared to the second training image, or the first training image and the second training image are alternately weakened images.
Yet another aspect of the embodiments of the present invention provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the visual training method according to any one of the above embodiments.
According to the visual training method, the visual training device, and the storage medium provided by the embodiments of the invention, displaying the first training image and the second training image to the person to be trained makes the speech training more effective.
Drawings
The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings. Like reference numerals refer to like parts throughout the drawings, and the drawings are not necessarily drawn to scale; emphasis is instead placed upon illustrating the principles of the invention.
Fig. 1 is a schematic structural diagram of a vision training apparatus according to embodiment 1 of the present invention;
fig. 2 and 3 are schematic diagrams of the first training image and the second training image in two states.
Detailed Description
The technical solutions of the present invention are further described in detail with reference to specific examples so that those skilled in the art can better understand the present invention and can implement the present invention, but the examples are not intended to limit the present invention.
Referring to figs. 1-3, an embodiment of the present invention provides a visual training method that trains with a virtual reality display device and can provide speech training for a person to be trained. The virtual reality display device has a first display screen and a second display screen for displaying content to the two eyes of the person to be trained, respectively; a first training image is displayed on the first display screen and a second training image is displayed on the second display screen; both training images have background images, and the first training image and/or the second training image also contain lecture manuscript content. The virtual reality display device may be a virtual reality head-mounted display device, that is, VR glasses (also called VR goggles or a VR helmet), and a training image may be a static image or a dynamic video. The background image may be, for example, an image or video related to the speech content, or an image or video from a speaker's perspective, such as a lecture hall or an auditorium. The lecture manuscript content provides speech prompts to the person to be trained; it may contain all the speech lines, or only a synopsis or keywords. Displaying the first training image and the second training image through the virtual reality display device makes the speech training more effective.
In a preferred embodiment, the first training image and the second training image each have scene content from a speaker's perspective, and the visual training method further includes a step of making the scene content change in association with the speech performance of the person to be trained. The scene content is one embodiment of the background image. The change associated with the speech performance may be a color change or motion change of a virtual prop in the speech-venue image (for example, a virtual indicator light turning on), virtual fireworks set off in the scene content, or a virtual audience clapping or emitting boos and hisses, etc. In some embodiments, while the person to be trained performs visual training and gives a speech, a nearby assistant (e.g. a doctor or a friend) may send a corresponding control instruction to the virtual reality display device according to the speech performance (e.g. speech accuracy, speaking manner, etc.) of the person to be trained; after receiving the control instruction, the virtual reality display device makes the scene content change in a way that corresponds to the speech performance. In other embodiments, the virtual reality display device may automatically detect and evaluate the speech performance of the person to be trained, and then control the scene content to change accordingly.
For example, a microphone and a voice recognition system may be provided in the virtual reality display device to perform voice recognition on the speech of the person to be trained, compare it with the lecture manuscript content, and calculate the speech accuracy, or calculate the speaking speed; the speech performance is evaluated accordingly, and a corresponding control instruction for changing the scene content is output according to the evaluation result. An imaging device may also be arranged near the first display screen and the second display screen in the virtual reality display device to detect the eyeball positions of the person to be trained: if strabismus is detected during the speech, the virtual audience in the scene content may hiss; if no strabismus is detected, the virtual audience may clap.
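A minimal sketch of this evaluation step, assuming a naive word-by-word comparison (the function names, thresholds, and instruction strings are illustrative, not from the patent; a real system would align the recognized and script word sequences):

```python
def speech_accuracy(recognized_words, script_words):
    """Fraction of the script matched by the recognized speech.
    Naive positional comparison; missing or extra words count as errors."""
    wrong = sum(1 for r, s in zip(recognized_words, script_words) if r != s)
    wrong += abs(len(recognized_words) - len(script_words))
    return max(0.0, 1.0 - wrong / max(len(script_words), 1))

def scene_instruction(accuracy, applaud_threshold=0.8):
    """Map the evaluated performance to a scene-change control instruction."""
    return "applaud" if accuracy >= applaud_threshold else "hiss"
```

The controller would then route the returned instruction to the renderer driving the virtual audience.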
In a further preferred embodiment, the scene content includes a virtual audience, and the visual training method further includes the step of making the virtual audience clap or make sounds in association with the speech performance of the person to be trained. The sounds may be emitted through speakers provided on the virtual reality display device. Other specific implementations have been described in the above embodiments and are not repeated here.
In a preferred embodiment, the first training image is a weakened image compared to the second training image. This can be used to correct eye problems such as amblyopia and strabismus. Amblyopia refers to the condition in which there is no obvious organic lesion in the eyeball, yet the corrected vision of one or both eyes still does not reach 0.8. The causes of amblyopia are mainly visual suppression and form deprivation. During binocular vision, a competitive mechanism exists in the visual centers; form deprivation makes the binocular visual input unequal, so the eye with more visual information input competes with the eye with less input, the former becoming the dominant eye and the latter becoming the inferior eye, which is suppressed, retarding visual development. Strabismus (squint) refers to the condition in which the two eyes cannot fixate on a target at the same time; it is an extraocular muscle disorder and can be divided into two categories, concomitant strabismus and paralytic strabismus. Concomitant strabismus is mainly characterized clinically by the absence of eye-movement disorders and by equal primary and secondary deviation angles; paralytic strabismus involves restricted eyeball movement and diplopia, and can be congenital, traumatic, or caused by systemic disease. Amblyopia and strabismus can be corrected by visual training. In the prior art, vision training and correction mainly train the two eyes by the occlusion method, that is, one eye is trained while the other eye is covered.
Because only one eye can see the training image during such training, which differs from real life where both eyes see the outside world, the eye muscles cannot form good conditioned reflexes. The correction effect therefore seems good during training, but the eye muscles easily return to their pre-correction state in real life.
Training a person with amblyopia or strabismus using a first training image that is weakened compared to the second training image can effectively improve the correction training. Specifically, in use, the second training image, displayed on the second display screen corresponding to the eye to be corrected (for example, the left eye), is a normal correction training image, while the first training image, displayed on the first display screen corresponding to the other eye, is a weakened image (compared to the second training image). Generally, the contents of the first training image and the second training image are substantially identical. That the first training image is weakened compared to the second training image means that it stimulates the eye less strongly than the second training image does. For example, the first training image may be blurred compared to the second training image, or have lower brightness or contrast, or the second training image may carry more visual content than the first, such as added lecture manuscript content or a moving image. During training, because the trained eye sees a normal training image while the other eye sees a weakened image, the trained eye receives enhanced training, achieving a better correction effect.
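One of the weakening options named above, brightness reduction, can be sketched on a toy grayscale pixel grid (pure Python for illustration; an actual device would do this in its render pipeline, and the factor shown is an assumed value):

```python
def weaken_image(pixels, brightness=0.5):
    """Return a weakened copy of a grayscale image (rows of 0-255 values)
    by scaling down brightness; a lower factor means stronger weakening."""
    return [[int(p * brightness) for p in row] for row in pixels]
```

The same structure extends to contrast reduction or blurring; which weakening is applied, and how strongly, would be chosen per trainee.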
Moreover, because both eyes can see a training image during training, muscle memory for the eyes is formed in a scene much closer to real life. When the person to be trained encounters a similar scene in real life, the eye muscles respond by conditioned reflex and the eyes present the state reached during correction training, so the correction effect lasts longer and is better.
In other embodiments, the first training image and the second training image are alternately the weakened image. Since both eyes of some persons to be trained need correction, alternating the weakening between the first and second training images corrects both eyes: in one period the first training image is weakened compared to the second, and in the next period the second is weakened compared to the first. The virtual reality display device may be provided with a control button; each press switches which image is weakened, i.e. if the first training image was the weakened image before the press, the second training image becomes the weakened image afterwards. Alternatively, the switch may happen automatically when the duration for which one image has been weakened reaches a preset value. For example, when the first training image has been weakened relative to the second for 30 minutes (equivalent to 30 minutes of enhanced training for the eye corresponding to the second training image), the control system automatically makes the second training image the weakened image and restores the first training image to a normal training image.
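The alternation logic just described (manual button plus an automatic switch after a preset duration) might be sketched as follows; the class and attribute names are illustrative:

```python
class AlternatingWeakening:
    """Track which training image is currently the weakened one."""

    def __init__(self, period_minutes=30):
        self.period = period_minutes
        self.weakened = "first"   # assume the first training image starts weakened
        self.elapsed = 0

    def press_button(self):
        """Manual switch via the control button."""
        self._switch()

    def tick(self, minutes):
        """Advance the timer; switch automatically when the period is reached."""
        self.elapsed += minutes
        if self.elapsed >= self.period:
            self.elapsed = 0
            self._switch()

    def _switch(self):
        self.weakened = "second" if self.weakened == "first" else "first"
```

After 30 minutes of `tick` calls the weakened image flips to the second screen, and a button press flips it back, matching the two triggers described above.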
In a preferred embodiment, the first training image is a blurred image, the second training image is a clear image, and the second training image carries the lecture manuscript content. Referring to fig. 3, the training image for the left eye is blurred, the training image for the right eye is clear, and the right-eye image carries the lecture manuscript content. In a preferred embodiment, the visual training method further comprises a step of adjusting the blur degree of the first training image. In a preferred embodiment, the blur degree of the blurred image is associated with the visual acuity of the eye of the person to be trained corresponding to the first training image: the poorer that eye's vision, the higher the blur degree. For example, if the vision of the person to be trained is only 0.5, the blur degree of the blurred image may be 50%; if the vision is 0.8, the blur degree may be 20%. This is closer to what the person to be trained sees with their normal vision, and the training effect is better.
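The two examples given (vision 0.5 → 50% blur, vision 0.8 → 20% blur) are consistent with a simple linear rule; a hedged sketch of such a mapping (the linearity is inferred, not stated in the patent, and a real mapping would be chosen clinically):

```python
def blur_degree(visual_acuity):
    """Map decimal visual acuity (0.0-1.0) to a blur percentage.
    Linear rule inferred from the text's two examples."""
    acuity = min(max(visual_acuity, 0.0), 1.0)  # clamp to the valid range
    return round((1.0 - acuity) * 100)
```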
In a preferred embodiment, the vision training method further comprises the steps of:
receiving voice feedback of a person to be trained;
comparing the voice feedback of the person to be trained with the speech manuscript content to obtain the training accuracy of the person to be trained;
and if the training accuracy of the person to be trained does not meet the preset requirement, punishing the person to be trained or sending warning information to the person to be trained.
Specifically, the voice of the person to be trained can be received by a microphone and recognized, and the recognized result compared with the lecture manuscript content. If the misreading rate (i.e. the ratio of misread words to total words) exceeds a predetermined value, or the number of misread words exceeds a predetermined value, the training accuracy of the person to be trained is judged not to meet the preset requirement. At this point, a punishment such as stopping the training, or a warning sound such as "please concentrate on training", can be issued, prompting the person to train with more concentration.
The vision training method may further include the steps of: acquiring an image of the real environment, and combining a virtual image with the image of the real environment to obtain the first training image and the second training image. Specifically, the image of the real environment in front of the person to be trained can be acquired through a camera on the virtual reality display device, or through the camera of an external device connected to it, and combined with a virtual image to obtain the first training image and the second training image. The training images are thus augmented reality (AR) images, closer to what the person to be trained sees with normal vision, and the training effect is better. For example, a real image of the training environment in which the person is located can be captured, and a virtual audience and lecture manuscript content added to it.
The vision training method may further include the steps of: displaying a color-blindness and/or color-weakness correction image on the first display screen and/or the second display screen. By displaying images corrected for color blindness and/or color weakness, patients with color blindness or color weakness can also be trained. Specifically, a color-blindness (e.g. red-green color blindness) or color-weakness model may be simulated, the original image processed accordingly, a mapping table established from the normal color space to the color gamut perceivable under color blindness or color weakness, and the original image corrected according to the mapping table to obtain the corrected image.
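The mapping-table step can be sketched as a per-color lookup applied to each pixel; the table itself would be built offline from the simulated color-vision model. The toy colors and mapping below are placeholders for illustration, not the patent's actual processing:

```python
def apply_color_mapping(pixels, mapping):
    """Correct an image (rows of color tuples) using a precomputed mapping
    table from the normal color space to corrected colors; colors absent
    from the table are left unchanged."""
    return [[mapping.get(color, color) for color in row] for row in pixels]

# Toy mapping for illustration only: shift a "problem" red toward a
# substitute more distinguishable for a red-green deficient viewer.
RED, GREEN, ORANGE = (255, 0, 0), (0, 128, 0), (255, 128, 0)
toy_mapping = {RED: ORANGE}
```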
The vision training method may further include the steps of: applying an offset angle to the second training image relative to the first training image. For a strabismus patient, the images formed by the two eyes differ from those of a normal person: the image from the strabismic eye differs greatly from that of the normal eye, so the brain selects the reasonable image produced by the normal eye and suppresses the image produced by the strabismic eye. Therefore, when the person to be trained is a strabismus patient, the training image corresponding to the strabismic (trained) eye, i.e. the second training image, can be given an offset angle in the same direction as the strabismus, so that the image content fed back to the brain by the strabismic eye differs less from that fed back by the other eye (the eye corresponding to the first training image), and the strabismic eye is not suppressed. The offset angle of the second training image may be smaller than the strabismus angle of the trained eye, thereby correcting the viewing direction of the strabismic eye toward the normal viewing direction. The offset angle may be determined based on an examination of the trainee's eyes by a doctor or optometrist.
In a further preferred embodiment, the vision training method further comprises the step of adjusting the offset angle during training. Specifically, the offset angle may be adjusted by an assistant, adjusted based on feedback (including voice feedback) from the person to be trained, or adjusted automatically, for example by gradually decreasing the offset angle, so as to gradually correct the viewing direction of the strabismic eye toward the normal viewing direction.
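The gradual-decrease adjustment could follow a simple per-session schedule; a sketch in which the initial angle, step size, and session count are all illustrative values a clinician would set:

```python
def offset_schedule(initial_angle_deg, step_deg, sessions):
    """Offset angle applied in each session, decreasing toward zero so the
    strabismic eye's viewing direction is corrected gradually."""
    angles, angle = [], initial_angle_deg
    for _ in range(sessions):
        angles.append(angle)
        angle = max(angle - step_deg, 0.0)  # never overshoot past zero
    return angles
```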
The vision training method may further include the steps of:
displaying image contents with different virtual distances on the first display screen and/or the second display screen;
receiving the voice feedback of the trainee to the image content;
comparing the voice feedback of the person to be trained with the image content to obtain a test result;
and adjusting the image content according to the test result.
Patients with strabismus or amblyopia have very poor stereoscopic perception. Therefore, the first display screen and/or the second display screen display image content at different virtual distances (different distances front-to-back, or left-to-right), such as characters or figures, to train and test the person with strabismus or amblyopia. Content at different virtual distances may be shown simultaneously in the same frame, or in sequence across different frames. The person to be trained reads out the characters or figures; their voice is then received through the microphone and recognized, and the recognized result is compared with the displayed content to judge whether the training question was answered correctly. If the answer is correct, or the accuracy over several questions exceeds a predetermined value, more difficult stereoscopic training can be provided; if it is incorrect, or the accuracy over several questions is below a predetermined value, the difficulty level of the training questions stays at the current level or is lowered by one level. In other embodiments, if the answer is correct or the accuracy over several consecutive questions exceeds a predetermined value, images such as congratulations or cheers may be displayed on the first display screen and/or the second display screen, or a prompt may be displayed asking whether to enter a higher level, which is entered if confirmed; correspondingly, if the answer is incorrect or the accuracy over several consecutive questions is below the predetermined value, encouraging images (e.g. "keep going") may be displayed, which is not described in detail here.
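The promote/demote logic for the stereoscopic-test difficulty might look like the following; the level range and accuracy thresholds are illustrative assumptions:

```python
def adjust_difficulty(level, correct_rate, promote_at=0.8, demote_below=0.5,
                      min_level=1, max_level=5):
    """Raise the difficulty when recent answer accuracy is high enough,
    lower it when accuracy is too low, otherwise keep the current level."""
    if correct_rate >= promote_at:
        return min(level + 1, max_level)
    if correct_rate < demote_below:
        return max(level - 1, min_level)
    return level
```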
Referring to fig. 1, an embodiment of the present invention further provides a visual training apparatus, which includes a first display screen 1 and a second display screen 2 for displaying content to the two eyes of a person to be trained, respectively, and further includes a controller 3 and an input device 4, the controller 3 being connected to the first display screen 1, the second display screen 2, and the input device 4. The input device 4 may be a key or a touch screen, and can input a control instruction in response to a user's operation. The controller 3 can control the display content of the first display screen 1 and the second display screen 2. The visual training apparatus may be a virtual reality head-mounted display device, that is, VR glasses. The visual training apparatus displays a first training image on the first display screen 1 while displaying a second training image on the second display screen 2. Both training images have background images, and the first training image and/or the second training image also carry lecture manuscript content. The background image may be an image or video related to the speech content, or an image or video from a speaker's perspective, such as a lecture hall or an auditorium. The lecture manuscript content provides speech prompts to the person to be trained; it may contain all the speech lines, or only a synopsis or keywords. Displaying the first training image and the second training image through the visual training apparatus makes the speech training more effective.
In a preferred embodiment, the first training image and the second training image each have scene content from a speaker's perspective, and the visual training apparatus further makes the scene content change in association with the speech performance of the person to be trained. The specific implementation is the same as in the embodiments of the visual training method above and is not repeated here. In a further preferred embodiment, the visual training apparatus may be provided with a microphone 5 and a voice recognition system, which perform voice recognition on the speech of the person to be trained and compare it with the lecture manuscript content to calculate speech accuracy, or calculate the speaking speed, thereby evaluating the speech performance of the person to be trained and outputting a corresponding control instruction for changing the scene content according to the evaluation result.
In a preferred embodiment, the scene content comprises a virtual audience, and the visual training apparatus further causes the virtual audience to applaud or make a sound in association with the speech performance of the person to be trained. The visual training apparatus may be provided with a speaker 6, through which the sound is emitted. Other specific implementation manners are the same as those of the embodiment of the visual training method, and are not described herein again.
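The mapping from the evaluation result to an audience reaction is not detailed in the patent; a minimal sketch, assuming accuracy and speed scores as inputs and placeholder thresholds and asset names, might look like:

```python
def audience_reaction(accuracy, speed_wpm):
    """Map the speech evaluation to a virtual-audience behaviour.
    Thresholds and asset names are placeholders, not patent values."""
    if accuracy >= 0.9 and 100 <= speed_wpm <= 160:
        return {"animation": "applaud", "sound": "applause.wav"}
    if accuracy < 0.6:
        return {"animation": "murmur", "sound": "murmur.wav"}
    return {"animation": "nod", "sound": None}
```

The returned sound file would then be played through the speaker 6 while the virtual audience plays the matching animation.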
In a preferred embodiment, the first training image is a weakened image compared to the second training image, or the first training image and the second training image are alternately weakened images. The specific implementation manner is the same as the embodiment of the above-mentioned visual training method, and details are not repeated here.
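The patent leaves the weakening operation and the alternation schedule unspecified; one plausible sketch, assuming weakening means a brightness reduction and the weakened side swaps on a fixed frame period, is:

```python
import numpy as np

def weaken(image, factor=0.5):
    """Weaken a training image by scaling its brightness; the patent
    does not define the operation, so an intensity scale is assumed
    (contrast or saturation reduction would be alternatives)."""
    return (image.astype(np.float32) * factor).astype(np.uint8)

def frame_pair(first, second, frame_index, period=60):
    """Alternately-weakened variant: swap which eye receives the
    weakened image every `period` frames (the period is an assumption)."""
    if (frame_index // period) % 2 == 0:
        return weaken(first), second
    return first, weaken(second)
```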
In a preferred embodiment, the first training image is a blurred image, the second training image is a clear image, and the second training image has the lecture contents. Referring to fig. 3, the training image for the left eye is a blurred image, the training image for the right eye is a clear image, and the training image for the right eye has the lecture contents. In a further preferred embodiment, the visual training apparatus may further adjust the degree of blur of the first training image. A button for adjusting the degree of blur may be provided on the visual training apparatus, so that the person to be trained or an assistant can adjust the degree of blur of the first training image by operating the button according to the condition of the trainee's eyes. In a preferred embodiment, the degree of blur of the blurred image is associated with the vision level of the eye of the person to be trained that corresponds to the first training image; that is, the poorer the vision of that eye, the higher the degree of blur. For example, if the vision of the person to be trained is only 0.5, the degree of blur of the blurred image may be 50%, and if the vision is 0.8, the degree of blur may be 20%. The image is thus closer to what the person to be trained normally sees, and the training effect is better.
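The two worked examples (vision 0.5 → 50% blur, vision 0.8 → 20% blur) are consistent with a simple linear rule, blur = (1 − vision) × 100%; a sketch of that assumed mapping:

```python
def blur_percentage(vision):
    """Blur degree of the first training image, derived from the
    decimal visual acuity of the matching eye: the worse the vision,
    the stronger the blur.  The linear rule fits the patent's two
    examples but is otherwise an assumption."""
    vision = max(0.0, min(1.0, vision))   # clamp to the decimal-acuity range
    return (1.0 - vision) * 100.0
```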
In a preferred embodiment, the visual training apparatus receives voice feedback from the person to be trained, compares the voice feedback with the lecture contents to obtain the training accuracy of the person to be trained, and, if the training accuracy does not meet a preset requirement, penalizes the person to be trained or sends warning information to the person to be trained. Specifically, the visual training apparatus may be provided with a microphone 5 and a voice recognition system, which perform voice recognition on the voice uttered by the person to be trained and compare the recognized result with the lecture contents. If the error rate (i.e., the ratio of the number of wrong words to the total number of words) or the number of wrong words exceeds a predetermined value, it is determined that the training accuracy does not meet the preset requirement; at this time, training may be stopped as a penalty, or a warning sound such as "please concentrate on training" may be played, so that the person to be trained concentrates on the training.
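The error-rate check described here can be sketched as follows; the word-by-word comparison and the threshold defaults are assumptions for illustration:

```python
def check_training_accuracy(recognized_words, manuscript_words,
                            max_error_rate=0.2, max_wrong_words=10):
    """Return (ok, warning).  The error rate is the ratio of wrong
    words to the total number of manuscript words; the threshold
    defaults are illustrative, not values from the patent."""
    wrong = sum(1 for i, w in enumerate(manuscript_words)
                if i >= len(recognized_words) or recognized_words[i] != w)
    error_rate = wrong / len(manuscript_words)
    if error_rate > max_error_rate or wrong > max_wrong_words:
        return False, "please concentrate on training"
    return True, ""
```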
In a preferred embodiment, the visual training apparatus may further comprise an image capturing device, which may be a camera mounted on the front of the housing of the visual training apparatus and connected to the controller 3. The image capturing device can obtain an image of the real environment, and the controller 3 can combine the image of the real environment with a virtual image to obtain the first training image and the second training image, so that the first training image and the second training image are augmented reality (AR) images. The image content is thus closer to what the person to be trained sees with normal vision, and the training effect is better.
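Combining the camera image with a virtual layer is, in the simplest reading, an alpha-over composite; the patent does not specify the compositing pipeline, so the following is only a sketch:

```python
import numpy as np

def compose_ar_frame(camera_frame, virtual_layer, alpha_mask):
    """Alpha-over composite of a real-environment camera frame and a
    rendered virtual layer; `alpha_mask` is 1.0 where the virtual
    content should cover the camera image."""
    a = alpha_mask[..., np.newaxis].astype(np.float32)   # H x W -> H x W x 1
    out = (camera_frame.astype(np.float32) * (1.0 - a)
           + virtual_layer.astype(np.float32) * a)
    return out.astype(np.uint8)
```

The composite would be rendered once per eye to form the first and second training images.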
In a preferred embodiment, the visual training apparatus further displays an image for color blindness and/or color weakness correction on the first display screen and/or the second display screen, so that patients with color blindness or color weakness can also be trained. Specifically, a color blindness (e.g., red-green color blindness) or color weakness model may be simulated, a mapping table from the normal color space to the color space of color blindness or color weakness may be established, and the original image may be corrected according to the mapping table to obtain a color-blindness- or color-weakness-corrected image.
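The mapping-table approach could be sketched as follows; the deficiency model, the quantisation step and the correction rule (re-amplifying what the model attenuates) are all illustrative placeholders, not the patent's actual model:

```python
import numpy as np

def build_mapping_table(simulate, levels=16):
    """Build a mapping table from quantised normal-colour-space values
    to corrected colours.  `simulate(rgb)` models how a colour-blind or
    colour-weak eye perceives a colour; the correction simply adds back
    the components the model attenuates."""
    table = {}
    step = 256 // levels
    for r in range(0, 256, step):
        for g in range(0, 256, step):
            for b in range(0, 256, step):
                src = np.array([r, g, b], dtype=np.float64)
                perceived = simulate(src)
                corrected = np.clip(src + (src - perceived), 0, 255)
                table[(r, g, b)] = tuple(corrected.astype(int))
    return table

# A toy colour-weakness model that attenuates the red channel:
weak_red = lambda rgb: rgb * np.array([0.5, 1.0, 1.0])
table = build_mapping_table(weak_red)
# pure red entries are boosted, e.g. (240, 0, 0) -> (255, 0, 0)
```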
In a preferred embodiment, the vision training device applies an offset angle to the second training image relative to the first training image. In a further preferred embodiment, the vision training device also adjusts the angle of the offset angle during the training process. How to apply the offset angle to the second training image and how to adjust the angle of the offset angle are the same as those in the corresponding embodiments of the aforementioned visual training method, and are not described herein again.
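One way to realise an offset angle on a flat display is to convert it into a horizontal pixel shift; the conversion below is a small-angle sketch with assumed parameter names, not the patent's method:

```python
def offset_pixels(offset_deg, screen_width_px, horizontal_fov_deg):
    """Convert the offset angle applied to the second training image
    into a horizontal pixel shift on its display screen (small-angle
    approximation; the parameter names are assumptions)."""
    px_per_deg = screen_width_px / horizontal_fov_deg
    return round(offset_deg * px_per_deg)

# a 2-degree offset on a 1080-px-wide screen with a 90-degree FOV
# corresponds to a shift of 24 pixels
```

Adjusting the offset angle during training then amounts to recomputing the shift each time the angle changes.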
In a preferred embodiment, the visual training apparatus may be provided with a microphone 5 and a voice recognition system, by means of which voice recognition is performed on the speech uttered by the person to be trained. The visual training apparatus can display image contents at different virtual distances on the first display screen and/or the second display screen, receive voice feedback on the image contents from the person to be trained through the microphone 5, perform voice recognition, and compare the voice feedback with the image contents to obtain a test result; the image contents are then adjusted according to the test result. How to adjust the image contents according to the test result is the same as in the corresponding embodiment of the aforementioned visual training method, and is not repeated here.
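A sketch of adjusting the virtual distance from the test result, under the assumption that a correct answer increases the distance and a wrong one decreases it (the patent defers the actual rule to the method embodiment):

```python
def adjust_virtual_distance(distance_m, answer, expected,
                            step_m=0.5, min_m=0.5, max_m=10.0):
    """Move the virtual distance of the displayed content after one
    test: a correct voice answer pushes the content farther away, a
    wrong one brings it closer.  Direction, step size and limits are
    assumptions for illustration."""
    if answer.strip().lower() == expected.strip().lower():
        distance_m += step_m      # passed: harder next time
    else:
        distance_m -= step_m      # failed: easier next time
    return min(max_m, max(min_m, distance_m))
```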
The present invention also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the visual training method of any of the above embodiments. The computer program may be an app. The storage medium of the present embodiment is, for example, a floppy disk, an optical disk, a DVD, a hard disk, a flash memory, a USB flash drive, a CF card, an SD card, an MMC card, an SM card, a Memory Stick, an xD card, or the like.
The visual training method, visual training apparatus and storage medium provided by the embodiments of the present invention can help the person to be trained to improve their speech level. In the preferred embodiments, the apparatus can help the person to be trained form eye-muscle memory closer to real life, so that when the person to be trained encounters a similar scene in real life, the eye muscles enter the state of the corrective training as a conditioned reflex; the corrective effect therefore lasts longer and is better.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by the present specification, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (12)

1. A visual training method is trained by using a virtual reality display device, wherein the virtual reality display device is provided with a first display screen and a second display screen which are used for displaying contents to two eyes of a person to be trained respectively, a first training image is displayed on the first display screen, and a second training image is displayed on the second display screen.
2. The visual training method as set forth in claim 1, wherein the first training image and the second training image each have scene content of a speaker's perspective, the visual training method further comprising the step of causing the scene content to make a change associated with a performance of a lecture by a trainee.
3. A visual training method as defined in claim 2, wherein said scene content comprises a virtual audience, said visual training method further comprising the step of making said virtual audience clap or make a sound in association with the performance of the speech of the person to be trained.
4. The vision training method of claim 1, wherein the first training image is a weakened image compared to the second training image, or the first training image and the second training image are alternately weakened images.
5. The vision training method of claim 1, wherein the first training image is a blurred image, the second training image is a clear image, and the second training image has lecture content.
6. The vision training method of claim 1, further comprising the steps of:
receiving voice feedback of a person to be trained;
comparing the voice feedback of the person to be trained with the speech manuscript content to obtain the training accuracy of the person to be trained;
and if the training accuracy of the person to be trained does not meet the preset requirement, punishing the person to be trained or sending warning information to the person to be trained.
7. A visual training device comprises a first display screen and a second display screen which are used for displaying contents to two eyes of a person to be trained respectively, wherein the first display screen displays a first training image, the second display screen displays a second training image, the visual training device is characterized in that the first training image and the second training image both have background images, and the first training image and/or the second training image also have speech manuscript contents.
8. The vision training apparatus of claim 7, wherein the first training image and the second training image each have scene content of a speaker's perspective, the vision training apparatus further making the scene content change in association with the performance of the lecture of the person to be trained.
9. The vision training apparatus of claim 7, wherein the scene content comprises a virtual audience, and the vision training apparatus further causes the virtual audience to clap or make a sound associated with the performance of the speech of the person being trained.
10. The vision training apparatus of claim 7, wherein the first training image is a weakened image compared to the second training image, or the first training image and the second training image are alternately weakened images.
11. The visual training apparatus of claim 7, wherein the visual training apparatus receives the voice feedback of the person to be trained, compares the voice feedback of the person to be trained with the content of the lecture document to obtain the training accuracy of the person to be trained, and penalizes the person to be trained or sends out warning information to the person to be trained if the training accuracy of the person to be trained does not meet a preset requirement.
12. A storage medium having stored thereon a computer program, characterized in that the computer program, when being executed by a processor, carries out the vision training method of any one of claims 1-6.
CN201910859432.0A 2019-09-11 2019-09-11 Visual training method, visual training device, and storage medium Pending CN110897841A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910859432.0A CN110897841A (en) 2019-09-11 2019-09-11 Visual training method, visual training device, and storage medium


Publications (1)

Publication Number Publication Date
CN110897841A true CN110897841A (en) 2020-03-24

Family

ID=69814449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910859432.0A Pending CN110897841A (en) 2019-09-11 2019-09-11 Visual training method, visual training device, and storage medium

Country Status (1)

Country Link
CN (1) CN110897841A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111939012A (en) * 2020-07-24 2020-11-17 光朗(海南)生物科技有限责任公司 Visual training method and device
CN115643395A (en) * 2022-12-23 2023-01-24 广州视景医疗软件有限公司 Visual training method and device based on virtual reality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105832503A (en) * 2016-03-17 2016-08-10 广东小天才科技有限公司 Picture output method and device for strabismus correction
CN107669455A (en) * 2017-11-17 2018-02-09 广州视景医疗软件有限公司 A kind of vision training method, device and equipment
CN207253468U (en) * 2017-03-23 2018-04-20 郑州诚优成电子科技有限公司 Learning type augmented reality eyesight convalescence device
CN207253467U (en) * 2017-03-23 2018-04-20 郑州诚优成电子科技有限公司 Learning type virtual reality eyesight convalescence device
CN108074431A (en) * 2018-01-24 2018-05-25 杭州师范大学 A kind of system and method using VR technologies speech real training
KR101971937B1 (en) * 2018-11-08 2019-04-24 주식회사 휴메닉 Mixed reality-based recognition training system and method for aged people



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200324