CN113101158A - VR-based binocular vision fusion training method and device - Google Patents

VR-based binocular vision fusion training method and device

Info

Publication number
CN113101158A
Authority
CN
China
Prior art keywords
training
target
controlled
image
fixed task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110378026.XA
Other languages
Chinese (zh)
Inventor
袁进
李劲嵘
封檑
李奇威
李子奇
任鸿伦
哈卿
李一鸣
俞益洲
乔昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenrui Bolian Technology Co Ltd, Shenzhen Deepwise Bolian Technology Co Ltd filed Critical Beijing Shenrui Bolian Technology Co Ltd
Priority to CN202110378026.XA priority Critical patent/CN113101158A/en
Publication of CN113101158A publication Critical patent/CN113101158A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
        • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
                • A61H5/00 - Exercisers for the eyes
                    • A61H5/005 - Exercisers for training the stereoscopic view
                • A61H2201/00 - Characteristics of apparatus not provided for in the preceding codes
                    • A61H2201/16 - Physical interface with patient
                        • A61H2201/1602 - kind of interface, e.g. head rest, knee support or lumbar support
                            • A61H2201/1604 - Head
                            • A61H2201/165 - Wearable interfaces
                    • A61H2201/50 - Control means thereof
                        • A61H2201/5007 - computer controlled
                        • A61H2201/5023 - Interfaces to the user
                            • A61H2201/5043 - Displays
                • A61H2205/00 - Devices for specific parts of the body
                    • A61H2205/02 - Head
                        • A61H2205/022 - Face
                            • A61H2205/024 - Eyes

Landscapes

  • Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention provides a VR-based binocular vision fusion training method and device. The method includes: independently displaying, to each of a user's two eyes, a first image and a second image set in the same virtual training scene, wherein the first image contains a controlled training target that is allowed to move in the virtual training scene, and the second image contains a fixed task target; receiving path feedback information generated when the user, after observing the position of the fixed task target, controls the controlled training target to move relative to the fixed task target; and adaptively reconfiguring the controlled training target and the fixed task target according to the path feedback information, and displaying the reconfigured targets. By presenting the training content separately to each eye in a virtual-reality scene, binocular vision fusion training becomes more engaging, effective, and convenient, compliance improves, and the user participates in the training actively. Adaptive configuration of the visual training content enables iterative binocular vision fusion training, improving the training effect.

Description

VR-based binocular vision fusion training method and device
Technical Field
The invention relates to the technical field of visual training, and in particular to a VR-based binocular vision fusion training method and device.
Background
Binocular vision fusion is the function by which the brain integrates the two slightly different images falling on corresponding points of the two retinas into a single percept, on the basis of normal simultaneous perception in both eyes. In some people this fusion function is weakened or largely lost because of strabismus, pathological ametropia, or similar conditions, and for these people it can be restored or improved only through appropriate visual training. Existing binocular vision fusion training has both eyes fixate on the same training target at the same time, so during training the user easily comes to rely excessively on a single eye; without a mechanism for displaying separate content to each eye, it is difficult to exercise cooperation between the two eyes, which seriously limits the training effect. In addition, most existing binocular vision fusion training consists of simple content in a monotonous scene, which readily causes visual fatigue and inattention, so the training effect is poor.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a VR-based binocular vision fusion training method and device that address the technical problems that, in existing binocular vision fusion training, users tend to rely excessively on a single eye to complete the training, the intended training effect is not achieved, and binocular vision fusion ability is difficult to improve.
To solve these technical problems, the invention provides the following technical solutions:
in a first aspect, the present invention provides a VR-based binocular video fusion training method, including:
independently displaying a first image and a second image in the same virtual training scene to the user's two eyes respectively, wherein the first image contains a controlled training target that is allowed to move in the virtual training scene, and the second image contains a fixed task target;
receiving path feedback information generated when the user, after observing the position of the fixed task target, controls the controlled training target to move relative to the fixed task target;
and adaptively reconfiguring the controlled training target and the fixed task target according to the path feedback information, and displaying the reconfigured targets.
In an embodiment of the present invention, the method further includes:
forming an iterative promotion process of the training by continuously controlling the controlled training target to move relative to the fixed task target position.
In an embodiment of the present invention, after the step of independently displaying the first image and the second image in the same virtual training scene for both eyes of the user respectively, the method further includes:
performing suppression adjustment on the whole picture of the first image or the second image in the virtual training scene, so that the virtual training scene is adapted to each of the user's eyes.
In an embodiment of the invention, the suppression adjustment comprises contrast adjustment and/or brightness adjustment and/or blur-degree adjustment.
In an embodiment of the invention, the virtual training scene includes an obstacle, and the controlled training target is controlled to avoid the obstacle while moving relative to the fixed task target; the controlled training target moves at a constant, determined reference speed, and the reference speed is adjustable under control; there are multiple fixed task targets displayed at different positions in the virtual training scene, and the size of each fixed task target is adjustable under control.
In a second aspect, the present invention provides a VR-based binocular vision fusion training apparatus, comprising:
a training task creation unit: configured to independently display a first image and a second image in the same virtual training scene to the user's two eyes respectively, wherein the first image contains a controlled training target that is allowed to move in the virtual training scene, and the second image contains a fixed task target;
a path feedback information receiving unit: configured to receive path feedback information generated when the user, after observing the position of the fixed task target, controls the controlled training target to move relative to the fixed task target;
an adaptive configuration unit: configured to adaptively reconfigure the controlled training target and the fixed task target according to the path feedback information, and display the reconfigured targets.
In an embodiment of the present invention, the method further includes:
an iterative training promotion unit: configured to form an iterative promotion process of the training by continuously controlling the controlled training target to move relative to the fixed task target position.
In an embodiment of the present invention, the method further includes:
a binocular suppression unit: configured to perform suppression adjustment on the whole picture of the first image or the second image in the virtual training scene, so that the virtual training scene is adapted to each of the user's eyes.
In an embodiment of the invention, the suppression adjustment in the binocular suppression unit comprises a contrast adjustment and/or a brightness adjustment and/or a blur degree adjustment.
In an embodiment of the present invention, the virtual training scene in the training task creation unit includes an obstacle, and the controlled training target is controlled to avoid the obstacle while moving relative to the fixed task target; the controlled training target moves at a constant, determined reference speed, and the reference speed is adjustable under control; there are multiple fixed task targets displayed at different positions in the virtual training scene, and the size of each fixed task target is adjustable under control.
In a third aspect, the present invention provides an electronic device, comprising:
a processor, a memory, and an interface for communicating with a gateway;
the memory is used to store programs and data, and the processor calls the program stored in the memory to execute the VR-based binocular vision fusion training method provided by any implementation of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium containing a program which, when executed by a processor, performs the VR-based binocular vision fusion training method provided by any implementation of the first aspect.
From the above description, it can be seen that embodiments of the present invention provide a VR-based binocular vision fusion training method and apparatus. Different images are displayed independently to the user's two eyes, and the user forms a fused image in the brain by observing with both eyes, which fully exercises cooperation between the eyes. Controlling the controlled target trains binocular motion control, focusing, and coordination, and controlling the moving target's path improves the sense of spatial direction. Through separated display in a virtual-reality scene, binocular vision fusion training becomes more engaging, effective, and convenient, and compliance improves so that the user participates in the training actively. Adaptive configuration of the visual training content enables iterative binocular vision fusion training, improving the training effect.
Drawings
FIG. 1 is a schematic flow chart of a VR-based binocular vision fusion training method according to the present invention;
FIG. 2 is a schematic diagram illustrating a principle of fusion image formation in a VR-based binocular vision fusion training method according to the present invention;
FIG. 3 is a schematic diagram illustrating a moving path of the controlled training target during training in the VR-based binocular vision fusion training method according to the present invention;
FIG. 4 is a diagram illustrating the effect of fused-image formation without suppression in the VR-based binocular vision fusion training method according to the present invention;
FIG. 5 is a diagram illustrating the effect of fused-image formation after suppression in the VR-based binocular vision fusion training method according to the present invention;
FIG. 6 is a schematic structural diagram of a VR-based binocular vision fusion training apparatus according to the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Based on the shortcomings of the prior art, an embodiment of the present invention provides a specific implementation of a VR-based binocular vision fusion training method. As shown in FIG. 1, the method specifically includes:
and S110, independently displaying a first image and a second image in the same virtual training scene for the eyes of the user respectively, wherein the first image comprises a controlled training target, the controlled training target is allowed to move in the virtual training scene, and the second image comprises a fixed task target.
Specifically, the user wears a VR device during the binocular vision fusion training, and the VR device allows brightness and other image parameters to be controlled during use. The VR device may be any type worn over the user's eyes, such as VR smart glasses or a VR headset.
The display of the VR device presents the first image and the second image independently to each of the user's eyes: it may display the first image to the left eye and the second image to the right eye, or display the second image to the left eye and the first image to the right eye. The first image and the second image share the same background (i.e. they are set in the same virtual training scene). The virtual training scene may be static, with only the controlled training target in the first image movable, in which case the training process is relatively monotonous. To enhance interaction with the user during training, the virtual training scene may also be dynamic and change as the controlled training target moves.
The first image and the second image contain different objects: the first image contains the controlled training target, which is movable in the virtual training scene, and the second image contains the fixed task target, whose position is relatively fixed once displayed in the virtual training scene and which the controlled training target is required to pass through or contact. The fixed task target may be a virtual reward item or another virtual special effect that enhances user interaction (for example, when the controlled training target touches the fixed task target, an animation of fixed duration may play at that position).
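To make the dichoptic setup of step S110 concrete, the following is a minimal Python sketch of the implied data structures: one shared scene identifier plus an eye-specific object list per image. All names (SceneObject, EyeImage, build_training_views) and the example coordinates are hypothetical illustrations, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    position: tuple        # (x, y, z) in scene coordinates
    movable: bool = False  # only the controlled training target is movable

@dataclass
class EyeImage:
    """One eye's view: the shared scene plus eye-specific objects."""
    scene_id: str
    objects: list = field(default_factory=list)

def build_training_views(scene_id="forest_scene"):
    """Compose the two per-eye views: first image holds the controlled
    target, second image holds the fixed task target."""
    controlled = SceneObject("controlled_target", (0.0, 0.0, 0.0), movable=True)
    fixed = SceneObject("fixed_task_target", (2.0, 1.0, 5.0), movable=False)
    first_image = EyeImage(scene_id, [controlled])  # shown to one eye
    second_image = EyeImage(scene_id, [fixed])      # shown to the other eye
    return first_image, second_image

left, right = build_training_views()
```

Because both EyeImage instances carry the same scene_id, the backgrounds fuse while the eye-specific targets force the two eyes to cooperate.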
S120: receiving path feedback information generated when the user, after observing the position of the fixed task target, controls the controlled training target to move relative to the fixed task target.
Specifically, the first image and the second image, displayed independently to the user's two eyes, form a fused image in the brain once both eyes register them; the final effect is shown in FIG. 2. The fused image contains the parts common to the first and second images as well as all elements from their differing parts. The user's two eyes must cooperate: through an external input device (such as a VR handle, keyboard, or mouse), the user steers the controlled training target in the first image to the position of the fixed task target in the second image (as shown in FIG. 3). The path along which the controlled training target moves toward the fixed task target serves as the path feedback information, which the external input device transmits to the VR device.
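The patent does not specify a data format for the path feedback information; one plausible minimal representation, assumed here for illustration, is the sequence of positions the controlled target traversed, from which the system can derive how close the path came to the fixed task target.

```python
import math

def path_deviation(recorded_path, target_pos):
    """Minimum distance from any point of the user's recorded path to the
    fixed task target; 0.0 means the controlled target reached the goal."""
    return min(math.dist(p, target_pos) for p in recorded_path)

# Hypothetical 2D example: a path that ends just short of the target.
path = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.1)]
dev = path_deviation(path, (2.0, 1.0))
```

A deviation threshold on this value could then decide whether the fixed task target counts as reached.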
S130: adaptively reconfiguring the controlled training target and the fixed task target according to the path feedback information, and displaying the reconfigured targets.
Specifically, the strength of binocular vision fusion differs between users, so the path feedback information formed by a user's operation may deviate from the true path quantified by the system; that is, the controlled training target may fail to reach the position of the fixed task target accurately. When the controlled training target cannot reach the fixed task target, the previously configured controlled training target and fixed task target are reconfigured: training difficulty is reduced by lowering the controlled training target's moving speed or enlarging the fixed task target, and once the user can complete the lower-difficulty training, the difficulty is increased appropriately to improve binocular vision fusion ability. Conversely, when the user succeeds, the difficulty continues to increase so that the user's binocular vision fusion ability is further improved.
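The adaptive rule described above (slow down and enlarge on failure, speed up and shrink on success) can be sketched as a pure function. The multipliers and clamping bounds below are illustrative assumptions; the patent specifies only the direction of adjustment.

```python
def reconfigure(speed, target_size, reached,
                min_speed=0.2, max_speed=3.0, min_size=0.5, max_size=4.0):
    """Adjust difficulty from path feedback: easier after a miss,
    harder after a hit. Returns the new (speed, target_size)."""
    if reached:
        speed = min(speed * 1.2, max_speed)             # harder: faster movement
        target_size = max(target_size * 0.9, min_size)  # harder: smaller goal
    else:
        speed = max(speed * 0.8, min_speed)             # easier: slower movement
        target_size = min(target_size * 1.1, max_size)  # easier: bigger goal
    return speed, target_size
```

Iterating this function across training rounds yields the iterative promotion process described in step S140.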
In this embodiment, different images are displayed independently to the user's two eyes, and the user forms a fused image in the brain by observing with both eyes, which fully exercises cooperation between the eyes. Controlling the controlled target trains binocular motion control, focusing, and coordination, and controlling the moving target's path improves the sense of spatial direction. Through separated display in a virtual-reality scene, binocular vision fusion training becomes more engaging, effective, and convenient, and compliance improves so that the user participates in the training actively. Adaptive configuration of the visual training content enables iterative binocular vision fusion training, improving the training effect.
On the basis of the above embodiment, an embodiment of the present invention further includes the following steps:
and S140, forming an iterative promotion process of training by continuously controlling the controlled training target to move relative to the fixed task target.
Although training through steps S110 to S130 has a certain effect, it does not by itself quantify the training. In this embodiment, training continues regardless of whether the user has completed a round of binocular vision fusion training, forming an iterative promotion process. Through this iterative process the user continuously exercises binocular vision fusion ability, the training becomes quantifiable, and the upper limit of fusion ability is raised, achieving the goal of binocular vision fusion training.
On the basis of the above embodiment, in an embodiment of the present invention, after S110, the method further includes the following steps:
s150, carrying out inhibition adjustment on the whole picture of the first image or the second image in the virtual training scene, so that the virtual training scene is respectively adapted to the eyes of the user.
Specifically, everyone has a dominant eye, which may be the left or the right. The brain preferentially analyzes and processes the image observed by the dominant eye and attenuates the image observed by the non-dominant eye, so the non-dominant eye's image appears more blurred, and some graphic elements of the fused image become indistinct or indistinguishable. Dominant and non-dominant eyes can be identified under a physician's testing. For users with weak binocular vision fusion ability, participating in the training may be difficult: as shown in FIG. 4, the controlled training target in the fused image formed by the user's brain is blurred, so training cannot proceed.
In actual use, the first image or the second image, each together with the virtual training scene, is displayed to the dominant eye, and the other image, also with the virtual training scene, is displayed to the non-dominant eye. That is, each image is displayed simultaneously with the virtual training scene, the dominant and non-dominant eyes always receive different images, and when the first image is shown to the dominant eye the second image is shown to the non-dominant eye, and vice versa.
Suppression adjustment is applied when the user's binocular vision fusion ability is weak. During training, the image displayed to the dominant eye can be suppressed to reduce the visual information the dominant eye acquires, thereby relatively enhancing the information acquired by the non-dominant eye; alternatively, the non-dominant eye's information can be enhanced directly through anti-suppression. Either way, the user gains the ability to fuse the two views and see the fused image clearly (as shown in FIG. 5), and can better adapt to the training.
In this embodiment, for a patient whose binocular vision fusion ability is too low to train, suppressing or anti-suppressing the whole image displayed to the dominant or non-dominant eye lets the user see a clear fused image by external means and obtain second-grade visual function (fusion), solving the problem that such a user cannot perform binocular vision fusion training.
On the basis of the above embodiments, in an embodiment of the present invention, the suppression adjustment specifically includes contrast adjustment and/or brightness adjustment and/or blur-degree adjustment.
Specifically, the external means of suppression adjustment enables the user to obtain second-grade visual function, i.e. binocular fusion ability, during the training process of step S150. Suppression adjustment is realized by adjusting the contrast, brightness, and blur parameters: any one of the three may be adjusted alone, any two in combination, or all three simultaneously. Note that the contrast parameter is set in the range 0% to 100% and is mainly used to enhance the non-dominant eye's view, though it can also suppress the dominant eye. Brightness adjustment is relatively universal: the brightness parameter is set in the range 0% to 100% and mainly enhances the non-dominant eye's display; it can also suppress the dominant eye, though it is usually not the preferred choice for that. The blur parameter is set in the range 0 to 50 and is mainly used to suppress the dominant eye.
In this embodiment, during binocular vision fusion training the user's fusion ability can be restored by adjusting the contrast, brightness, and blur parameters. Because there are few parameters and adjustment is convenient, preparation before training is reduced and training efficiency improves.
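The stated parameter ranges (contrast and brightness 0% to 100%, blur 0 to 50) suggest a simple clamping helper for the suppression adjustment. The function name and the dict-based parameter representation below are hypothetical sketches, not the patent's implementation.

```python
def apply_suppression(image_params, contrast=100.0, brightness=100.0, blur=0):
    """Return a copy of image_params with the three suppression parameters
    clamped to the ranges stated in the text: contrast/brightness 0-100 (%),
    blur 0-50."""
    params = dict(image_params)
    params["contrast"] = min(max(contrast, 0.0), 100.0)
    params["brightness"] = min(max(brightness, 0.0), 100.0)
    params["blur"] = min(max(blur, 0), 50)
    return params
```

For example, suppressing a dominant eye might lower its contrast and raise its blur, while the non-dominant eye's view keeps full contrast and brightness.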
In an embodiment of the invention, the virtual training scene includes an obstacle, and the controlled training target is controlled to avoid the obstacle while moving relative to the fixed task target.
Specifically, the obstacles in the virtual training scene may be virtual trees, mountains, clouds, or other objects, and the difficulty level can be controlled through the number of obstacles. The user steers the controlled training target through the external input device and, during training, must make it avoid the obstacles while contacting or passing through the positions of the fixed task targets.
The controlled training target moves at a constant, determined reference speed, and the reference speed is adjustable under control.
Specifically, the controlled training target is not dragged freely by the user; it has a set reference moving speed, which determines the training difficulty. The moving speed is defined by the time the controlled target takes to cover a given distance: the slower the speed, the lower the difficulty, and the faster the speed, the higher the difficulty.
There are multiple fixed task targets displayed at different positions in the virtual training scene, and the size of each fixed task target is adjustable under control.
Specifically, multiple fixed task targets are set and dispersed throughout the virtual training scene. The controlled training target must contact the dispersed fixed task targets, which trains the degree of coordination between the two eyes. Naturally, the size of the fixed task targets can be reduced to increase the training difficulty.
In this embodiment, the training difficulty can be adapted conveniently, according to the received path feedback information, by changing the number of obstacles in the virtual training environment, the moving speed of the controlled training target, or the size of the fixed task targets. Because the difficulty can be adjusted in several ways, the training process can be tuned more precisely.
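The three difficulty knobs named in this embodiment (obstacle count, reference speed, fixed-target size) can be grouped into one configuration object with transitions in both directions. The class name, defaults, and step factors are illustrative assumptions rather than values from the patent.

```python
from dataclasses import dataclass

@dataclass
class DifficultyConfig:
    """The three adjustable difficulty parameters named in the text."""
    num_obstacles: int = 3        # more obstacles -> harder
    reference_speed: float = 1.0  # faster constant speed -> harder
    target_size: float = 1.0      # smaller fixed targets -> harder

    def harder(self):
        return DifficultyConfig(self.num_obstacles + 1,
                                self.reference_speed * 1.25,
                                self.target_size * 0.8)

    def easier(self):
        return DifficultyConfig(max(self.num_obstacles - 1, 0),
                                self.reference_speed * 0.8,
                                self.target_size * 1.25)
```

A session controller could call harder() or easier() per round depending on the path feedback, realizing the adaptive reconfiguration of step S130.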
Based on the same inventive concept, an embodiment of the present application further provides a VR-based binocular vision fusion training device, which can be used to implement the VR-based binocular vision fusion training method described in the foregoing embodiments, as described in the following embodiments. Because the principle by which the VR-based binocular vision fusion training device solves the problem is similar to that of the VR-based binocular vision fusion training method, reference may be made to the implementation of the method, and repeated descriptions are omitted. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the system described in the embodiments below is preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
The present invention provides a VR-based binocular vision fusion training device, as shown in FIG. 6. The device includes:
training task creation unit 210: configured to independently display a first image and a second image to the user's two eyes in the same virtual training scene, wherein the first image includes a controlled training target that is allowed to move in the virtual training scene, and the second image includes a fixed task target;
the path feedback information receiving unit 220: configured to receive path feedback information for controlling the controlled training target to move relative to the position of the fixed task target after the user observes the position of the fixed task target;
the adaptive configuration unit 230: configured to adaptively reconfigure the controlled training target and the fixed task target according to the path feedback information, and to display the reconfigured controlled training target and fixed task target.
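For illustration only, the three units above can be pictured as a simple pipeline: create the task, score the path feedback, reconfigure. The class, method names, coordinates, and scoring rule below are hypothetical and not taken from the patent:

```python
class TrainingDevice:
    """Minimal sketch of the unit pipeline: task creation ->
    path feedback reception -> adaptive reconfiguration."""

    def __init__(self, config):
        self.config = config  # holds e.g. the fixed task target size

    def create_task(self):
        # training task creation unit: one shared scene, two per-eye images
        return {'left_eye': {'controlled': (0, 0)},
                'right_eye': {'fixed_targets': [(5, 5), (9, 2)]}}

    def receive_feedback(self, path):
        # path feedback unit: fraction of fixed task targets the path reached
        targets = self.create_task()['right_eye']['fixed_targets']
        reached = sum(1 for t in targets if t in path)
        return reached / len(targets)

    def reconfigure(self, success_rate):
        # adaptive configuration unit: shrink targets when the user succeeds
        if success_rate >= 0.5:
            self.config['target_size'] *= 0.9
        return self.config
```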
In an embodiment of the present invention, the device further includes:
the iterative training facilitation unit 240: configured to form an iterative facilitation process of training by continuously controlling the controlled training target to move relative to the fixed task target position.
In an embodiment of the present invention, the device further includes:
binocular suppressing unit 250: configured to apply suppression adjustment to the whole picture of the first image or the second image in the virtual training scene, so that the virtual training scene is adapted to each of the user's eyes respectively.
In an embodiment of the present invention, the suppression adjustment in the binocular suppressing unit 250 includes contrast adjustment and/or brightness adjustment and/or blur adjustment.
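As a non-limiting illustration of how the three suppression adjustments might be applied to one eye's image, the sketch below operates on a one-dimensional row of grayscale pixels in [0, 1]; the function names and parameterization are assumptions, and a real implementation would operate on the full rendered image:

```python
def adjust_contrast(pixels, factor):
    """Scale each pixel's deviation from mid-gray (0.5) by `factor`."""
    return [min(1.0, max(0.0, (p - 0.5) * factor + 0.5)) for p in pixels]

def adjust_brightness(pixels, factor):
    """Multiply every pixel by `factor`, clamped to [0, 1]."""
    return [min(1.0, p * factor) for p in pixels]

def box_blur(pixels, radius=1):
    """Simple 1-D box blur: average each pixel with its neighbours."""
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out
```

Lowering the contrast or brightness of, or blurring, the dominant eye's whole picture in this way is one plausible realization of the per-eye suppression adjustment described above.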
In an embodiment of the present invention, the virtual training scene in the training task creation unit 210 includes obstacles, and the controlled training target is controlled to avoid the obstacles while moving relative to the fixed task target; the controlled training target moves at a constant speed determined by a reference speed, and the reference speed is controllably adjustable; there are a plurality of fixed task targets displayed at different positions in the virtual training scene, and the size of each fixed task target is controllably adjustable.
An embodiment of the present application further provides a specific implementation of an electronic device capable of implementing all the steps of the VR-based binocular video fusion training method in the foregoing embodiments. Referring to FIG. 7, the electronic device 300 specifically includes the following:
a processor 310, a memory 320, a communication unit 330, and a bus 340;
the processor 310, the memory 320 and the communication unit 330 complete communication with each other through the bus 340; the communication unit 330 is used for implementing information transmission between server-side devices and terminal devices and other related devices.
The processor 310 is configured to call a computer program in the memory 320; by executing the computer program, the processor implements all the steps of the method in the above embodiments.
Those of ordinary skill in the art will understand that: the Memory may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Read-Only Memory (EPROM), an electrically Erasable Read-Only Memory (EEPROM), and the like. The memory is used for storing programs, and the processor executes the programs after receiving the execution instructions. Further, the software programs and modules within the aforementioned memories may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.), and may communicate with various hardware or software components to provide an operating environment for other software components.
The processor may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The present application further provides a computer-readable storage medium comprising a program which, when executed by a processor, is configured to perform a VR-based binocular fusion training method provided by any of the preceding method embodiments.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media capable of storing program codes, such as ROM, RAM, magnetic or optical disk, etc., and the specific type of media is not limited in this application.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A VR-based binocular video fusion training method is characterized by comprising the following steps:
independently displaying a first image and a second image in the same virtual training scene for two eyes of a user respectively, wherein the first image comprises a controlled training target, the controlled training target is allowed to move in the virtual training scene, and the second image comprises a fixed task target;
receiving path feedback information for controlling the controlled training target to move relative to the fixed task target position after a user observes the position of the fixed task target;
and adaptively reconfiguring the controlled training target and the fixed task target according to the path feedback information and displaying the reconfigured controlled training target and the fixed task target.
2. The VR-based binocular video fusion training method of claim 1, further comprising:
an iterative facilitation process of training is formed by continuously controlling the movement of the controlled training targets relative to the fixed task target positions.
3. The VR-based binocular video fusion training method of claim 1, further comprising, after the step of independently displaying the first image and the second image for both eyes of the user in the same virtual training scene:
and carrying out inhibition adjustment on the whole picture of the first image or the second image in the virtual training scene, so that the virtual training scene is respectively adapted to the eyes of the user.
4. The VR-based binocular vision fusion training method of claim 3, wherein the inhibition adjustments include contrast adjustments and/or brightness adjustments and/or blur adjustments.
5. The VR-based binocular vision fusion training method of claim 1, wherein the virtual training scene includes obstacles, and the controlled training target is controlled to avoid the obstacles while moving relative to the fixed task target; the controlled training target moves at a constant speed determined by a reference speed, and the reference speed is controlled and adjustable; there are a plurality of fixed task targets, the display positions of the fixed task targets in the virtual training scene are different, and the size of each fixed task target is controlled and adjustable.
6. A VR-based binocular fusion training device, the device comprising:
a training task creation unit: configured to independently display a first image and a second image to the user's two eyes in the same virtual training scene, wherein the first image comprises a controlled training target, the controlled training target is allowed to move in the virtual training scene, and the second image comprises a fixed task target;
a path feedback information receiving unit: configured to receive path feedback information for controlling the controlled training target to move relative to the fixed task target position after the user observes the position of the fixed task target;
an adaptive configuration unit: configured to adaptively reconfigure the controlled training target and the fixed task target according to the path feedback information and to display the reconfigured controlled training target and fixed task target.
7. The VR-based binocular fusion training apparatus of claim 6, further comprising:
an iterative training promotion unit: configured to form an iterative facilitation process of training by continuously controlling the controlled training target to move relative to the fixed task target position.
8. The VR-based binocular fusion training apparatus of claim 6, further comprising:
a binocular suppressing unit: configured to perform suppression adjustment on the whole picture of the first image or the second image in the virtual training scene, so that the virtual training scene is adapted to each of the user's eyes respectively.
9. An electronic device, characterized in that the device comprises:
a processor, a memory, an interface to communicate with a gateway;
the memory is used for storing programs and data, and the processor calls the programs stored in the memory to execute the VR-based binocular fusion training method of any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a program which, when executed by a processor, is adapted to perform a VR-based binocular fusion training method as recited in any one of claims 1-5.
CN202110378026.XA 2021-04-08 2021-04-08 VR-based binocular video fusion training method and device Pending CN113101158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110378026.XA CN113101158A (en) 2021-04-08 2021-04-08 VR-based binocular video fusion training method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110378026.XA CN113101158A (en) 2021-04-08 2021-04-08 VR-based binocular video fusion training method and device

Publications (1)

Publication Number Publication Date
CN113101158A true CN113101158A (en) 2021-07-13

Family

ID=76715153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110378026.XA Pending CN113101158A (en) 2021-04-08 2021-04-08 VR-based binocular video fusion training method and device

Country Status (1)

Country Link
CN (1) CN113101158A (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4756305A (en) * 1986-09-23 1988-07-12 Mateik William J Eye training device
JPH08206166A (en) * 1995-02-03 1996-08-13 Sony Corp Method and apparatus for training visual function of both eyes
JPH0951927A (en) * 1995-08-11 1997-02-25 Sony Corp Binocular vision training device
US20060087618A1 (en) * 2002-05-04 2006-04-27 Paula Smart Ocular display apparatus for assessment and measurement of and for treatment of ocular disorders, and methods therefor
CN107088145A (en) * 2017-04-25 2017-08-25 深圳职业技术学院 Visual function training method and system
CN108478399A (en) * 2018-02-01 2018-09-04 上海青研科技有限公司 A kind of amblyopia training instrument
CN108478401A (en) * 2018-03-06 2018-09-04 大陆视觉(北京)眼镜销售有限公司 Amblyopia training rehabilitation system and method based on VR technologies
CN109645953A (en) * 2019-01-25 2019-04-19 北京十维度科技有限公司 Vision-based detection and training method, device and VR equipment
CN109758107A (en) * 2019-02-14 2019-05-17 郑州诚优成电子科技有限公司 A kind of VR visual function examination device
CN110300567A (en) * 2016-12-15 2019-10-01 埃登卢克斯公司 For improving the vision training apparatus of fusion function
CN110433062A (en) * 2019-08-14 2019-11-12 沈阳倍优科技有限公司 A kind of visual function training system based on dynamic video image
CN111202663A (en) * 2019-12-31 2020-05-29 浙江工业大学 Vision training learning system based on VR technique
CN111770745A (en) * 2017-12-12 2020-10-13 埃登卢克斯公司 Visual training device for fusing vergence and spatial visual training
CN112466431A (en) * 2020-11-25 2021-03-09 天津美光视能电子科技有限公司 Training control method and device based on visual training and electronic equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115836961A (en) * 2022-12-19 2023-03-24 广州视景医疗软件有限公司 Stereoscopic vision training method, device and equipment
CN115836961B (en) * 2022-12-19 2023-12-26 广州视景医疗软件有限公司 Stereoscopic vision training method, device and equipment
CN116807849A (en) * 2023-06-20 2023-09-29 广州视景医疗软件有限公司 Visual training method and device based on eye movement tracking
CN116807849B (en) * 2023-06-20 2024-05-03 广州视景医疗软件有限公司 Visual training method and device based on eye movement tracking

Similar Documents

Publication Publication Date Title
US11287884B2 (en) Eye tracking to adjust region-of-interest (ROI) for compressing images for transmission
CN106484116B (en) The treating method and apparatus of media file
EP3757727B1 (en) Image re-projection for foveated rendering
CN106412563A (en) Image display method and apparatus
TWI669635B (en) Method and device for displaying barrage and non-volatile computer readable storage medium
CN113101158A (en) VR-based binocular video fusion training method and device
US10885651B2 (en) Information processing method, wearable electronic device, and processing apparatus and system
CN107744451B (en) Training device for binocular vision function
CN108596106A (en) Visual fatigue recognition methods and its device, VR equipment based on VR equipment
CN113552947B (en) Virtual scene display method, device and computer readable storage medium
CN109951642B (en) Display method, display device, electronic apparatus, and storage medium
CN113101159B (en) VR-based stereoscopic vision training and evaluating method and device
CN106200908B (en) A kind of control method and electronic equipment
CN106406501A (en) Method and device for controlling rendering
JP2023515205A (en) Display method, device, terminal device and computer program
CN115423989A (en) Control method and component for AR glasses picture display
US10417811B2 (en) Recording medium, information processing apparatus, and control method
CN111417918B (en) Method for rendering a current image on a head-mounted display, corresponding device, computer program product and computer-readable carrier medium
CN116850012B (en) Visual training method and system based on binocular vision
CN105513113B (en) A kind of image processing method and electronic equipment
JPWO2018150711A1 (en) Display control device, display control device control method, and control program
CN114070956B (en) Image fusion processing method, system, equipment and computer readable storage medium
CN118092626A (en) Display control method and device and head-mounted display equipment
CN106034233B (en) Information processing method and electronic equipment
CN117891330A (en) Screen display control method and device of intelligent glasses, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210713