CN115590733A - Vision training method and device - Google Patents

Vision training method and device

Info

Publication number
CN115590733A
Authority
CN
China
Prior art keywords
virtual image
vision
range
motion trail
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110771755.1A
Other languages
Chinese (zh)
Inventor
吴巨帅
李江
高少锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110771755.1A priority Critical patent/CN115590733A/en
Publication of CN115590733A publication Critical patent/CN115590733A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H - PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H5/00 - Exercisers for the eyes
    • A61H5/005 - Exercisers for training the stereoscopic view
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements


Abstract

A vision training method and apparatus are provided to address the problem that display devices in the prior art cannot prevent and/or correct pseudomyopia. The method is applicable to a display device comprising a display assembly for displaying an image of a target and an optical imaging assembly, which includes a solid-state zoom assembly, for forming the image into a virtual image. The method comprises: obtaining a vision parameter of a user, and obtaining a first adjustment range of the virtual image according to the vision parameter and a correspondence between vision parameters and virtual image positions; adjusting the focal length of the solid-state zoom assembly based on the first adjustment range to move the virtual image, and obtaining a first motion trail of the virtual image; and training and evaluating the user's vision according to an acquired first motion trail of the binocular viewpoint and the first motion trail of the virtual image. Movement of the target image can be achieved by changing the focal length of the solid-state zoom assembly, and the effect of preventing and/or correcting pseudomyopia is achieved as the two eyes track the moving image.

Description

Vision training method and device
Technical Field
The application relates to the technical field of vision training, in particular to a vision training method and device.
Background
Myopia, a condition in which parallel rays of light are focused in front of the retina by the eye's dioptric system while lens accommodation is relaxed, has become an important problem affecting human health. Myopia can be divided into pseudomyopia and true myopia. In pseudomyopia, the crystalline lens loses its accommodation ability because of ciliary muscle spasm or similar causes, and the ciliary muscle can be relaxed through training and the like, thereby achieving a therapeutic effect. If myopia can be prevented and/or pseudomyopia corrected through a head-mounted display device, the application scenarios of head-mounted displays can be expanded.
Commonly used head-mounted display devices include augmented reality (AR) devices and virtual reality (VR) devices. Both AR and VR devices can combine virtual information with information of the real world; that is, entity information (such as visual information, sound, or touch) that is difficult to experience within the time and space range of the real world can be simulated by a computer or the like and superimposed onto the real world.
In summary, how to implement the AR device or the VR device with the function of preventing and/or correcting pseudomyopia is a technical problem that needs to be solved at present.
Disclosure of Invention
The application provides a vision training method and device, which are used for realizing that display equipment has the function of preventing and/or correcting pseudomyopia.
In a first aspect, the present application provides a vision training method applicable to a display device, where the display device includes a display assembly and an optical imaging assembly, the display assembly is configured to display an image of a target, and the optical imaging assembly is configured to form a virtual image of the image, the optical imaging assembly including a solid-state zoom assembly. The method comprises the following steps: acquiring a vision parameter of a user, and determining a first adjustment range of the virtual image according to the vision parameter and a correspondence between vision parameters and virtual image positions; adjusting the focal length of the solid-state zoom assembly according to the first adjustment range to move the virtual image, and determining a first motion trail of the virtual image; and acquiring a first motion trail of the user's binocular viewpoint, and training and evaluating the user's vision according to the first motion trail of the binocular viewpoint and the first motion trail of the virtual image.
Based on the above scheme, the virtual image is moved by adjusting the focal length of the solid-state zoom assembly; adjusting this focal length produces little noise and allows the virtual image to be moved quickly and randomly. As the user's two eyes track the moving virtual image, the binocular ciliary muscles and the corresponding visual system are exercised, so that the display device can have the function of preventing and/or correcting pseudomyopia.
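As an illustration only, the overall flow of the first aspect can be sketched as follows; the function names and the stubbed device behaviour are assumptions for the example, not interfaces defined by this application.

```python
# A minimal, self-contained sketch of the method of the first aspect. The
# device and eye tracker are replaced by stubs so the logic runs as-is; all
# names below are illustrative assumptions, not interfaces from this application.

def first_adjustment_range(myopia_degrees: int) -> tuple[float, float]:
    # Placeholder lookup consistent with the worked examples given later (Table 1).
    table = {200: (0.083, 0.5), 300: (0.0769, 0.333)}  # (nearest m, farthest m)
    return table[myopia_degrees]

def move_virtual_image(near_m: float, far_m: float, steps: int = 5) -> list[float]:
    # Progressive movement of the virtual image from nearest to farthest position.
    return [near_m + i * (far_m - near_m) / (steps - 1) for i in range(steps)]

def track_binocular_viewpoint(image_trail: list[float]) -> list[float]:
    # Stub eye tracker: a perfect tracker reproduces the image trail exactly.
    return list(image_trail)

near, far = first_adjustment_range(200)              # vision parameter -> first adjustment range
image_trail = move_virtual_image(near, far)          # first motion trail of the virtual image
gaze_trail = track_binocular_viewpoint(image_trail)  # first motion trail of the binocular viewpoint
# Training is then evaluated by comparing the two trails against a threshold,
# as detailed in the implementations below.
```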
In one possible implementation, the solid-state zoom assembly includes a zoom lens; a range of a first electrical signal applied to the zoom lens is determined according to the first adjustment range, and the focal length of the zoom lens is adjusted, by changing the first electrical signal within that range, to move the virtual image.
By changing the electrical signal applied to the zoom lens, the focal length of the zoom lens can be changed, so that moving the virtual image can be achieved.
In one possible implementation, the solid-state zoom assembly includes a deformable mirror, and a range of electrostatic force or a range of electromagnetic force applied to the deformable mirror is determined according to the first adjustment range, and the deformable mirror is driven to deform or displace to move the virtual image within the range of electrostatic force or the range of electromagnetic force.
By changing the range of the electrostatic force or the electromagnetic force applied to the deformable mirror, the deformable mirror can be deformed or displaced, so that the virtual image can be moved.
In one possible implementation manner, the correspondence between the vision parameter and the position of the virtual image satisfies: the difference between the reciprocal of the photopic range of a person with normal vision and the reciprocal of the photopic range of the user is equal to the reciprocal of the focal length of a corrective lens, where the focal length of the corrective lens is related to the user's vision parameter. If the corrective lens is a concave lens, its focal length is negative; if the corrective lens is a convex lens, its focal length is positive.
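Written out, and consistent with Formula 1 in the detailed description below, this correspondence is:

```latex
\frac{1}{U} - \frac{1}{V} = \frac{1}{F}
```

where U is the photopic range of a person with normal vision, V is the photopic range of the user, and F is the focal length of the corrective lens (for example, F = -0.5 m for a concave lens correcting 200 degrees of myopia).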
In a possible implementation manner, if the coincidence degree of the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image is greater than a first threshold value, it is determined that the vision training reaches the standard.
If the coincidence degree of the first motion trail of the binocular viewpoint and the first motion trail of the virtual image is greater than the first threshold, it indicates that, during movement of the virtual image, the two eyes can follow the movement of the virtual image in most cases, so that training of the ciliary muscles of both eyes is realized, and the display device can have the function of preventing and/or correcting pseudomyopia.
In another possible implementation manner, if the coincidence degree of the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image is not greater than the first threshold, the first adjustment range of the virtual image is narrowed.
If the coincidence degree of the first motion trail of the binocular viewpoint and the first motion trail of the virtual image is not greater than the first threshold, it indicates that the two eyes cannot track the movement of the virtual image in time in most cases; therefore, by narrowing the first adjustment range of the virtual image, the movement of the virtual image can be tracked as well as possible.
In one possible implementation, the display assembly includes a display screen; the method further comprises: moving the virtual image within a second adjustment range along the upper and lower boundaries of the display screen, and determining a second motion trajectory of the virtual image; and obtaining a second motion track of the binocular viewpoint, and if the coincidence degree of the second motion track of the binocular viewpoint and the second motion track of the virtual image is greater than a second threshold, determining that the vision training meets the standard.
In one possible implementation, the display assembly includes a display screen; the method further comprises the following steps: moving the virtual image within a third adjustment range along the left and right boundaries of the display screen, and determining a third motion trajectory of the virtual image; and acquiring a third motion track of the binocular viewpoints, and determining that the eyesight training reaches the standard if the coincidence degree of the third motion track of the binocular viewpoints and the third motion track of the virtual image is greater than a third threshold value.
By moving the virtual image within the second adjustment range along the upper and lower boundaries of the display screen, and/or within the third adjustment range along the left and right boundaries of the display screen, the binocular ciliary muscles can be trained in those directions as well, which helps further prevent and/or correct pseudomyopia.
In one possible implementation, the virtual image may be moved in a progressive manner or in a random manner.
In a second aspect, the present application provides a vision training apparatus for implementing the method of the first aspect or any possible implementation of the first aspect, including corresponding functions for implementing the steps of the above method. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In one possible implementation manner, the vision training apparatus is applicable to a display device, the display device includes a display component and an optical imaging component, the display component is used for displaying an image of a target, the optical imaging component is used for forming a virtual image of the image, and the optical imaging component includes a solid-state zoom component; the vision training device comprises an acquisition module and a processing module.
The acquisition module is used for acquiring vision parameters of a user; the processing module is used for determining a first adjusting range of the virtual image according to the vision parameters and the corresponding relation between the vision parameters and the position of the virtual image; adjusting the focal length of the solid-state zooming assembly to move the virtual image according to the first adjusting range, and determining a first motion trail of the virtual image; the acquisition module is also used for acquiring a first motion trail of a binocular viewpoint of a user; the processing module is further used for training and evaluating the vision of the user according to the first motion trail of the binocular viewpoint and the first motion trail of the virtual image.
In one possible implementation, the solid state zoom assembly includes a zoom lens; the processing module is specifically configured to: determining a range of the first electric signal applied to the zoom lens according to the first adjustment range; the first electric signal applied to the zoom lens is changed within a range of the first electric signal, and a focal length of the zoom lens is adjusted to move the virtual image.
In one possible implementation, the solid state zoom assembly includes a deformable mirror; the processing module is specifically used for determining the range of the electrostatic force or the range of the electromagnetic force applied to the deformable mirror according to the first adjusting range; and in the range of electrostatic force or electromagnetic force, the deformable mirror is driven to deform or displace to move the virtual image.
In a possible implementation manner, the correspondence between the vision parameter and the position of the virtual image satisfies: the difference between the reciprocal of the photopic range of a person with normal vision and the reciprocal of the photopic range of the user is equal to the reciprocal of the focal length of a corrective lens, where the focal length of the corrective lens is related to the user's vision parameter. If the corrective lens is a concave lens, its focal length is negative; if the corrective lens is a convex lens, its focal length is positive.
In a possible implementation manner, the processing module is specifically configured to determine that the eyesight training is up to standard if a coincidence degree of the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image is greater than a first threshold.
In one possible implementation, the display assembly includes a display screen; the processing module is further configured to move the virtual image within a second adjustment range along the upper and lower boundaries of the display screen, and determine a second motion trajectory of the virtual image; the acquisition module is also used for acquiring a second motion track of the binocular viewpoint; the processing module is further used for determining that the eyesight training reaches the standard if the coincidence degree of the second motion trail of the binocular viewpoint and the second motion trail of the virtual image is larger than a second threshold value.
In one possible implementation, the display assembly includes a display screen; the processing module is further configured to move the virtual image within a third adjustment range along left and right boundaries of the display screen, and determine a third motion trajectory of the virtual image; the acquisition module is also used for acquiring a third motion trail of the binocular viewpoint; the processing module is further used for determining that the eyesight training reaches the standard if the coincidence degree of the third motion trail of the binocular viewpoint and the third motion trail of the virtual image is greater than a third threshold value.
In a possible implementation manner, the processing module is further configured to narrow the first adjustment range of the virtual image if a coincidence degree of the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image is not greater than a first threshold.
In a possible implementation, the processing module is specifically configured to move the virtual image in a progressive manner, or in a random manner.
For technical effects that can be achieved by the second aspect or any possible implementation manner of the second aspect, reference may be made to the description of the beneficial effects in the first aspect, and details are not repeated here.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon a computer program or instructions which, when executed by a vision training apparatus, cause the vision training apparatus to perform the method of the first aspect or any possible implementation manner of the first aspect.
In a fourth aspect, the present application provides a computer program product comprising a computer program or instructions for implementing the method of the first aspect or any possible implementation manner of the first aspect when the computer program or instructions are executed by a vision training apparatus.
Drawings
Fig. 1a is a schematic structural diagram of a display device provided in the present application;
FIG. 1b is a light path diagram of a display device provided herein;
FIG. 2a is a schematic structural diagram of a liquid crystal lens provided herein;
FIG. 2b is a schematic diagram illustrating a relationship between a voltage applied to a liquid crystal lens and a focal length according to the present application;
FIG. 2c is a schematic structural diagram of another liquid crystal lens provided herein;
FIG. 2d is a schematic structural diagram of another liquid crystal lens provided herein;
FIG. 3a is a schematic diagram of the present application illustrating a change in polarization state of incident light;
FIG. 3b is a schematic diagram of an electrically controlled twisted liquid crystal structure according to the present application for changing the polarization state of incident light;
FIG. 4a is a schematic structural diagram of a liquid lens provided herein;
FIG. 4b is a schematic structural diagram of a liquid lens provided herein;
FIG. 5 is a schematic diagram of a deformable mirror according to the present application;
FIG. 6a is a schematic structural diagram of an optical imaging assembly provided herein;
FIG. 6b is an optical path diagram in an optical imaging assembly provided herein;
FIG. 7 illustrates a vision training method provided herein;
FIG. 8a is a schematic view of a first interface provided herein;
FIG. 8b is a schematic view of another first interface provided herein;
fig. 9a is a schematic diagram of a moving track of a virtual image provided in the present application;
fig. 9b is a schematic diagram of a moving track of another virtual image provided in the present application;
FIG. 10 is a schematic view of a convergence adjustment conflict provided herein;
fig. 11 is a schematic structural diagram of a vision training device provided in the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Hereinafter, some terms in the present application will be explained. It should be noted that these explanations are for the convenience of those skilled in the art, and do not limit the scope of protection claimed in the present application.
1. Virtual image
After light emitted by an object is refracted or reflected, its path changes. When the human eye sees the refracted or reflected light, it perceives the light as coming from the intersection point of the reverse extension lines of the rays; the image formed at that intersection is the virtual image. The position of the virtual image is called the virtual image position, and the plane in which it lies is called the virtual image plane. The distance between the virtual image position and the human eye is the focusing depth. It should be understood that no actual object exists at the virtual image position, nor does light actually converge there. For example, the images formed by a plane mirror and by eyeglasses are virtual images.
2. Eyeball tracking device
Eyeball tracking refers to tracking eyeball motion by measuring the gaze point of the eye or the movement of the eyeball relative to the head. An eyeball tracking apparatus is a device capable of tracking and measuring the position and movement information of the eyeball.
3. Half mirror (semi-transparent and semi-reflective mirror)
The half mirror may also be referred to as a beam splitter. It is an optical element formed by coating a semi-reflective film on optical glass, or a transflective film on one optical surface of a lens, to change the original transmission-to-reflection ratio of the incident beam. The coating can be designed to increase transmission and reduce reflection, or to increase reflection and reduce transmission. A typical half mirror transmits and reflects incident light in a 50:50 ratio; that is, its transmittance and reflectance are each 50%, so when incident light passes through the half mirror, the transmitted and reflected light intensities each account for 50%.
Based on the above, fig. 1a is a schematic structural diagram of a display device to which the present application is applicable. The display device comprises a display assembly 101 and an optical imaging assembly 102, where the display assembly 101 is used for displaying an image of a target, the optical imaging assembly 102 is used for forming a virtual image of the image, and the optical imaging assembly 102 comprises a solid-state zoom assembly 1021. Further, optionally, the display device may also comprise a processing control component 103 configured to adjust the focal length of the solid-state zoom assembly 1021. Further, optionally, the display device also comprises a memory. A display device that includes a processing control component and a memory may be referred to as an all-in-one machine. It should be noted that a display device that does not include a processing control component and a memory may be referred to as a split machine; a split machine may include a micro processing unit.
Illustratively, the display device may be, for example, a Near Eye Display (NED) device, such as VR glasses, VR headset, AR glasses, or AR headset, or the like. Based on the display device shown in fig. 1a described above, fig. 1b exemplarily shows an imaging optical path diagram of a NED device.
The various functional components and structures shown in FIG. 1a are described separately below to give an exemplary implementation. For convenience of explanation, the display assembly, the optical imaging assembly, the process control assembly, and the solid state zoom assembly are not identified below.
1. Display assembly
In one possible implementation, the display component serves as an image source and can provide display content for the display device, for example an image of a displayable target such as a geometric figure, a cartoon picture, 3D content, or an interactive picture. That is to say, light corresponding to the image on the display component is transmitted (for example, refracted) to the human eye through the optical imaging component for imaging; the human eye sees the refracted light and perceives it as coming from the intersection point of the reverse extension lines of the rays, and the image formed at that intersection is the virtual image.
In one possible implementation, the display component may be a Liquid Crystal Display (LCD), or an Organic Light Emitting Diode (OLED), or a micro light emitting diode (micro-LED), or an active-matrix organic light emitting diode (AMOLED), or a Flexible Light Emitting Diode (FLED), or a quantum dot light emitting diode (QLED).
In yet another possible implementation, the display component may also be a reflective display screen. Such as a Liquid Crystal On Silicon (LCOS) display screen, or a reflective display screen based on a digital micro-mirror device (DMD). The LCOS and the DMD are reflective structures, and therefore, the resolution or the aperture ratio thereof is high.
It should be noted that before the display component displays the image, the display component also needs to perform a rendering operation on the image, for example, the processing control component renders the image first, and the display component may display the rendered image.
2. Optical imaging assembly
In a possible implementation manner, the optical imaging assembly is configured to image the picture displayed by the display assembly onto a virtual image plane at a certain distance from the user's two eyes; the image on that virtual image plane is the virtual image. The optical imaging assembly comprises a solid-state zoom assembly, which may be, for example, a liquid crystal lens, a liquid lens, or a deformable mirror. These are described separately below.
Three possible liquid crystal lens configurations are exemplarily shown below.
Please refer to fig. 2a, which is a schematic structural diagram of a liquid crystal lens according to the present application. This is an ordinary liquid crystal lens: changing the magnitude of the external electric field changes the direction of the long axis of the liquid crystal molecules, producing optical anisotropy and dielectric anisotropy and hence a tunable refractive index. The tunable refractive index changes the equivalent phase of the liquid crystal lens, and changing the equivalent phase changes the focal length of the liquid crystal lens.
Referring to fig. 2b, as the voltage applied to the ordinary liquid crystal lens increases, the focal length of the lens decreases from 470 millimeters (mm) to 50 mm. In other words, the focal length of the ordinary liquid crystal lens decreases as the applied voltage increases. Illustratively, the ordinary liquid crystal lens may be an ordinary lens, or may be a Fresnel lens implemented with complex electrodes.
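As an illustration of driving such a lens, a calibration-table lookup is one straightforward approach. The voltage/focal-length pairs below are invented for the example (only the endpoints, 470 mm and 50 mm, come from the text), and the linear interpolation between points is an assumption.

```python
# Hypothetical calibration table for an ordinary liquid crystal lens:
# (drive voltage in volts, focal length in mm); the focal length decreases
# as the voltage increases, falling from 470 mm toward 50 mm as described above.
CALIBRATION = [(1.0, 470.0), (2.0, 250.0), (3.0, 120.0), (4.0, 70.0), (5.0, 50.0)]

def voltage_for_focal_length(target_mm: float) -> float:
    """Linearly interpolate the drive voltage for a requested focal length."""
    for (v0, f0), (v1, f1) in zip(CALIBRATION, CALIBRATION[1:]):
        if f1 <= target_mm <= f0:  # focal length decreases along the table
            t = (f0 - target_mm) / (f0 - f1)
            return v0 + t * (v1 - v0)
    raise ValueError("target focal length outside the calibrated range")

print(voltage_for_focal_length(470.0))  # 1.0 V (longest focal length)
print(voltage_for_focal_length(50.0))   # 5.0 V (shortest focal length)
```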
Please refer to fig. 2c, which is a schematic structural diagram of another liquid crystal lens provided in the present application. The liquid crystal lens is a reflective Liquid Crystal On Silicon (LCOS), and the direction of the long axis of liquid crystal molecules is changed by changing an external electric signal, so that the refractive index of light passing through the liquid crystal lens can be changed, and the focal length of the liquid crystal lens can be changed.
Please refer to fig. 2d, which is a schematic structural diagram of another liquid crystal lens provided in the present application. This liquid crystal lens is a liquid crystal geometric-phase (Pancharatnam-Berry, PB) lens, whose lens function is generated by the geometric phase. Further, optionally, liquid crystal PB lenses can be classified into active and passive types.
The active liquid crystal PB lens is mainly made of liquid crystal materials in a liquid crystal state, the liquid crystal materials in the liquid crystal state have fluidity, and the direction of the long axis of liquid crystal molecules can be changed by changing an electric signal applied to the liquid crystal PB lens, so that the focal length of the active liquid crystal PB lens is changed.
The passive liquid crystal PB lens is mainly composed of a liquid crystal polymer material, which can be polymerized by exposure or the like into a solid polymer; the focal length of the passive liquid crystal PB lens is changed by changing the polarization state of the incident light. For example, with parallel incident light, the focal length for left-handed circularly polarized light is 1 m and the focal length for right-handed circularly polarized light is -1 m, as shown in fig. 3a. Referring to fig. 3b, the polarization state of the incident light can be changed using an electrically controlled half-wave plate or an electrically controlled twisted nematic liquid crystal (TNLC). Since the zoom power of the liquid crystal PB lens is discrete, when a liquid crystal PB lens is used to move the virtual image, the positions the virtual image can move to are discrete; approximately continuous movement can be achieved by stacking multiple liquid crystal PB lenses (as shown in fig. 3b). For example, if the position adjustment accuracy of the virtual image is 0.25 D and the adjustment range (i.e., adjustment capability) is 0 to 4 D, then 4 D / 0.25 D = 16 discrete positions are required, which can be realized by combining 4 passive liquid crystal PB lenses with 4 TNLCs, where the TNLCs adjust the polarization states and one TNLC can switch between two polarization states (as shown in fig. 3a).
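One way to organise such a stack is binary weighting; this is a sketch under assumptions (the per-stage optical powers below are chosen for the example and are not specified in this application):

```python
# Binary-weighted PB-lens stack: 4 on/off stages give 2^4 = 16 discrete focus
# states. The per-stage powers (0.25/0.5/1.0/2.0 D) are an illustrative
# assumption; with them the stack covers 0 to 3.75 D in 0.25 D steps.
STAGE_POWERS_D = [0.25, 0.5, 1.0, 2.0]

def tnlc_states(target_diopters: float) -> list[bool]:
    """On/off state of each TNLC-controlled stage for the nearest reachable power."""
    steps = round(target_diopters / 0.25)
    if not 0 <= steps < 16:
        raise ValueError("target power outside the stack's 0-3.75 D range")
    return [bool((steps >> i) & 1) for i in range(4)]

states = tnlc_states(1.75)
print(states)  # [True, True, True, False]
print(sum(p for p, on in zip(STAGE_POWERS_D, states) if on))  # 1.75 D
```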
Two possible liquid lens configurations are shown below by way of example.
Fig. 4a is a schematic structural diagram of a liquid lens provided by the present application. Changing the electrical signal applied to the liquid lens injects or discharges liquid, which changes the shape of the deformable membrane material and thereby the focal length of the liquid lens.
Fig. 4b is a schematic view of another liquid lens structure provided in the present application. Using the electrowetting principle, this liquid lens changes the surface shape of the interface between two immiscible liquids by changing the electrical signal applied to the lens, thereby changing its focal length.
One possible configuration of the deformable mirror is shown below by way of example.
Fig. 5 is a schematic structural diagram of a deformable mirror provided in the present application. The deformable mirror may be a discrete or continuous micro-mirror. The micro-mirror is driven to deform or displace by electrostatic or electromagnetic force, and different mirror profiles are realized by regulating the electrostatic or electromagnetic force of the discrete electrodes, thereby changing the focal length of the deformable mirror. It should be noted that the reflecting surface may be a concave mirror whose curvature is adjusted by the electrostatic or electromagnetic force; concave mirrors with different curvatures have different focal lengths.
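For reference, the reason adjusting the curvature adjusts the focal length is the mirror equation: a concave mirror with radius of curvature R has focal length

```latex
f = \frac{R}{2}
```

so driving the mirror to a stronger curvature (smaller R) shortens the focal length.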
Based on the above, fig. 6a shows a schematic structural diagram of an optical imaging assembly provided by the present application. Along the optical axis, the optical imaging assembly comprises, in order, a solid-state zoom assembly, a polarizer, a first 1/4 wave plate, a half mirror, a second 1/4 wave plate, and a reflective polarizer. Based on the structure shown in fig. 6a, the optical path can be seen in fig. 6b. The solid-state zoom assembly transmits the light that forms the image from the display assembly to the polarizer, and also changes the position of the virtual image under control of an electrical signal or the like. The polarizer filters the light from the solid-state zoom assembly into a single polarization state (referred to as the first linearly polarized light), for example horizontally or vertically linearly polarized light; the polarizer may be absorptive or reflective. The first linearly polarized light may be, for example, P-polarized or S-polarized light. The first 1/4 wave plate converts the first linearly polarized light from the polarizer into first circularly polarized light and transmits it to the half mirror. The half mirror transmits the first circularly polarized light from the first 1/4 wave plate to the second 1/4 wave plate. The second 1/4 wave plate converts the received first circularly polarized light into second linearly polarized light, whose polarization direction is the same as that of the first linearly polarized light. The reflective polarizer reflects the second linearly polarized light from the second 1/4 wave plate back to the second 1/4 wave plate. The second 1/4 wave plate further converts the received second linearly polarized light into second circularly polarized light, whose handedness is the same as that of the first circularly polarized light; fig. 6b illustrates left-handed circularly polarized light. The half mirror further reflects the second circularly polarized light from the second 1/4 wave plate as third circularly polarized light, whose handedness is opposite to that of the second circularly polarized light. The second 1/4 wave plate further converts the third circularly polarized light from the half mirror into third linearly polarized light. The reflective polarizer finally transmits the third linearly polarized light to the user's eyes to form an image.
Further, optionally, one or more aberration compensation lenses may also be included in the optical imaging assembly. The aberration compensation lens may be used for aberration compensation, for example to compensate for spherical aberration, coma, astigmatism, distortion, chromatic aberration, and the like during imaging by a spherical or aspherical lens. The aberration compensation lens may be located anywhere in the imaging optical path shown in fig. 6a. Fig. 6a illustrates an example including an aberration compensation lens 1 and an aberration compensation lens 2, where lens 1 is located between the solid-state zoom assembly and the display assembly, and lens 2 is located between the reflective polarizer and the user's eyes. The aberration compensation lens may be a single spherical or aspherical lens, or a combination of multiple spherical or aspherical lenses; a combination of multiple lenses can improve the imaging quality of the system and reduce its aberrations. The material of the aberration compensation lens may be an optical resin, and the materials of lens 1 and lens 2 may be the same or different.
The optical imaging assembly may be referred to as a pancake folded optical path architecture. The pancake architecture can reduce the thickness of the display device, yielding a light and thin display device.
It should be noted that the structure of the optical imaging assembly shown in fig. 6a is merely an example. In this application, the structure of the optical imaging module may also include only the solid-state zoom module, or may also be a combination of the solid-state zoom module and other structures, which is not limited in this application. In addition, the solid-state zoom assembly in the present application may be located at any position between the display assembly and the user's eyes, and the position of the solid-state zoom assembly shown in fig. 6a described above is merely an example.
It should be noted that the half mirror in fig. 6a may be replaced by other elements that provide both transmission and reflection. The reflectivity and transmissivity may be selected according to specific requirements; for example, the reflectivity may be higher than 50% and the transmissivity lower than 50%, or the reflectivity lower than 50% and the transmissivity higher than 50%. In addition, the solid-state zoom assembly can be located at any position among the polarizer, the first 1/4 wave plate, the half mirror, the second 1/4 wave plate, and the reflective polarizer, which is not limited in the present application.
3. Process control assembly
In one possible implementation, the process control component may be used to change the focal length of the solid state zoom component. In combination with the possible structure of the solid zoom component, if the solid zoom component is the liquid crystal lens shown in fig. 2a, the liquid crystal lens shown in fig. 2c, the active liquid crystal PB lens shown in fig. 2d, the liquid lens shown in fig. 4a, or the liquid lens shown in fig. 4b, the processing and control component can be used to control an electrical signal, such as a voltage signal or a current signal, applied to the solid zoom component to change the focal length of the solid zoom component. If the solid state zoom component is a passive liquid crystal PB lens as described above in fig. 2d, the process control component can be used to control the polarization state of the incident light to change the focal length of the solid state zoom component. If the solid state zoom component is a deformable mirror as shown in fig. 5, the processing and control component can be used to control the electrostatic force or electromagnetic force driving the solid state zoom component to change the focal length of the solid state zoom component.
Further, optionally, the processing and control component may send a focusing instruction to the solid state zoom component to control the solid state zoom component to change the focal length to move the virtual image, i.e., adjust the position of the virtual image.
It should be noted that, the processing control component may also send a display instruction to the display component to control the display component to display the image of the target.
Illustratively, the processing control component may be a processor, a microprocessor, or a controller, such as a general-purpose central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
Based on the above, the present application provides a vision training method, please refer to fig. 7, which can be applied to the display device shown in any one of the embodiments of fig. 1a to 6b. It will also be appreciated that the vision training method may be implemented based on the display device shown in any of the embodiments of fig. 1a to 6b described above.
The method comprises the following steps:
and step 701, acquiring vision parameters of a user.
Specifically, the vision parameters of the user include the myopia degrees of the user's two eyes, the hyperopia degrees of the user's two eyes, the presbyopia degrees of the user's two eyes, and the like.
In one possible implementation, after the user wears the display device and selects a vision training mode (also called a vision training scenario), the display device may prompt the user to input vision parameters. For example, the display device may display on its interface an option for inputting the user's vision parameters, or prompt the user by voice, or prompt the user in other manners, which is not limited in this application.
For the convenience of description of the scheme, the following description takes the vision parameters of the user as binocular myopic degrees as an example.
As shown in fig. 8a, a schematic diagram of a first interface is illustrated for the present application. The first interface displays a drop-down option box for the left-eye myopia degree and one for the right-eye myopia degree, and the user's binocular vision parameters can be selected from these boxes. Note that -200 degrees for the left eye indicates 200 degrees of left-eye myopia, and -300 degrees for the right eye indicates 300 degrees of right-eye myopia.
Referring to fig. 8b, a schematic view of another first interface is shown for purposes of illustration. The first interface is used for displaying the virtual keyboard, the input frame of the myopia degree of the left eye and the input frame of the myopia degree of the right eye. The left-eye myopia degree can be input in the left-eye vision frame through the virtual keyboard, and the right-eye myopia degree can be input in the right-eye vision frame through the virtual keyboard.
The manner of acquiring the vision parameters of the user shown in fig. 8a and 8b is only an example, and the present application is not limited thereto. For example, the user may also input the vision parameters by voice.
Step 702, determining a first adjustment range of the virtual image according to the vision parameters and the corresponding relation between the vision parameters and the position of the virtual image.
In one possible implementation, the correspondence between the vision parameters and the positions of the virtual images is stored in advance, as can be seen in table 1.
TABLE 1 Correspondence between vision parameters and virtual image position

Vision parameter                  | Farthest position of virtual image | Nearest position of virtual image
-200 degrees (myopia 200 degrees) | 0.5 m                              | 0.083 m
-300 degrees (myopia 300 degrees) | 0.333 m                            | 0.0769 m
Based on Table 1 above, -200 degrees represents 200 degrees of myopia, for which the nearest position of the virtual image is 0.083 m and the farthest position is 0.5 m; -300 degrees represents 300 degrees of myopia, for which the nearest position of the virtual image is 0.0769 m and the farthest position is 0.333 m.
When the acquired vision parameter of the user is 200 degrees of myopia, the first adjustment range of the virtual image is determined to be 0.083 m to 0.5 m by looking up Table 1. It can also be understood that when the user's vision parameter is 200 degrees of myopia, the user can see a clear virtual image of the target at the nearest position of 0.083 m and at the farthest position of 0.5 m.
When the acquired vision parameter of the user is 300 degrees of myopia, the first adjustment range of the virtual image is determined to be 0.0769 m to 0.333 m by looking up Table 1. It can also be understood that when the user's vision parameter is 300 degrees of myopia, the user can see a clear virtual image of the target at the nearest position of 0.0769 m and at the farthest position of 0.333 m.
The correspondence relationship between the visual acuity parameters and the positions of the virtual images shown in table 1 is merely an example, and the correspondence relationship between the visual acuity parameters and the positions of the virtual images may be expressed in other forms. In addition, the numerical values of the closest position of the virtual image and the farthest position of the virtual image given in table 1 above are not absolute, and some engineering error may be allowed.
In another possible implementation, the correspondence between the vision parameter and the position of the virtual image can be expressed by imaging Formula 1 below. For example, if the acquired vision parameter of the user is 200 degrees of myopia (-200 degrees), the first adjustment range of the user's virtual image may be determined, based on Formula 1, to be V_far = 0.5 m to V_near = 0.083 m. It should be understood that the photopic range of a person with normal vision is (0.1 m, ∞), i.e., the farthest position of the virtual image is U_far = ∞ and the nearest position is U_near = 0.1 m.
1/U - 1/V = 1/F    (Formula 1)
where U is the photopic range of a person with normal vision, V is the photopic range of the user, and F is the focal length of the corrective lens. It should be understood that the photopic range refers to the range of virtual image positions at which the two eyes can see a clear image. When the corrective lens is a concave lens, its focal length is negative; when the corrective lens is a convex lens, its focal length is positive. The focal length of the corrective lens is related to the user's vision parameter; taking a concave corrective lens as an example, the focal length F corresponding to 200 degrees of myopia is -0.5 m. That is, if the user has 200-degree myopia, the photopic range of a person with normal vision can be reached through a corrective concave lens with a focal length F of -0.5 m.
For another example, if the acquired vision parameter of the user is 300 degrees of myopia (-300 degrees), then based on Formula 1, with the photopic range of a person with normal vision being (0.1 m, ∞) and the focal length F of the corresponding corrective concave lens being -1/3 m, the first adjustment range of the user's virtual image can be determined to be V_far = 0.333 m to V_near = 0.0769 m.
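A small numeric sketch of this computation follows; the only assumption beyond Formula 1 is the usual convention that 100 "degrees" equal 1 diopter, so F = -100/degrees metres.

```python
# Derive the first adjustment range from the myopia degree using Formula 1,
# 1/U - 1/V = 1/F, with the normal photopic range U in (0.1 m, infinity).
# The degrees -> focal length conversion F = -100/degrees m assumes the
# standard 100 degrees = 1 diopter convention.

def adjustment_range(myopia_degrees: float) -> tuple[float, float]:
    f = -100.0 / myopia_degrees           # e.g. 200 degrees -> F = -0.5 m
    v_far = 1.0 / (0.0 - 1.0 / f)         # U_far = infinity, so 1/U_far = 0
    v_near = 1.0 / (1.0 / 0.1 - 1.0 / f)  # U_near = 0.1 m
    return v_near, v_far

print(adjustment_range(200))  # ~(0.083, 0.5), matching Table 1
print(adjustment_range(300))  # ~(0.0769, 0.333)
```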
And 703, adjusting the focal length of the solid-state zooming assembly to move the virtual image according to the first adjusting range, and determining a first motion track of the virtual image.
In connection with the above-described structure of the solid state zoom assembly, the manner of adjusting the focal length of the solid state zoom assembly is described below.
If the solid state zoom component is the liquid crystal lens shown in fig. 2a, the liquid crystal lens shown in fig. 2c, the active liquid crystal PB lens shown in fig. 2d, the passive liquid crystal PB lens shown in fig. 2d, the liquid lens shown in fig. 4a, or the liquid lens shown in fig. 4b, the range of the first electrical signal applied to the solid state zoom component can be determined according to the first adjustment range, and the first electrical signal applied to the solid state zoom component is changed within the range of the first electrical signal, so that the adjustment of the focal length of the solid state zoom component to move the virtual image can be realized. It will be appreciated that by varying the focal length of the solid state zoom assembly, the position at which the virtual image is formed is varied.
Illustratively, take the first electrical signal to be a first voltage signal. The first voltage signal may be applied to the solid-state zoom assembly gradually from small to large within the range of the first voltage signal, so that the focal length of the solid-state zoom assembly also changes gradually, i.e., the virtual image seen by the user's two eyes moves progressively. Alternatively, the first voltage signal may be applied gradually from large to small within the range, with the same progressive effect in the opposite direction. Alternatively, the first voltage signal may take random (i.e., discrete) values within the range, so that the focal length changes randomly and the virtual image seen by the user's two eyes moves randomly. Any other possible way of changing the focal length may also be used, which is not limited in this application.
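As a sketch of these two drive patterns (the 1-5 V voltage range and the idea of applying each value in turn are assumptions for illustration, not values from this application):

```python
import random

def progressive_sweep(v_min: float, v_max: float, steps: int) -> list[float]:
    """Voltages from small to large: the virtual image moves progressively."""
    return [v_min + i * (v_max - v_min) / (steps - 1) for i in range(steps)]

def random_sequence(v_min: float, v_max: float, steps: int) -> list[float]:
    """Random voltages within the range: the virtual image jumps between positions."""
    return [random.uniform(v_min, v_max) for _ in range(steps)]

# Generate a drive plan; on real hardware each value would be applied to the
# solid-state zoom assembly in turn.
plan = progressive_sweep(1.0, 5.0, 9) + random_sequence(1.0, 5.0, 4)
print([round(v, 2) for v in plan])
```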
If the solid-state zoom component is the deformable mirror shown in fig. 5, the range of the electrostatic force applied to the deformable mirror may be determined according to the first adjustment range, and the deformable mirror driven to deform or displace within that range of electrostatic force to move the virtual image. Alternatively, a range of electromagnetic force applied to the deformable mirror may be determined according to the first adjustment range, and the deformable mirror driven to deform or displace within that range to move the virtual image. It should be understood that the virtual image may be moved in a progressive manner or randomly, which is not limited in this application.
Further, optionally, the positions the virtual image moves through may be recorded to obtain the first motion trajectory of the virtual image. It can also be understood that, by adjusting the focal length of the solid-state zoom assembly, movement of the virtual image within the first adjustment range can be achieved. Please refer to fig. 9a, a schematic diagram of a moving track of a virtual image according to the present application. Fig. 9a takes progressive movement as an example: the focal length of the solid-state zoom assembly is adjusted progressively according to the first adjustment range, so that the virtual image moves progressively within that range, here from the position nearest the two eyes to the farthest position and then back again. The progressive pattern shown in fig. 9a is merely an example; the virtual image may instead move from the farthest position to the nearest and then back to the farthest.
Fig. 9b is a schematic diagram of a moving track of another virtual image provided in the present application. Fig. 9b is a diagram in which the virtual images move in a random manner. It is also understood that the virtual image moves from the current position to a non-adjacent position, i.e. may be a jumping movement, within the first adjustment range.
In order to reduce the vergence-accommodation conflict (VAC) as much as possible, it is necessary not only to move the virtual image but also to change the position of the parallax depth (also called vergence depth) of the image of the target. Referring to fig. 10, the vergence-accommodation conflict arises because, when the human eyes view three-dimensional (3D) content, the binocular focusing depth is always fixed on the screen, while the eyes converge at the target distance defined by the parallax, which may lie in front of or behind the screen; the mismatch between focusing depth and vergence depth causes the conflict.
In one possible implementation, the position of the parallax depth of the image of the target may be changed by changing the relative position of the target on the screen. For example, if the virtual image is at position A1, the relative position of the target on the screen may be changed so that the parallax depth of the image is also at position A1.
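As an illustrative sketch of this matching, using simple on-axis stereo geometry (the interpupillary distance value is an assumption, not a parameter from this application):

```python
# Reduce the vergence-accommodation conflict: choose the separation of the
# left/right target images on the virtual image plane so that the binocular
# vergence depth lands at the same position as the virtual image.

IPD_M = 0.063  # typical interpupillary distance in metres (assumed)

def screen_separation(image_plane_m: float, vergence_depth_m: float) -> float:
    """Horizontal separation of the two targets on the virtual image plane.

    Positive: uncrossed disparity (target converges behind the plane);
    negative: crossed disparity (target converges in front of the plane);
    zero when the vergence depth equals the image-plane depth, i.e. no conflict.
    """
    return IPD_M * (1.0 - image_plane_m / vergence_depth_m)

# To avoid the conflict, set the parallax depth equal to the virtual image
# position A1 by rendering the target with zero separation on that plane:
print(screen_separation(0.5, 0.5))  # 0.0 -> focusing depth == vergence depth
```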
Step 704, a first motion trajectory of a binocular viewpoint of a user is obtained.
Here, the horizontal axis of the first motion trajectory of the binocular viewpoint may be time, and the vertical axis may be a binocular depth of focus.
In one possible implementation, the user's two eyes are required to track the movement of the virtual image in real time as it moves. Illustratively, the first motion trajectory of the binocular viewpoint may be acquired by an eyeball tracking apparatus.
In one possible implementation, the display apparatus may further include an eyeball tracking device for determining a convergence depth of the binocular fixation image (i.e., binocular viewpoints), and the eyeball tracking device may determine a first movement trajectory of the binocular viewpoints during movement of the binocular viewpoints along with the virtual image. The convergence depth of the binocular fixation image can be referred to the above description of fig. 10, and will not be described herein.
In another possible implementation, the display device and the eye tracking apparatus are two separate devices, and the display device can communicate with the eye tracking apparatus. The eyeball tracking device is used for determining the convergence depth of the binocular fixation image, and the eyeball tracking device can determine a first motion track of the binocular viewpoint in the moving process of the binocular fixation image along with the virtual image. Further, optionally, the eyeball tracking device may send the acquired first motion trajectory of the binocular viewpoint to the display device.
Step 705, training and evaluating the eyesight of the user according to the first motion trail of the binocular viewpoint and the first motion trail of the virtual image.
In a possible implementation manner, the number of points at which the first motion trajectory of the binocular viewpoint coincides with the first motion trajectory of the virtual image may be counted, and the ratio of the number of coinciding points to the total number of points taken as the coincidence degree of the two trajectories. Alternatively, the coincidence degree of the first motion trajectory of the binocular viewpoint with the first motion trajectory of the virtual image may be determined by a fitting algorithm (for example, polynomial fitting or least-squares fitting).
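A minimal sketch of the point-counting measure described above; the sampling tolerance is an assumed parameter, since this application does not fix one.

```python
# Coincidence degree: fraction of trajectory samples where the binocular
# viewpoint depth matches the virtual image depth at the same time step.

def coincidence_degree(gaze_m: list[float], image_m: list[float],
                       tol_m: float = 0.01) -> float:
    assert len(gaze_m) == len(image_m), "trajectories sampled on the same clock"
    hits = sum(1 for g, v in zip(gaze_m, image_m) if abs(g - v) <= tol_m)
    return hits / len(image_m)

image = [0.083, 0.15, 0.25, 0.35, 0.45, 0.5]   # first motion trail of the virtual image
gaze  = [0.085, 0.15, 0.30, 0.35, 0.45, 0.5]   # first motion trail of the binocular viewpoint
print(coincidence_degree(gaze, image))  # 5/6 ~ 0.83 > example first threshold 0.7
```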
If the coincidence degree of the first motion trail of the binocular viewpoint and the first motion trail of the virtual image is greater than the first threshold, it indicates that, during movement of the virtual image, the two eyes can follow the movement of the virtual image in most cases, so that training of the ciliary muscles of both eyes is realized, the display device can have the function of preventing and/or correcting pseudomyopia, and it can be determined that the binocular vision training meets the standard. The first threshold is, for example, 70%.
If the coincidence degree of the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image is not greater than the first threshold, it indicates that both eyes cannot track the movement of the virtual image in time in most cases; therefore, the first adjustment range of the virtual image may be reduced so that both eyes can track the movement of the virtual image as well as possible.
In one possible implementation, the minimum value in the first adjustment range may be multiplied by a factor greater than 1, and/or the maximum value in the first adjustment range may be multiplied by a factor less than 1. Alternatively, a value greater than 0 is added to the minimum value in the first adjustment range, and/or a value greater than 0 is subtracted from the maximum value in the first adjustment range. It should be understood that the value added to the minimum value may be the same as or different from the value subtracted from the maximum value, which is not limited in this application.
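Both range-reduction options can be written down directly; in this hedged sketch the factors and offsets are illustrative placeholder values, not values prescribed by this application.

```python
def shrink_multiplicative(lo: float, hi: float,
                          lo_factor: float = 1.1,
                          hi_factor: float = 0.9) -> tuple[float, float]:
    """Multiply the minimum by a factor > 1 and the maximum by a factor < 1."""
    return lo * lo_factor, hi * hi_factor

def shrink_additive(lo: float, hi: float,
                    lo_offset: float = 0.05,
                    hi_offset: float = 0.03) -> tuple[float, float]:
    """Add a value > 0 to the minimum and subtract a (possibly different)
    value > 0 from the maximum."""
    return lo + lo_offset, hi - hi_offset
```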
Further, optionally, the display device may display, through the display assembly, the coincidence degree of the first motion trajectory of the binocular viewpoint with the first motion trajectory of the virtual image, so that the user can learn the result of the vision training in time. It should be understood that the greater the coincidence degree, the better the vision training effect.
As can be seen from steps 701 to 705 above, the virtual image is moved by adjusting the focal length of the solid state zoom assembly; the noise generated when the focal length of the solid state zoom assembly is adjusted is small, and the virtual image can be moved quickly and randomly. While the virtual image moves, both eyes of the user can track its movement, so that the ciliary muscles of both eyes are trained; the display device can therefore provide the function of preventing and/or correcting pseudomyopia.
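As a rough first-order illustration of why changing the focal length moves the virtual image, the sketch below applies the thin-lens equation to a display placed inside the focal length of a single zoom element; the actual optical imaging assembly may be more complex, so this is a hedged model under stated assumptions, not the application's design.

```python
def virtual_image_distance(focal_length_m: float,
                           object_distance_m: float) -> float:
    """Thin-lens equation 1/f = 1/d_o + 1/d_i. For a display at d_o < f,
    d_i is negative, i.e. a virtual image forms at |d_i| in front of the
    lens; shrinking f toward d_o pushes the virtual image farther away."""
    inv_di = 1.0 / focal_length_m - 1.0 / object_distance_m
    if inv_di >= 0:
        raise ValueError("object outside the focal length: no virtual image")
    return -1.0 / inv_di

# Example: display 3 cm behind the lens; sweeping f from 4.0 cm to 3.3 cm
# moves the virtual image from about 0.12 m out to about 0.33 m.
print(virtual_image_distance(0.040, 0.030))   # ~0.12
print(virtual_image_distance(0.033, 0.030))   # ~0.33
```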
In step 702 described above, the first adjustment range of the virtual image may alternatively be determined by the user manually inputting it directly. For example, the user may manually rotate a knob to change the position of the virtual image; when a clear image is observed, the user reads a first position of the virtual image on the knob and inputs the first position into the display device; the user then continues to rotate the knob, and when the image is about to become blurred, reads a second position of the virtual image on the knob and inputs the second position into the display device. The range between the first position and the second position is the first adjustment range of the virtual image.
It should be noted that the first adjustment range shown in fig. 7 is a range in which the virtual image is moved in the front-rear direction of the display assembly, thereby training the user's eyes in the front-rear direction.
To further improve the effect of preventing and/or correcting pseudomyopia for the user's eyes, training may also be performed on the user's eyes in the up-down direction and the left-right direction.
In one possible implementation, the adjustment range of the virtual image further includes a second adjustment range along the upper and lower boundaries of the display screen. It may also be understood that the second adjustment range is the distance between the upper and lower boundaries of the display screen, as described above with reference to fig. 8a.
Further, optionally, the virtual image may be moved in the direction of the upper and lower boundaries of the display screen according to the second adjustment range, and a second motion trajectory of the virtual image may be obtained. For example, the virtual image may be moved in a random manner between the upper and lower boundaries of the display screen, or may be moved in a progressive manner between the upper and lower boundaries.
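The two movement patterns mentioned above (equally applicable to the left-right training described later) can be sketched as follows; the sample count and the use of uniformly random positions are illustrative assumptions.

```python
import random

def progressive_trajectory(lo: float, hi: float, steps: int = 50):
    """Sweep steadily from one boundary to the other: (step, position) pairs."""
    return [(i, lo + (hi - lo) * i / (steps - 1)) for i in range(steps)]

def random_trajectory(lo: float, hi: float, steps: int = 50):
    """Jump to a uniformly random position inside the boundaries each step."""
    return [(i, random.uniform(lo, hi)) for i in range(steps)]
```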
While the virtual image moves, the eyes track its movement, so that the eyeball tracking device can acquire the second motion trajectory of the binocular viewpoint; if the coincidence degree of the second motion trajectory of the binocular viewpoint and the second motion trajectory of the virtual image is greater than a second threshold, it is determined that the vision training reaches the standard. The second threshold may be, for example, 70%.
In another possible implementation, the adjustment range of the virtual image further includes a third adjustment range along the left and right boundaries of the display screen. It may also be understood that the third adjustment range is the distance between the left and right boundaries of the display screen, as can be seen in fig. 8a above.
Further, the virtual image may be moved in the direction of the left and right boundaries of the display screen according to a third adjustment range, and a third motion trajectory of the virtual image is obtained. Illustratively, the virtual image may be moved in a random manner between the left and right boundaries of the display screen, or in a progressive manner between the left and right boundaries.
While the virtual image moves, the eyes can track its movement, so that the eyeball tracking device obtains the third motion trajectory of the binocular viewpoint; if the coincidence degree of the third motion trajectory of the binocular viewpoint and the third motion trajectory of the virtual image is greater than a third threshold, it is determined that the vision training reaches the standard. The third threshold may be, for example, 70%.
The first threshold, the second threshold, and the third threshold may be the same or different. The above description takes the case where the first threshold, the second threshold, and the third threshold are all 70% as an example.
In one possible implementation manner, vision training may be performed in three stages. The first stage may be a binocular front-rear training stage, in which the virtual image may be moved within the first adjustment range; the second stage may be a binocular up-down training stage, in which the virtual image may be moved within the second adjustment range; the third stage may be a binocular left-right training stage, in which the virtual image may be moved within the third adjustment range. It should be noted that the order of the first stage, the second stage, and the third stage may be interchanged, which is not limited in this application.
In a possible implementation manner, after the first-stage training is completed, if it is determined that the coincidence degree of the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image is greater than the first threshold, the second-stage training may be performed; if the coincidence degree is not greater than the first threshold, the first adjustment range may be reduced and the first-stage training continued until the coincidence degree is greater than the first threshold, after which the second-stage training is performed. Similarly, after the second-stage training is completed, if the coincidence degree of the second motion trajectory of the binocular viewpoint and the second motion trajectory of the virtual image is greater than the second threshold, the third-stage training is performed; if it is not greater than the second threshold, the second adjustment range may be reduced and the second-stage training continued until the coincidence degree is greater than the second threshold, after which the third-stage training is performed. If the coincidence degree of the third motion trajectory of the binocular viewpoint and the third motion trajectory of the virtual image is not greater than the third threshold, the third adjustment range may be reduced and the third-stage training continued until the coincidence degree is greater than the third threshold, at which point the third-stage training is completed.
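This stage-gated flow can be sketched as a simple loop; run_stage is a hypothetical callback that performs one round of training within the given range and returns the measured coincidence degree, and the shrink step and round limit are illustrative assumptions.

```python
def train_until_standard(run_stage, lo: float, hi: float,
                         threshold: float = 0.70, max_rounds: int = 10):
    """Repeat one training stage, reducing its adjustment range after each
    failed round, until the coincidence degree exceeds the threshold."""
    degree = 0.0
    for _ in range(max_rounds):
        degree = run_stage(lo, hi)        # one training round in this range
        if degree > threshold:
            break                          # stage reaches the standard
        margin = 0.05 * (hi - lo)          # shrink both ends of the range
        lo, hi = lo + margin, hi - margin
    return (lo, hi), degree

# Three stages run in sequence (order interchangeable); stage_fns is a
# hypothetical list of per-stage callbacks:
# for run_stage, (lo, hi) in zip(stage_fns, stage_ranges):
#     train_until_standard(run_stage, lo, hi)
```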
In another possible implementation manner, after the first-stage training is completed, the coincidence degree of the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image may be determined; then the second-stage training is performed, and after it is completed, the coincidence degree of the second motion trajectory of the binocular viewpoint and the second motion trajectory of the virtual image is determined; then the third-stage training is performed, and after it is completed, the coincidence degree of the third motion trajectory of the binocular viewpoint and the third motion trajectory of the virtual image is determined. Further, a total coincidence degree is determined from the three coincidence degrees obtained in the three stages, and whether the binocular vision training for the user reaches the standard is determined based on the total coincidence degree. Specifically, if the total coincidence degree is greater than a fourth threshold, it is determined that the binocular vision training for the user reaches the standard, where the fourth threshold may be 70%. The total coincidence degree may, for example, be equal to a weighted average of the three coincidence degrees obtained in the three stages; as another example, it may be the maximum of the three coincidence degrees; as yet another example, it may be the minimum of the three coincidence degrees.
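The three combination rules can be sketched as one small helper; the equal default weights are an illustrative assumption.

```python
def total_coincidence(degrees, weights=(1.0, 1.0, 1.0), mode: str = "weighted"):
    """Combine per-stage coincidence degrees into a total coincidence degree."""
    if mode == "weighted":
        return sum(d * w for d, w in zip(degrees, weights)) / sum(weights)
    if mode == "max":
        return max(degrees)
    if mode == "min":
        return min(degrees)
    raise ValueError(f"unknown mode: {mode}")

# Training reaches the standard when the total exceeds the fourth threshold:
# total_coincidence([0.80, 0.72, 0.75]) > 0.70  ->  True
```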
The fourth threshold may be the same as any one of the first threshold, the second threshold, and the third threshold, or may be a weighted average of the first threshold, the second threshold, and the third threshold, or may be a value different from any one of the first threshold, the second threshold, and the third threshold, which is not limited in this application.
It is understood that, in order to implement the functions of the above embodiments, the vision training apparatus includes hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software driven hardware depends on the particular application scenario and design constraints imposed on the solution.
Based on the foregoing and the same concept, fig. 11 is a schematic structural diagram of a possible vision training apparatus provided by this application. The vision training apparatus can be used to implement the functions of the foregoing method embodiments, and therefore can also achieve the beneficial effects of the method embodiments. In this application, the vision training apparatus may be the display device shown in fig. 1a, or may be a module (e.g., a chip) in the display device shown in fig. 1a.
As shown in fig. 11, the vision training apparatus 1100 includes a processing module 1101 and an obtaining module 1102. The vision training apparatus 1100 is used to implement the functions of the method embodiment shown in fig. 7.
When the vision training apparatus 1100 is used to implement the functionality of the method embodiment shown in fig. 7: the obtaining module 1102 is configured to obtain a vision parameter of a user; the processing module 1101 is configured to determine a first adjustment range of a virtual image according to the vision parameter and a corresponding relationship between the vision parameter and a position of the virtual image; the obtaining module 1102 is further configured to obtain a first motion trajectory of the binocular viewpoint of the user; the processing module 1101 is further configured to train and evaluate the vision of the user according to the first motion trajectory of the binocular viewpoint and the first motion trajectory of the virtual image.
More detailed descriptions about the processing module 1101 and the obtaining module 1102 can be directly obtained by referring to the related descriptions in the embodiment of the method shown in fig. 7, and are not repeated here.
It should be understood that the processing module 1101 in the embodiments of the present application may be implemented by a processor or a processor-related circuit component, and the obtaining module 1102 may be implemented by a transceiver or a transceiver-related circuit component.
It is understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or may be implemented by software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in Random Access Memory (RAM), flash memory, read-only memory (ROM), programmable ROM, erasable PROM (EPROM), electrically EPROM (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a vision training apparatus. Of course, the processor and the storage medium may also reside as discrete components in the vision training apparatus.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the procedures or functions of the embodiments of the application are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a vision training apparatus, a user device, or another programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. A computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media, such as floppy disks, hard disks, or magnetic tape; optical media, such as Digital Video Disks (DVDs); or semiconductor media, such as Solid State Drives (SSDs).
In various embodiments of the present application, unless otherwise specified or conflicting, terms and/or descriptions between different embodiments have consistency and may be mutually referenced, and technical features in different embodiments may be combined to form a new embodiment according to their inherent logical relationships.
In the present application, "and/or" describes an association relationship of associated objects, which means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In the description of the text of this application, the character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In the formula of the present application, the character "/" indicates that the preceding and following related objects are in a relationship of "division". Also, in the present application, the word "exemplary" is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Or it may be appreciated that the use of the word exemplary is intended to present concepts in a concrete fashion, and is not intended to limit the scope of the present application.
It is to be understood that the various numerical designations referred to in this application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application. The sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic. The terms "first", "second", and the like, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequence or chronological order. Furthermore, the terms "comprises" and "comprising", as well as any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely illustrative of the concepts defined by the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.

Claims (20)

1. A vision training method, applied to a display device, wherein the display device comprises a display component and an optical imaging component, the display component is used for displaying an image of a target, the optical imaging component is used for forming the image into a virtual image, and the optical imaging component comprises a solid state zoom component;
the method comprises the following steps:
acquiring vision parameters of a user;
determining a first adjustment range of the virtual image according to the vision parameters and the corresponding relation between the vision parameters and the position of the virtual image;
adjusting the focal length of the solid state zoom component according to the first adjustment range, moving the virtual image, and determining a first motion trail of the virtual image;
acquiring a first motion trail of the binocular viewpoint of the user;
and training and evaluating the vision of the user according to the first motion trail of the binocular viewpoint and the first motion trail of the virtual image.
2. The method of claim 1, wherein the solid state zoom component comprises a zoom lens;
the adjusting the focal length of the solid state zoom component according to the first adjustment range to move the virtual image comprises:
determining a range of a first electrical signal applied to the zoom lens according to the first adjustment range;
and within the range of the first electrical signal, changing the first electrical signal applied to the zoom lens, adjusting the focal length of the zoom lens, and moving the virtual image.
3. The method of claim 1, wherein the solid state zoom component comprises a deformable mirror;
the adjusting the focal length of the solid state zoom component according to the first adjustment range to move the virtual image comprises:
determining a range of electrostatic force or a range of electromagnetic force applied to the deformable mirror according to the first adjustment range;
and driving the deformable reflector to deform or displace within the range of the electrostatic force or the range of the electromagnetic force, and moving the virtual image.
4. The method of claim 1, wherein the correspondence between the vision parameter and the position of the virtual image satisfies: the difference between the reciprocal of the photopic vision range of a person with normal vision and the reciprocal of the photopic vision range of the user is equal to the focal length of the correction lens;
wherein the focal length of the corrective lens is related to a vision parameter of the user.
5. The method of any one of claims 1 to 4, wherein the training and evaluating the vision of the user according to the first motion trail of the binocular viewpoint and the first motion trail of the virtual image comprises:
and if the coincidence degree of the first motion trail of the binocular viewpoint and the first motion trail of the virtual image is greater than a first threshold value, determining that the vision training reaches the standard.
6. The method of claim 5, wherein the display component comprises a display screen;
the method further comprises the following steps:
moving the virtual image within a second adjustment range along the upper and lower boundaries of the display screen, and determining a second motion trail of the virtual image;
acquiring a second motion trail of the binocular viewpoint;
and if the coincidence degree of the second motion trail of the binocular viewpoint and the second motion trail of the virtual image is greater than a second threshold value, determining that the vision training reaches the standard.
7. The method of claim 5 or 6, wherein the display component comprises a display screen;
the method further comprises the following steps:
moving the virtual image within a third adjustment range along the left and right boundaries of the display screen, and determining a third motion trail of the virtual image;
acquiring a third motion trail of the binocular viewpoint;
and if the coincidence degree of the third motion trail of the binocular viewpoint and the third motion trail of the virtual image is greater than a third threshold value, determining that the vision training reaches the standard.
8. The method of any of claims 1 to 7, further comprising:
and if the coincidence degree of the first motion trail of the binocular viewpoint and the first motion trail of the virtual image is not more than a first threshold value, reducing the first adjusting range of the virtual image.
9. The method of any of claims 1 to 8, wherein moving the virtual image comprises:
moving the virtual image in a progressive manner; or,
moving the virtual image in a random manner.
10. A vision training apparatus, applied to a display device, wherein the display device comprises a display component and an optical imaging component, the display component is used for displaying an image of a target, the optical imaging component is used for forming the image into a virtual image, and the optical imaging component comprises a solid state zoom component;
the vision training device includes:
the acquisition module is used for acquiring the vision parameters of the user;
the processing module is used for determining a first adjustment range of the virtual image according to the vision parameters and the corresponding relation between the vision parameters and the position of the virtual image;
adjusting the focal length of the solid state zoom component according to the first adjustment range, moving the virtual image, and determining a first motion trail of the virtual image;
the acquisition module is further used for acquiring a first motion trail of the binocular viewpoint of the user;
the processing module is further configured to train and evaluate the vision of the user according to the first motion trail of the binocular viewpoint and the first motion trail of the virtual image.
11. The apparatus of claim 10, wherein the solid state zoom component comprises a zoom lens;
the processing module is specifically configured to:
determining a range of a first electrical signal applied to the zoom lens according to the first adjustment range;
and within the range of the first electrical signal, changing the first electrical signal applied to the zoom lens, adjusting the focal length of the zoom lens, and moving the virtual image.
12. The apparatus of claim 10, wherein the solid state zoom component comprises a deformable mirror;
the processing module is specifically configured to:
determining a range of electrostatic force or a range of electromagnetic force applied to the deformable mirror according to the first adjustment range;
and driving the deformable reflector to deform or displace within the range of the electrostatic force or the range of the electromagnetic force, and moving the virtual image.
13. The apparatus of claim 10, wherein the correspondence between the vision parameter and the position of the virtual image satisfies: the difference between the reciprocal of the photopic vision range of a person with normal vision and the reciprocal of the photopic vision range of the user is equal to the focal length of the correction lens;
wherein the focal length of the corrective lens is related to the vision parameters of the user.
14. The apparatus according to any one of claims 10 to 13, wherein the processing module is specifically configured to:
and if the coincidence degree of the first motion trail of the binocular viewpoint and the first motion trail of the virtual image is greater than a first threshold value, determining that the vision training reaches the standard.
15. The apparatus of claim 14, wherein the display component comprises a display screen;
the processing module is further configured to:
moving the virtual image within a second adjustment range along the upper and lower boundaries of the display screen, and determining a second motion trail of the virtual image;
the obtaining module is further configured to:
acquiring a second motion trail of the binocular viewpoint;
the processing module is further configured to:
and if the coincidence degree of the second motion trail of the binocular viewpoint and the second motion trail of the virtual image is greater than a second threshold value, determining that the vision training reaches the standard.
16. The apparatus of claim 14 or 15, wherein the display component comprises a display screen;
the processing module is further configured to:
moving the virtual image within a third adjustment range along the left and right boundaries of the display screen, and determining a third motion trail of the virtual image;
the obtaining module is further configured to:
acquiring a third motion trail of the binocular viewpoint;
the processing module is further configured to:
and if the coincidence degree of the third motion trail of the binocular viewpoint and the third motion trail of the virtual image is greater than a third threshold value, determining that the vision training reaches the standard.
17. The apparatus of any of claims 10 to 16, wherein the processing module is further configured to:
and if the coincidence degree of the first motion trail of the binocular viewpoint and the first motion trail of the virtual image is not more than a first threshold value, reducing the first adjusting range of the virtual image.
18. The apparatus according to any one of claims 10 to 17, wherein the processing module is specifically configured to:
moving the virtual image in a progressive manner; or,
moving the virtual image in a random manner.
19. A computer-readable storage medium, having stored thereon a computer program or instructions which, when executed by a vision training apparatus, cause the vision training apparatus to carry out the method of any one of claims 1 to 9.
20. A computer program product, characterized in that the computer program product comprises a computer program or instructions which, when executed by a vision training apparatus, causes the vision training apparatus to carry out the method of any one of claims 1 to 9.
CN202110771755.1A 2021-07-08 2021-07-08 Vision training method and device Pending CN115590733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110771755.1A CN115590733A (en) 2021-07-08 2021-07-08 Vision training method and device

Publications (1)

Publication Number Publication Date
CN115590733A (en) 2023-01-13

Family

ID=84840264

Country Status (1)

Country Link
CN (1) CN115590733A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination