WO2022135284A1 - Display module, and method and apparatus for adjusting the position of a virtual image - Google Patents
Display module, and method and apparatus for adjusting the position of a virtual image
- Publication number
- WO2022135284A1 (PCT application PCT/CN2021/139033)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual image
- scene type
- target position
- image
- optical imaging
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/001—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
- G09G3/003—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0132—Head-up displays characterised by optical features comprising binocular systems
- G02B2027/0134—Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0185—Displaying image at variable distance
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/0613—The adjustment depending on the type of the information to be displayed
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0464—Positioning
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Definitions
- the present application relates to the field of display technology, and in particular, to a display module, a method and device for adjusting the position of a virtual image.
- VR (virtual reality) technology combines the virtual and the real: display optics generate a three-dimensional virtual world and simulate vision and other senses, so that users feel as if they are in the real world and can observe objects in the three-dimensional space in real time and without restriction.
- VAC: vergence-accommodation conflict.
- The vergence-accommodation conflict arises because, when the human eye views 3D content, the eyes' lens accommodation distance remains fixed on the screen, while their vergence converges on the target distance defined by the parallax, which may be in front of or behind the screen; this mismatch between accommodation distance and vergence distance causes the vergence-accommodation conflict.
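To make the dioptric mismatch concrete, here is a minimal illustrative sketch (not part of the patent; the function name and example distances are hypothetical) that expresses the vergence-accommodation conflict as a difference in diopters:

```python
# Illustrative sketch: quantify the vergence-accommodation mismatch in diopters
# for a stereoscopic display. Distances below are hypothetical example values.

def vac_mismatch_diopters(screen_distance_m: float, vergence_distance_m: float) -> float:
    """Accommodation stays at the screen; vergence follows the parallax-defined
    target. The conflict is commonly expressed as the dioptric difference."""
    accommodation_d = 1.0 / screen_distance_m   # eyes focus on the screen
    vergence_d = 1.0 / vergence_distance_m      # eyes converge on the 3D target
    return abs(accommodation_d - vergence_d)

# Screen (virtual image) at 2 m, 3D content rendered at 0.5 m from the viewer:
mismatch = vac_mismatch_diopters(2.0, 0.5)
print(f"{mismatch:.2f} D")  # 1.50 D
```

When the virtual image is moved so that its dioptric distance matches the vergence target, this difference shrinks toward zero, which is the adjustment the module above performs.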
- VAC occurs when viewing most of today's 3D content, whether on a near-eye display device or through 3D glasses.
- The present application provides a display module and a method and device for adjusting the position of a virtual image, which can automatically adjust the position of the virtual image based on different preset scene types, helping to reduce the vergence-accommodation conflict.
- the present application provides a display module, which may include a display component, an optical imaging component, and a virtual image position adjustment component.
- the display component is used to display images.
- The optical imaging component is used to form the image into a virtual image.
- the virtual image position adjustment component is used to adjust the optical imaging component and/or the display component, so that the virtual image can be adjusted to a target position, and the target position of the virtual image is related to the preset scene type to which the image belongs.
- the optical imaging assembly can change the propagation path of the light carrying the image to form a virtual image of the image at the target location.
- The optical imaging component and/or the display component can be adjusted by the virtual image position adjustment component, so that virtual images under different preset scene types are accurately adjusted to different positions and the user can clearly see the image displayed by the display module. Automatically adjusting the position of the virtual image based on different preset scene types helps to reduce the vergence-accommodation conflict.
- the preset scene type to which the image belongs may be the preset scene type to which the content of the image belongs; or, the preset scene type to which the object corresponding to the image belongs.
- The display module may further include a control component, which may be used to obtain the target position of the virtual image and to control the virtual image position adjustment component to adjust the optical imaging component and/or the display component, so as to adjust the virtual image to the target position.
- the virtual image position adjustment assembly is controlled by the control assembly to adjust the optical imaging assembly and/or the display assembly, so that the virtual image can be adjusted to the target position.
- The control component can be used to obtain the first preset scene type to which the image displayed by the display component belongs and the correspondence between preset scene types and virtual image positions, and to determine, according to that correspondence, the target position corresponding to the first preset scene type.
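The correspondence between preset scene types and virtual image positions can be pictured as a simple lookup table. The sketch below is illustrative only; the scene names and diopter values are invented, not taken from the patent:

```python
# Illustrative sketch of the scene-type -> virtual-image-position lookup the
# control component performs. All names and dioptre values are hypothetical.

SCENE_TO_VIRTUAL_IMAGE_D = {
    "office":  2.0,   # virtual image at 2.0 D (0.5 m)
    "reading": 3.0,   # 3.0 D (~0.33 m)
    "meeting": 1.0,   # 1.0 D (1 m)
    "game":    4.0,   # 4.0 D (0.25 m)
    "video":   0.5,   # 0.5 D (2 m)
}

def target_position_d(scene_type: str) -> float:
    """Return the target virtual-image position (in diopters) for a scene type."""
    return SCENE_TO_VIRTUAL_IMAGE_D[scene_type]

print(target_position_d("reading"))  # 3.0
```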
- The control component is used to obtain a vision parameter, the first preset scene type to which the image displayed by the display component belongs, and the correspondence between preset scene types and virtual image positions; and to determine the target position corresponding to the first preset scene type according to the vision parameter and that correspondence.
- When images belong to different preset scene types, the target positions at which the display module presents the virtual image are different. In this way, images belonging to different preset scene types can be formed into virtual images at different target positions, which helps to reduce the vergence-accommodation conflict.
- the preset scene type may be an office scene type, a reading scene type, a meeting scene type, an interactive game scene type or a video scene type.
- When the preset scene type to which the image belongs is the office scene type, the distance between the target position where the head-mounted display device presents the virtual image and the optical imaging component is in the range [0.1, 10] diopters (D); for the reading scene type, the range is [0.5, 10] D; for the conference scene type, [0.1, 7.1] D; for the interactive game scene type, [0.5, 7.5] D; and for the video scene type, [0.1, 7] D.
- In another example, the distance between the target position where the head-mounted display device presents the virtual image and the optical imaging component is in the range [0.1, 3.0] D; or, when the preset scene type to which the image belongs is the interactive game scene type, the range is [3.0, 5.0] D; or, when it is the video scene type, the range is (5.0, 7] D.
- The distance between the target position of the virtual image corresponding to the video scene type and the optical imaging component is greater than the distance between the target position of the virtual image corresponding to the conference scene type and the optical imaging component; or, the distance between the target position of the virtual image corresponding to the conference scene type and the optical imaging component is greater than the distance between the target position of the virtual image corresponding to the reading scene type and the optical imaging component.
- The distance between the target position of the virtual image corresponding to the video scene type and the optical imaging component is greater than that corresponding to the conference scene type; or, the distance corresponding to the conference scene type is greater than that corresponding to the office scene type; or, the distance corresponding to the office scene type is greater than that corresponding to the reading scene type.
- the virtual image position adjustment component includes a driving component; the driving component is configured to drive the optical imaging component and/or the display component to move, so as to adjust the virtual image to the target position .
- The virtual image position adjustment component includes a driving component and a position sensing component. The position sensing component can be used to determine the position of the optical imaging component and/or the display component; that position is used to determine the first distance between the display component and the optical imaging component, and the first distance is used to determine the to-be-moved distance of the optical imaging component and/or the display component. Alternatively, the position sensing component can be used to determine the first distance between the optical imaging component and the display component directly.
- the driving component can be used to drive the optical imaging component and/or the display component to move according to the distance to be moved, so as to adjust the virtual image to the target position.
- the adjustment accuracy of the virtual image position adjustment component is determined according to the driving error of the driving component and the position measurement error of the position sensing component.
- the adjustment precision of the virtual image position adjustment assembly is not greater than 0.2 diopters D.
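A hedged sketch of the accuracy budget just described, assuming a worst-case straight sum of the driving error and the position-measurement error (the combination rule and the numbers are assumptions, not stated in the patent):

```python
# Sketch of the adjustment-accuracy budget: the virtual-image position accuracy
# is bounded by the drive error plus the position-measurement error. The
# straight-sum combination rule and the example numbers are assumptions.

def adjustment_accuracy_d(drive_error_d: float, measurement_error_d: float) -> float:
    """Worst-case adjustment accuracy in diopters."""
    return drive_error_d + measurement_error_d

accuracy = adjustment_accuracy_d(drive_error_d=0.08, measurement_error_d=0.05)
print(f"{accuracy:.2f} D, within the 0.2 D bound: {accuracy <= 0.2}")  # 0.13 D, ... True
```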
- The optical imaging component includes a half mirror, where r1 is the best-fit spherical radius of the transmissive surface of the half mirror, r2 is the best-fit spherical radius of the semi-transmissive surface of the half mirror, and n is the refractive index of the material of the half mirror.
- the adjustment range of the virtual image position adjustment component is determined according to the driving range of the driving component and the measurement range of the position sensing component.
- the adjustment range of the virtual image position adjustment assembly is not less than 5 diopters D.
- The optical imaging component includes a half mirror, where r1 is the best-fit spherical radius of the transmissive surface of the half mirror, r2 is the best-fit spherical radius of the semi-transmissive surface of the half mirror, and n is the refractive index of the material of the half mirror.
- The virtual image position adjustment component includes a driving component, and the optical imaging component includes a zoom lens; the driving component is used to change the voltage signal or current signal applied to the zoom lens, so as to change the focal length of the zoom lens and adjust the virtual image to the target position.
- the zoom lens may be a liquid crystal lens, a liquid lens or a geometric phase lens.
- the virtual image position adjustment component includes a driving component and a position sensing component
- the optical imaging component includes a zoom lens.
- the position sensing component can be used to determine the first focal length of the zoom lens, and the first focal length is used to determine the focal length of the zoom lens to be adjusted;
- The driving component can be used to change the voltage signal or current signal applied to the zoom lens according to the focal length to be adjusted, to adjust the virtual image to the target position.
- the virtual image position adjustment component includes a driving component and a position sensing component;
- the optical imaging component includes a first diffractive optical element and a second diffractive optical element;
- The position sensing component is used to determine the relative angle of the first diffractive optical element and the second diffractive optical element; this relative angle is used to determine the to-be-rotated angle of the first diffractive optical element and/or the second diffractive optical element;
- the drive assembly is configured to drive the first diffractive optical element and/or the second diffractive optical element to rotate according to the to-be-rotated angle to adjust the virtual image to the target location.
- the virtual image position adjustment assembly includes a driving assembly and a position sensing assembly;
- the optical imaging assembly includes a first refractive optical element and a second refractive optical element;
- The position sensing component is used to determine, in a direction perpendicular to the principal optical axes of the first refractive optical element and the second refractive optical element, a first distance between the first refractive optical element and the second refractive optical element; the first distance is used to determine the to-be-moved distance of the first refractive optical element and/or the second refractive optical element;
- The driving component drives, according to the to-be-moved distance, the first refractive optical element and/or the second refractive optical element to move in a direction perpendicular to the principal optical axis, to adjust the virtual image to the target position.
- The display module further includes an eye-tracking component; the eye-tracking component is used to determine the convergence depth of binocular gaze at the image; the virtual image position adjustment component is used to drive the optical imaging component and/or the display component to move according to the convergence depth, adjusting the virtual image to the target position.
- In this way, the user can clearly view the image displayed by the display component, and it helps to reduce the vergence-accommodation conflict.
- The absolute value of the difference between the distance from the target position of the virtual image to the human eye and the binocular convergence depth of the human eye is smaller than a threshold. In this way, adjusting the virtual image to the target position helps to reduce the vergence-accommodation conflict.
- the threshold range is [0 diopter D, 1 diopter D].
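The comfort criterion above can be sketched as a simple threshold check on the dioptric difference between the virtual image's target position and the binocular convergence depth (values illustrative; the 1 D default follows the stated threshold range):

```python
# Sketch of the comfort check: the virtual image's target position is acceptable
# when |target position (D) - binocular convergence depth (D)| is below a
# threshold in [0, 1] D. Example values are illustrative.

def within_comfort_threshold(target_d: float, convergence_depth_d: float,
                             threshold_d: float = 1.0) -> bool:
    return abs(target_d - convergence_depth_d) < threshold_d

print(within_comfort_threshold(2.0, 2.4))  # True  (0.4 D mismatch)
print(within_comfort_threshold(2.0, 3.5))  # False (1.5 D mismatch)
```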
- The display module may further include a cylindrical lens and a rotation driving component, where the rotation driving component is used to change the optical axis of the cylindrical lens.
- The cylindrical lens is located between the display component and the optical imaging component, or on the side of the optical imaging component away from the display component.
- the present application provides a method for adjusting the position of a virtual image, which can be applied to a head-mounted display device.
- The method may include acquiring an image displayed by the head-mounted display device and a target position of a virtual image corresponding to the image, and forming the image into a virtual image at the target position, where the target position of the virtual image is related to the preset scene type to which the image belongs.
- the preset scene type to which the image belongs may be the preset scene type to which the content of the image belongs; or, the preset scene type to which the object corresponding to the image belongs.
- When images belong to different preset scene types, the target positions at which the head-mounted display device presents the virtual image are different.
- the preset scene types include office scene types, reading scene types, meeting scene types, interactive game scene types, or video scene types.
- When the preset scene type to which the image belongs is the office scene type, the distance between the target position where the head-mounted display device presents the virtual image and the optical imaging component is in the range [0.1, 10] diopters (D); for the reading scene type, the range is [0.5, 10] D; for the conference scene type, [0.1, 7.1] D; for the interactive game scene type, [0.5, 7.5] D; and for the video scene type, [0.1, 7] D.
- Two example ways of acquiring the target position corresponding to the image can be shown, depending on whether the head-mounted display device includes a control component.
- the head-mounted display device includes a control component.
- The first preset scene type to which the image displayed by the head-mounted display device belongs, and the correspondence between preset scene types and virtual image positions, can be obtained; the target position corresponding to the first preset scene type is then determined according to that correspondence.
- the first preset scene type to which the image sent by the terminal device belongs can be received.
- the first preset scene type to which the image belongs may also be determined.
- the head-mounted display device does not include a control component.
- the target position of the virtual image corresponding to the image sent by the terminal device can be received.
- the head-mounted display device determines the to-be-moved distance of the display component and/or the optical imaging component.
- The head-mounted display device includes a display component and an optical imaging component. Further, the first distance between the display component and the optical imaging component can be obtained; the to-be-moved distance of the display component and/or the optical imaging component is determined according to the first distance and the target position; and the display component and/or the optical imaging component is driven to move according to the to-be-moved distance, to adjust the virtual image to the target position.
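Under a thin-lens model (an assumption for illustration; the patent does not state its formula), the to-be-moved distance follows from the first distance and the target position like this:

```python
# Hedged thin-lens sketch (not the patent's exact formula) of deriving the
# to-be-moved distance from the current display-lens distance and the target
# virtual-image position. Magnitude convention: for a converging lens of focal
# length f, an object at d_o < f forms a virtual image at d_i with
#   1/f = 1/d_o - 1/d_i.

def display_distance_for_virtual_image(f_m: float, target_d: float) -> float:
    """Display-to-lens distance placing the virtual image at `target_d` diopters."""
    return 1.0 / (1.0 / f_m + target_d)   # target_d = 1/d_i

def to_be_moved(f_m: float, current_distance_m: float, target_d: float) -> float:
    """Signed move: negative means the display moves toward the lens."""
    return display_distance_for_virtual_image(f_m, target_d) - current_distance_m

# 50 mm lens, display currently 49 mm from the lens, virtual image wanted at 1 D:
delta = to_be_moved(0.05, 0.049, target_d=1.0)
print(f"{delta * 1000:.2f} mm")  # -1.38 mm (display moves toward the lens)
```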
- the head-mounted display device receives the to-be-moved distance of the display component and/or the optical imaging component sent by the terminal device.
- the head-mounted display device includes a display assembly and an optical imaging assembly. Further, optionally, the distance to be moved of the display component and/or the optical imaging component can be received from the terminal device, and the display component and/or the optical imaging component are driven to move according to the distance to be moved to adjust the virtual image to the target position.
- the head-mounted display device determines the focal length of the zoom lens to be adjusted.
- the head-mounted display device includes a display component and an optical imaging component, and the optical imaging component includes a zoom lens.
- The first focal length of the zoom lens can be determined; the to-be-adjusted focal length of the zoom lens is determined according to the first focal length and the target position; and the voltage signal or current signal applied to the zoom lens is changed according to the to-be-adjusted focal length, to adjust the virtual image to the target position.
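Similarly, under a thin-lens model with a fixed display-to-lens distance (an assumption for illustration; the patent does not give this formula), the to-be-adjusted focal length can be derived from the first focal length and the target position:

```python
# Hedged sketch of computing the zoom lens's to-be-adjusted focal length.
# Thin-lens magnitude convention: 1/f = 1/d_o - 1/d_i, with the virtual image
# at target_d = 1/d_i diopters. All numeric values are illustrative.

def required_focal_length(display_distance_m: float, target_d: float) -> float:
    """Focal length placing the virtual image at `target_d` diopters."""
    return 1.0 / (1.0 / display_distance_m - target_d)

def focal_length_to_adjust(first_focal_m: float, display_distance_m: float,
                           target_d: float) -> float:
    """Signed change from the current (first) focal length."""
    return required_focal_length(display_distance_m, target_d) - first_focal_m

# Display fixed 48 mm from the lens; move the virtual image to 2 D (0.5 m):
delta_f = focal_length_to_adjust(0.050, 0.048, target_d=2.0)
print(f"{delta_f * 1000:+.2f} mm")  # +3.10 mm (focal length must lengthen)
```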
- the head-mounted display device receives the to-be-adjusted focal length of the zoom lens sent by the terminal device.
- the head-mounted display device includes a display component and an optical imaging component, and the optical imaging component includes a zoom lens.
- the focal length of the zoom lens to be adjusted sent by the terminal device can be received; according to the focal length to be adjusted, the voltage signal or current signal applied to the zoom lens is changed to adjust the virtual image to the target position.
- the visual acuity parameter, the first preset scene type to which the image displayed by the display component belongs, and the corresponding relationship between the preset scene type and the position of the virtual image can be obtained; and according to the visual acuity parameter and the corresponding relationship between the preset scene type and the position of the virtual image, the target position corresponding to the first preset scene type is determined.
- the absolute value of the difference between the distance between the target position of the virtual image and the human eye and the binocular convergence depth of the human eye is smaller than a threshold.
- the threshold range is [0 diopter D, 1 diopter D].
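The constraint above can be expressed directly in diopters; a minimal sketch, assuming both the virtual image position and the binocular convergence depth are available as dioptric depths:

```python
def within_vac_threshold(virtual_image_d: float, vergence_depth_d: float,
                         threshold_d: float = 1.0) -> bool:
    """True when the vergence-accommodation mismatch |image - vergence|,
    measured in diopters, falls within the [0 D, 1 D] threshold above."""
    return abs(virtual_image_d - vergence_depth_d) <= threshold_d
```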
- a working mode of the virtual image position adjustment component is determined, and the working mode includes an automatic mode and a manual mode. In the automatic mode, the driving component adjusts the virtual image to the target position according to the to-be-moved distance, or according to the voltage signal or the current signal; in the manual mode, the user adjusts the virtual image to the target position by rotating the cam focusing mechanism.
- the M preset scene types and the positions of the virtual images corresponding to the M preset scene types are acquired, where M is an integer greater than 1; the distribution relationship of the positions of the virtual images respectively corresponding to the M preset scene types is determined; and according to the distribution relationship, the corresponding relationship between the preset scene types and the positions of the virtual images is determined.
- the M preset scene types and the positions of the virtual images respectively corresponding to the M preset scene types, where M is an integer greater than 1, are input into an artificial intelligence algorithm to obtain the corresponding relationship between the preset scene types and the positions of the virtual images.
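One simple way to realize the "distribution relationship" step, sketched under the assumption that each preset scene type is mapped to a representative dioptric position by a per-scene average (an artificial intelligence model could replace this aggregation):

```python
from collections import defaultdict

def build_correspondence(samples):
    """samples: iterable of (scene_type, virtual_image_position_d) pairs for
    the M preset scene types. Returns scene_type -> mean position (diopters),
    a stand-in for the claimed correspondence between scene types and
    virtual image positions."""
    buckets = defaultdict(list)
    for scene, position in samples:
        buckets[scene].append(position)
    return {scene: sum(p) / len(p) for scene, p in buckets.items()}
```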
- the positions of the virtual images corresponding to the M preset scene types, as input by the user, are received; or, the binocular disparities of the images in the M preset scene types are obtained, and the positions of the virtual images corresponding to the M preset scene types are determined respectively according to the binocular disparities of the images in the M preset scene types.
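Determining a virtual image position from binocular disparity can follow the standard stereo relation depth = baseline × focal length / disparity; a hedged sketch (the pixel-based parameters below are illustrative assumptions, not values from this application):

```python
def virtual_image_position_from_disparity(disparity_px: float, baseline_m: float,
                                          focal_px: float) -> float:
    """Returns the virtual image position in diopters (1/m) implied by the
    binocular disparity of the image, via depth = baseline * focal / disparity."""
    depth_m = baseline_m * focal_px / disparity_px
    return 1.0 / depth_m
```

For a 60 mm baseline, a 1000 px focal length, and a 10 px disparity, the implied depth is 6 m, i.e. a virtual image position of about 0.17 D.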
- the present application provides a method for adjusting the position of a virtual image.
- the method can be applied to a terminal device.
- the method can include: determining a first preset scene type to which an image belongs, where the image is used for display on a head-mounted display device; acquiring the corresponding relationship between the preset scene type and the position of the virtual image; determining, according to the corresponding relationship between the preset scene type and the position of the virtual image, the target position at which the head-mounted display device corresponding to the first preset scene type presents the virtual image; and controlling, according to the target position, the head-mounted display device to form a virtual image of the image at the target position, where the target position of the virtual image is related to the preset scene type to which the image belongs.
- Method 1.1: sending a first control instruction to the head-mounted display device.
- the first distance between the display component and the optical imaging component in the head-mounted display device is obtained; the to-be-moved distance of the display component and/or the optical imaging component is determined according to the first distance and the target position; a first control command is generated according to the to-be-moved distance and sent to the head-mounted display device, where the first control command is used to control the movement of the display assembly and/or the optical imaging assembly to adjust the virtual image to the target position.
- the position of the optical imaging component and/or the display component sent by the virtual image position adjustment component in the head-mounted display device may be received; and the first distance may be determined according to the position of the optical imaging component and/or the display component.
- Method 1.2: sending a second control instruction to the head-mounted display device.
- the first focal length of the optical imaging assembly in the head-mounted display device is acquired, and the to-be-adjusted focal length of the optical imaging assembly is determined according to the first focal length and the target position; a second control command is generated according to the to-be-adjusted focal length and sent to the head-mounted display device, where the second control command is used to control the voltage signal or current signal applied to the optical imaging component, adjust the focal length of the optical imaging component, and adjust the virtual image to the target position.
- the present application provides a method for adjusting the position of a virtual image, which can be applied to a head-mounted display device.
- the method may include: displaying a first interface; when a user selects a first object in the first interface, acquiring a target position of a virtual image corresponding to the first object, where the target position of the virtual image is related to a preset scene type to which the first object belongs; and, for the image triggered to be displayed after the first object is selected, forming a virtual image of the image at the target position.
- the object can be an application.
- for different preset scene types, the target positions at which the head-mounted display device presents the virtual image are different.
- the preset scene types include office scene types, reading scene types, meeting scene types, interactive game scene types, or video scene types.
- when the preset scene type to which the image belongs is an office scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging component ranges over [0.1, 10] diopters (D); when the preset scene type to which the image belongs is a reading scene type, the range is [0.5, 10] D; when it is a meeting scene type, the range is [0.1, 7.1] D; when it is an interactive game scene type, the range is [0.5, 7.5] D; and when it is a video scene type, the range is [0.1, 7] D.
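The per-scene ranges listed above can be captured in a small lookup table; an illustrative sketch (the dictionary keys are shorthand, and associating the first [0.1, 10] D range with the office scene type follows the order of the scene-type list above and is an assumption):

```python
# scene type -> (min, max) distance between the target position and the
# optical imaging component, in diopters D, per the ranges listed above
SCENE_RANGES = {
    "office":           (0.1, 10.0),
    "reading":          (0.5, 10.0),
    "meeting":          (0.1, 7.1),
    "interactive_game": (0.5, 7.5),
    "video":            (0.1, 7.0),
}

def target_in_range(scene_type: str, target_d: float) -> bool:
    """Check that a candidate target position lies inside the range
    prescribed for the given preset scene type."""
    low, high = SCENE_RANGES[scene_type]
    return low <= target_d <= high
```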
- the head mounted display device includes a control assembly.
- the second preset scene type to which the first object belongs and the corresponding relationship between the preset scene type and the position of the virtual image are obtained; according to the corresponding relationship between the preset scene type and the position of the virtual image, the target position corresponding to the second preset scene type is determined.
- the head mounted display device does not include the control assembly.
- the target position of the virtual image corresponding to the first object sent by the terminal device is received.
- the head-mounted display device determines the to-be-moved distance of the display component and/or the optical imaging component.
- the head-mounted display device includes a display assembly and an optical imaging assembly. Further, the first distance between the display assembly and the optical imaging assembly can be obtained, the to-be-moved distance of the display assembly and/or the optical imaging assembly can be determined according to the first distance and the target position, and the display assembly and/or the optical imaging assembly can be driven to move according to the to-be-moved distance to adjust the virtual image to the target position.
- the head-mounted display device receives the to-be-moved distance of the display component and/or the optical imaging component sent by the terminal device.
- the head-mounted display device includes a display assembly and an optical imaging assembly. Further, optionally, the distance to be moved of the display component and/or the optical imaging component can be received from the terminal device, and the display component and/or the optical imaging component are driven to move according to the distance to be moved to adjust the virtual image to the target position.
- the head-mounted display device determines the focal length of the zoom lens to be adjusted.
- the head-mounted display device includes a display component and an optical imaging component, and the optical imaging component includes a zoom lens.
- the first focal length of the zoom lens can be determined, the to-be-adjusted focal length of the zoom lens is determined according to the first focal length and the target position, and the voltage signal or current signal applied to the zoom lens is changed according to the to-be-adjusted focal length to adjust the virtual image to the target position.
- the head-mounted display device receives the to-be-adjusted focal length of the zoom lens sent by the terminal device.
- the head-mounted display device includes a display component and an optical imaging component, and the optical imaging component includes a zoom lens.
- the focal length of the zoom lens to be adjusted sent by the terminal device can be received; according to the focal length to be adjusted, the voltage signal or current signal applied to the zoom lens is changed to adjust the virtual image to the target position.
- the visual acuity parameter, the second preset scene type to which the first object belongs, and the corresponding relationship between the preset scene type and the position of the virtual image may be obtained; and according to the visual acuity parameter, and The corresponding relationship between the preset scene type and the position of the virtual image determines the target position corresponding to the second preset scene type.
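How the visual acuity parameter modifies the scene-based target is not detailed here; one plausible rule, sketched purely as an assumption, clamps the target so it stays within a myopic user's far point (a -2 D user cannot focus beyond 0.5 m, i.e. beyond positions below 2 D):

```python
def adjust_for_vision(target_d: float, refractive_error_d: float) -> float:
    """Hypothetical adjustment: for a myopic user (negative refractive error)
    the far point lies at -refractive_error_d diopters, so the scene-based
    target position is clamped to at least that dioptric distance."""
    return max(target_d, -refractive_error_d)
```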
- the absolute value of the difference between the distance between the target position of the virtual image and the human eye and the binocular convergence depth of the human eye is smaller than a threshold.
- the threshold range is [0 diopter D, 1 diopter D].
- a working mode of the virtual image position adjustment component is determined, and the working mode includes an automatic mode and a manual mode. In the automatic mode, the driving component adjusts the virtual image to the target position according to the to-be-moved distance, or according to the voltage signal or the current signal; in the manual mode, the user adjusts the virtual image to the target position by rotating the cam focusing mechanism.
- the M preset scene types and the positions of the virtual images corresponding to the M preset scene types are acquired, where M is an integer greater than 1; the distribution relationship of the positions of the virtual images respectively corresponding to the M preset scene types is determined; and according to the distribution relationship, the corresponding relationship between the preset scene types and the positions of the virtual images is determined.
- the M preset scene types and the positions of the virtual images respectively corresponding to the M preset scene types, where M is an integer greater than 1, are input into an artificial intelligence algorithm to obtain the corresponding relationship between the preset scene types and the positions of the virtual images.
- the present application provides a method for adjusting the position of a virtual image, which can be applied to a terminal device.
- the method may include: acquiring the first object selected by the user in the first interface displayed by the head-mounted display device, the second preset scene type to which the first object belongs, and the corresponding relationship between the preset scene type and the position of the virtual image; determining, according to the corresponding relationship between the preset scene type and the position of the virtual image, the target position at which the head-mounted display device corresponding to the second preset scene type presents the virtual image; and controlling, according to the target position, the head-mounted display device to form a virtual image at the target position, where the target position of the virtual image is related to the preset scene type to which the first object belongs.
- Method 2.1: sending a first control instruction to the head-mounted display device.
- the first distance between the display component and the optical imaging component in the head-mounted display device is acquired, and the to-be-moved distance of the display component and/or the optical imaging component is determined according to the first distance and the target position; a first control command is generated according to the to-be-moved distance and sent to the head-mounted display device, where the first control command is used to control the movement of the display assembly and/or the optical imaging assembly to adjust the virtual image to the target position.
- the position of the optical imaging component and/or the display component sent by the virtual image position adjustment component in the head-mounted display device can be received; the first distance is determined according to the position of the optical imaging component and/or the display component.
- Method 2.2: sending a second control instruction to the head-mounted display device.
- the first focal length of the optical imaging assembly in the head-mounted display device is acquired, and the to-be-adjusted focal length of the optical imaging assembly is determined according to the first focal length and the target position; a second control command is generated according to the to-be-adjusted focal length and sent to the head-mounted display device, where the second control command is used to control the voltage signal or current signal applied to the optical imaging component, adjust the focal length of the optical imaging component, and adjust the virtual image to the target position.
- the present application provides a method for adjusting the position of a virtual image, which is applied to a display module.
- the display module may include a display assembly, an optical imaging assembly, and a virtual image position adjustment assembly.
- the display assembly is used for displaying an image, the optical imaging assembly is used for forming the image into a virtual image, and the virtual image position adjustment assembly is used to adjust the optical imaging assembly and/or the display assembly;
- the method may include: acquiring the image displayed by the display assembly and the target position of the virtual image corresponding to the image, where the target position of the virtual image is related to the preset scene type to which the image belongs; and controlling the virtual image position adjustment component to adjust the optical imaging component and/or the display component to form a virtual image at the target position.
- the preset scene type to which the image belongs may be the preset scene type to which the content of the image belongs; or, the preset scene type to which the object corresponding to the image belongs.
- for different preset scene types, the target positions at which the display module presents the virtual image are different.
- the preset scene types include office scene types, reading scene types, meeting scene types, interactive game scene types, or video scene types.
- when the preset scene type to which the image belongs is an office scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging component ranges over [0.1, 10] diopters (D); when the preset scene type to which the image belongs is a reading scene type, the range is [0.5, 10] D; when it is a meeting scene type, the range is [0.1, 7.1] D; when it is an interactive game scene type, the range is [0.5, 7.5] D; and when it is a video scene type, the range is [0.1, 7] D.
- Manner 1 is based on the display module including a control component.
- the control component may acquire the first preset scene type to which the image displayed by the display module belongs and the corresponding relationship between the preset scene type and the position of the virtual image, and determine, according to the corresponding relationship between the preset scene type and the position of the virtual image, the target position corresponding to the first preset scene type.
- the control component may receive, from the terminal device, the first preset scene type to which the image belongs. Alternatively, the control component may itself determine the first preset scene type to which the image belongs.
- Manner 2 is based on the display module not including a control component.
- the target position of the virtual image corresponding to the image sent by the terminal device can be received.
- the to-be-moved distance of the display component and/or the optical imaging component is determined.
- the first distance between the display component and the optical imaging component may be obtained; according to the first distance and the target position, the to-be-moved distance of the display component and/or the optical imaging component is determined; and according to the to-be-moved distance, the display component and/or the optical imaging component is driven to move, adjusting the virtual image to the target position.
- Implementation mode 2: receiving the to-be-moved distance of the display component and/or the optical imaging component sent by the terminal device.
- the distance to be moved of the display component and/or the optical imaging component sent by the terminal device may be received, and according to the distance to be moved, the display component and/or the optical imaging component may be driven to move to adjust the virtual image to the target location.
- the optical imaging component includes a zoom lens, and the focal length of the zoom lens to be adjusted is determined.
- the first focal length of the zoom lens may be determined; the to-be-adjusted focal length of the zoom lens may be determined according to the first focal length and the target position; and the voltage signal or current signal applied to the zoom lens may be changed according to the to-be-adjusted focal length to adjust the virtual image to the target position.
- the optical imaging component includes a zoom lens, and receives the focal length of the zoom lens to be adjusted sent by the terminal device.
- the focal length of the zoom lens to be adjusted sent by the terminal device can be received, and the voltage signal or current signal applied to the zoom lens can be changed according to the focal length to be adjusted to adjust the virtual image to the target position.
- the visual acuity parameter, the first preset scene type to which the image displayed by the display component belongs, and the corresponding relationship between the preset scene type and the position of the virtual image can be obtained; and according to the visual acuity parameter and the corresponding relationship between the preset scene type and the position of the virtual image, the target position corresponding to the first preset scene type is determined.
- the absolute value of the difference between the distance between the target position of the virtual image and the human eye and the binocular convergence depth of the human eye is smaller than a threshold.
- the threshold range is [0 diopter D, 1 diopter D].
- the present application provides a method for adjusting the position of a virtual image, which is applied to a display module.
- the display module includes a display assembly, an optical imaging assembly, and a virtual image position adjustment assembly.
- the display assembly is used to display an image, the optical imaging assembly is used to form the image into a virtual image, and the virtual image position adjustment component is used to adjust the optical imaging component and/or the display component;
- the method includes: displaying a first interface; when the user selects the first object in the first interface, acquiring the target position of the virtual image corresponding to the first object, where the target position of the virtual image is related to the preset scene type to which the first object belongs; and, for the image displayed by the display component triggered after the first object is selected, controlling the virtual image position adjustment component to adjust the optical imaging component and/or the display component so that the image forms a virtual image at the target position.
- the object can be an application.
- for different preset scene types, the target positions at which the display module presents the virtual image are different.
- the preset scene types include office scene types, reading scene types, meeting scene types, interactive game scene types, or video scene types.
- when the preset scene type to which the image belongs is an office scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging component ranges over [0.1, 10] diopters (D); when the preset scene type to which the image belongs is a reading scene type, the range is [0.5, 10] D; when it is a meeting scene type, the range is [0.1, 7.1] D; when it is an interactive game scene type, the range is [0.5, 7.5] D; and when it is a video scene type, the range is [0.1, 7] D.
- the display module includes a control component.
- the second preset scene type to which the first object belongs and the corresponding relationship between the preset scene type and the position of the virtual image are obtained; according to the corresponding relationship between the preset scene type and the position of the virtual image, the target position corresponding to the second preset scene type is determined.
- the display module does not include the control component.
- the target position of the virtual image corresponding to the first object sent by the terminal device is received.
- the to-be-moved distance of the display component and/or the optical imaging component is determined.
- the first distance between the display component and the optical imaging component may be obtained; according to the first distance and the target position, the to-be-moved distance of the display component and/or the optical imaging component is determined; and according to the to-be-moved distance, the display component and/or the optical imaging component is driven to move, adjusting the virtual image to the target position.
- the to-be-moved distance of the display component and/or the optical imaging component sent by the terminal device can be received.
- the distance to be moved of the display component and/or the optical imaging component may be received from the terminal device, and the display component and/or the optical imaging component may be driven to move according to the distance to be moved to adjust the virtual image to target location.
- the optical imaging component includes a zoom lens, and the focal length of the zoom lens to be adjusted is determined.
- the first focal length of the zoom lens may be determined; the to-be-adjusted focal length of the zoom lens may be determined according to the first focal length and the target position; and the voltage signal or current signal applied to the zoom lens may be changed according to the to-be-adjusted focal length to adjust the virtual image to the target position.
- the optical imaging component includes a zoom lens, which can receive the focal length of the zoom lens to be adjusted sent by the terminal device.
- the focal length of the zoom lens to be adjusted sent by the terminal device can be received; according to the focal length to be adjusted, the voltage signal or current signal applied to the zoom lens is changed to adjust the virtual image to the target position.
- the visual acuity parameter, the second preset scene type to which the first object belongs, and the corresponding relationship between the preset scene type and the position of the virtual image may be obtained; and according to the visual acuity parameter, and The corresponding relationship between the preset scene type and the position of the virtual image determines the target position corresponding to the second preset scene type.
- the absolute value of the difference between the distance between the target position of the virtual image and the human eye and the binocular convergence depth of the human eye is smaller than a threshold.
- the threshold range is [0 diopter D, 1 diopter D].
- the present application provides a device for adjusting the position of a virtual image.
- the device for adjusting the position of a virtual image is used to implement the method in the above second aspect or any possible design of the second aspect, and includes corresponding functional modules respectively used to implement the steps in the above method.
- the functions can be implemented by hardware, or by executing corresponding software by hardware.
- the hardware or software includes one or more modules corresponding to the above functions.
- the device for adjusting the position of the virtual image may be applied to a head-mounted display device, and may include an acquisition module and a virtual image formation module.
- the acquisition module is used to acquire the image displayed by the head-mounted display device and the target position of the virtual image corresponding to the image; the virtual image forming module is used to form a virtual image at the target position, where the target position of the virtual image is related to the preset scene type to which the image belongs.
- for different preset scene types, the target positions at which the virtual image position adjusting device presents the virtual image are different.
- the preset scene types include office scene types, reading scene types, meeting scene types, interactive game scene types, or video scene types.
- when the preset scene type to which the image belongs is an office scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging component ranges over [0.1, 10] diopters (D); when the preset scene type to which the image belongs is a reading scene type, the range is [0.5, 10] D; when it is a meeting scene type, the range is [0.1, 7.1] D; when it is an interactive game scene type, the range is [0.5, 7.5] D; and when it is a video scene type, the range is [0.1, 7] D.
- the preset scene type to which the image belongs includes any one of the following: the preset scene type to which the content of the image belongs; or the preset scene type to which the object corresponding to the image belongs.
- the obtaining module is configured to obtain the first preset scene type to which the image displayed by the head-mounted display device belongs; obtain the correspondence between the preset scene type and the position of the virtual image; The correspondence between the positions of the virtual images determines the target position corresponding to the first preset scene type.
- the acquisition module is configured to receive the first preset scene type to which the image sent by the terminal device belongs; or, determine the first preset scene type to which the image belongs.
- the obtaining module is configured to: receive the target position of the virtual image corresponding to the image sent by the terminal device.
- the acquisition module is configured to acquire the first distance between the display component and the optical imaging component in the head-mounted display device, and to determine the to-be-moved distance of the display component and/or the optical imaging component according to the first distance and the target position; the virtual image forming module is used to drive the display component and/or the optical imaging component included in the virtual image position adjustment device to move according to the to-be-moved distance, to adjust the virtual image to the target position.
- the acquiring module is used to receive the distance to be moved of the display component and/or the optical imaging component in the head-mounted display device sent by the terminal device; the virtual image forming module is used to, according to the distance to be moved, Drive the display assembly and/or the optical imaging assembly to move to adjust the virtual image to the target position.
- the acquisition module is used to determine the first focal length of the zoom lens in the head-mounted display device, and to determine the to-be-adjusted focal length of the zoom lens according to the first focal length and the target position; the virtual image forming module is used to change, according to the to-be-adjusted focal length, the voltage signal or current signal applied to the zoom lens to adjust the virtual image to the target position.
- the acquisition module is used to receive the focal length to be adjusted of the zoom lens in the head mounted display device sent by the terminal device;
- the virtual image forming module is used to change the voltage signal or current signal applied to the zoom lens according to the to-be-adjusted focal length, so as to adjust the virtual image to the target position.
- the acquisition module is configured to acquire the vision parameter, the first preset scene type to which the image displayed by the head-mounted display device belongs, and the correspondence between preset scene types and positions of the virtual image; and to determine the target position corresponding to the first preset scene type according to the vision parameter and the correspondence between preset scene types and positions of the virtual image.
- the absolute value of the difference between the distance between the target position of the virtual image and the human eye and the binocular convergence depth of the human eye is smaller than a threshold.
- the threshold range is [0 D, 1 D].
- the present application provides a position adjustment device for a virtual image.
- the position adjustment device for a virtual image is used to implement the third aspect or any one of the methods in the third aspect, and includes corresponding functional modules, which are respectively used to implement the steps in the above method.
- the functions can be implemented by hardware, or by executing corresponding software by hardware.
- the hardware or software includes one or more modules corresponding to the above functions.
- the apparatus for adjusting the position of the virtual image may be applied to a terminal device, and the apparatus for adjusting the position of the virtual image may include a determination module, an acquisition module and a control module.
- the determining module is used to determine the first preset scene type to which the image belongs, and the image is used for display on the head-mounted display device;
- the obtaining module is used to obtain the correspondence between the preset scene type and the position of the virtual image;
- the target position where the head-mounted display device presents the virtual image, corresponding to the first preset scene type, is determined according to the correspondence between the preset scene type and the position of the virtual image; the target position of the virtual image is related to the preset scene type to which the image belongs;
- the control module is used to control, according to the target position, the head-mounted display device to form a virtual image at the target position.
- the acquisition module is used to acquire the first distance between the display component and the optical imaging component in the head-mounted display device;
- the determination module is used to determine the to-be-moved distance of the display component and/or the optical imaging assembly according to the first distance and the target position;
- the control module is configured to generate a first control instruction according to the to-be-moved distance, and send the first control instruction to the head-mounted display device, where the first control instruction is used to control the display assembly and/or the optical imaging assembly to move, so as to adjust the virtual image to the target position.
- the acquiring module is configured to receive the position of the optical imaging assembly and/or the display assembly sent by the head-mounted display device; the determining module is configured to determine the first distance according to the position of the optical imaging assembly and/or the display assembly.
- the acquisition module is used to acquire the first focal length of the zoom lens in the head-mounted display device;
- the determination module is used to determine the to-be-adjusted focal length of the zoom lens according to the first focal length and the target position;
- the control module is used to generate a second control instruction according to the to-be-adjusted focal length, and send the second control instruction to the head-mounted display device.
- the second control instruction is used to control the voltage signal or current signal applied to the zoom lens, so as to adjust the focal length of the zoom lens and thereby adjust the virtual image to the target position.
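As a non-limiting sketch of how a to-be-adjusted focal length could be derived from a first focal length and a target position, assume the standard Gaussian thin-lens relation 1/p + 1/q = 1/f with the display at a fixed object distance p. All names, units, and numbers below are illustrative assumptions, not part of the claimed method:

```python
def focal_adjustment(p_mm: float, first_f_mm: float, target_q_mm: float) -> float:
    """Focal length change that places the virtual image at target_q_mm.

    Assumes the Gaussian thin-lens relation 1/p + 1/q = 1/f with the
    real-is-positive sign convention, so target_q_mm is negative for a
    virtual image. Returns the new focal length minus the first focal length.
    """
    new_f = 1.0 / (1.0 / p_mm + 1.0 / target_q_mm)
    return new_f - first_f_mm

# Display 40 mm from the zoom lens, first focal length 50 mm,
# virtual image wanted 500 mm in front of the lens (q = -500 mm):
delta_f = focal_adjustment(p_mm=40.0, first_f_mm=50.0, target_q_mm=-500.0)
print(round(delta_f, 3))  # -6.522, i.e. the focal length must shorten by about 6.5 mm
```

The voltage or current signal applied to the zoom lens would then be chosen to realize this focal length change; that mapping is device-specific and not modeled here.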
- the present application provides a position adjustment device for a virtual image.
- the position adjustment device for a virtual image is used to implement the fourth aspect or any one of the methods in the fourth aspect, and includes corresponding functional modules, which are respectively used to implement the above steps in the method.
- the functions can be implemented by hardware, or by executing corresponding software by hardware.
- the hardware or software includes one or more modules corresponding to the above functions.
- the device for adjusting the position of the virtual image may be applied to a head-mounted display device, and may include a display module, an acquisition module and a virtual image forming module; the display module is used to display a first interface; when the user selects a first object in the first interface, the acquisition module is used to acquire the target position of the virtual image corresponding to the first object, where the target position of the virtual image is related to the preset scene type to which the first object belongs; after selecting the first object triggers the display of an image, the virtual image forming module is used to form a virtual image at the target position.
- when images belong to different preset scene types, the target positions where the virtual image is presented by the head-mounted display device are different.
- the preset scene types include office scene types, reading scene types, meeting scene types, interactive game scene types, or video scene types.
- when the preset scene type to which the image belongs is an office scene type, the distance between the target position where the head-mounted display device presents the virtual image and the optical imaging component ranges over [0.1, 10] diopters (D); when it is a reading scene type, the distance ranges over [0.5, 10] D; when it is a meeting scene type, the distance ranges over [0.1, 7.1] D; when it is an interactive game scene type, the distance ranges over [0.5, 7.5] D; when it is a video scene type, the distance ranges over [0.1, 7] D.
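The correspondence between preset scene types and virtual image position ranges listed above can be sketched as a simple lookup table. Choosing the midpoint of a range as the target position, and the identifier names, are purely illustrative assumptions:

```python
# Hypothetical sketch of the correspondence between preset scene types and
# virtual image position ranges (in diopters, D) described above.
SCENE_RANGES_D = {
    "office":           (0.1, 10.0),
    "reading":          (0.5, 10.0),
    "meeting":          (0.1, 7.1),
    "interactive_game": (0.5, 7.5),
    "video":            (0.1, 7.0),
}

def target_position_diopters(scene_type: str) -> float:
    """Return a target virtual image position in diopters.

    Picks the midpoint of the range for the given scene type; a real
    implementation could pick any value inside the range.
    """
    lo, hi = SCENE_RANGES_D[scene_type]
    return (lo + hi) / 2

print(target_position_diopters("video"))  # midpoint of [0.1, 7] D
```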
- the first object is an application.
- the obtaining module is configured to obtain the second preset scene type to which the first object belongs and the correspondence between preset scene types and positions of the virtual image, and to determine the target position corresponding to the second preset scene type according to the correspondence between preset scene types and positions of the virtual image.
- the obtaining module is configured to receive the target position of the virtual image corresponding to the first object sent by the terminal device.
- the acquisition module is configured to acquire the first distance between the display component and the optical imaging component in the head-mounted display device, and to determine the to-be-moved distance of the display component and/or the optical imaging component according to the first distance and the target position; the virtual image forming module is used to drive the display component and/or the optical imaging component to move according to the to-be-moved distance, so as to adjust the virtual image to the target position.
- the acquiring module is used to receive the to-be-moved distance of the display component and/or the optical imaging component in the head-mounted display device sent by the terminal device; the virtual image forming module is used to drive the display component and/or the optical imaging component to move according to the to-be-moved distance, so as to adjust the virtual image to the target position.
- the acquisition module is used to determine the first focal length of the zoom lens in the head-mounted display device, and to determine the to-be-adjusted focal length of the zoom lens according to the first focal length and the target position; the virtual image forming module is used to change the voltage signal or current signal applied to the zoom lens according to the to-be-adjusted focal length, so as to adjust the virtual image to the target position.
- the acquisition module is used to receive the focal length to be adjusted of the zoom lens in the head mounted display device sent by the terminal device;
- the virtual image forming module is used to change the voltage signal or current signal applied to the zoom lens according to the to-be-adjusted focal length, so as to adjust the virtual image to the target position.
- the obtaining module is configured to obtain the vision parameter, the second preset scene type to which the first object belongs, and the correspondence between preset scene types and positions of the virtual image, and to determine the target position corresponding to the second preset scene type according to the vision parameter and the correspondence between preset scene types and positions of the virtual image.
- the present application provides a device for adjusting the position of a virtual image.
- the device for adjusting the position of a virtual image is used to implement any one of the fifth aspect or the method in the fifth aspect, and includes corresponding functional modules, which are respectively used to implement steps in the above method.
- the functions can be implemented by hardware, or by executing corresponding software by hardware.
- the hardware or software includes one or more modules corresponding to the above functions.
- the device for adjusting the position of the virtual image may be a terminal device, and may include an acquisition module, a determination module, and a control module; the acquisition module is used to acquire the first object selected by the user in the first interface displayed by the head-mounted display device, the second preset scene type to which the first object belongs, and the correspondence between preset scene types and positions of the virtual image; the determination module is used to determine, according to that correspondence, the target position where the head-mounted display device corresponding to the second preset scene type presents the virtual image, where the target position of the virtual image is related to the preset scene type to which the first object belongs; the control module is used to control, according to the target position, the head-mounted display device to form a virtual image at the target position.
- the acquisition module is used to acquire the first distance between the display component and the optical imaging component in the head-mounted display device;
- the determination module is used to determine the to-be-moved distance of the display component and/or the optical imaging assembly according to the first distance and the target position;
- the control module is used to: generate a first control command according to the to-be-moved distance, and send the first control command to the head-mounted display device, where the first control command is used to control the display assembly and/or The optical imaging assembly moves to adjust the virtual image to the target position.
- the acquiring module is configured to receive the position of the optical imaging assembly and/or the display assembly sent by the head-mounted display device; the determining module is configured to determine the first distance according to the position of the optical imaging assembly and/or the display assembly.
- the acquisition module is used to acquire the first focal length of the zoom lens in the head-mounted display device;
- the determination module is used to determine the to-be-adjusted focal length of the zoom lens according to the first focal length and the target position;
- the control module is used to generate a second control instruction according to the to-be-adjusted focal length, and send the second control instruction to the head-mounted display device.
- the second control instruction is used to control the voltage signal or current signal applied to the zoom lens, so as to adjust the focal length of the zoom lens and thereby adjust the virtual image to the target position.
- the present application provides a computer-readable storage medium in which a computer program or instruction is stored; when the computer program or instruction is executed by a head-mounted display device, the head-mounted display device is caused to perform the method in the above second aspect or any possible implementation manner of the second aspect, or to perform the method in the above fourth aspect or any possible implementation manner of the fourth aspect.
- the present application provides a computer-readable storage medium in which a computer program or instruction is stored; when the computer program or instruction is executed by a terminal device, the terminal device is caused to perform the method in the above third aspect or any possible implementation manner of the third aspect, or to perform the method in the above fifth aspect or any possible implementation manner of the fifth aspect.
- the present application provides a computer program product, including a computer program or instruction; when the computer program or instruction is executed by a terminal device, the method in the second aspect or any possible implementation manner of the second aspect, or the method in the fourth aspect or any possible implementation manner of the fourth aspect, is realized.
- the present application provides a computer program product, including a computer program or instruction; when the computer program or instruction is executed by a terminal device, the method in the third aspect or any possible implementation manner of the third aspect, or the method in the fifth aspect or any possible implementation manner of the fifth aspect, is realized.
- FIG. 1a is a schematic diagram of a relationship between an object distance and an image distance provided by this application;
- FIG. 1b is a schematic diagram of an optical path of a triangulation ranging lidar provided by this application;
- FIG. 1c is a schematic diagram of the principle of a vergence accommodation conflict provided by this application;
- FIG. 2a is a schematic diagram of an application scenario provided by this application;
- FIG. 2b is a schematic diagram of the relationship between an application scene and the target position of a virtual image provided by this application;
- FIG. 3 is a schematic structural diagram of a display module provided by this application;
- FIG. 4a is a schematic diagram of a first interface provided by this application;
- FIG. 4b is a schematic diagram of a setting interface of an application scenario provided by this application;
- FIG. 4c is a schematic diagram of a third interface provided by this application;
- FIG. 4d is a schematic diagram of a second interface provided by this application;
- FIG. 4e is a schematic diagram of another second interface provided by this application;
- FIG. 4f is a schematic diagram of a fourth interface provided by this application;
- FIG. 5 is a schematic diagram of a retaining ring fixing a first lens provided by this application;
- FIG. 6a is a schematic structural diagram of an optical imaging assembly provided by this application;
- FIG. 6b is a schematic diagram of the optical path of an optical imaging assembly provided by this application;
- FIG. 6c is a schematic diagram of a lens barrel fixing an optical imaging assembly provided by this application;
- FIG. 6d is a schematic structural diagram of a semi-transparent mirror provided by this application;
- FIG. 7 is a schematic structural diagram of another optical imaging assembly provided by this application;
- FIG. 8 is a schematic structural diagram of another optical imaging assembly provided by this application;
- FIG. 9 is a schematic structural diagram of another optical imaging assembly provided by this application;
- FIG. 10 is a schematic structural diagram of another optical imaging assembly provided by this application;
- FIG. 11 is a schematic structural diagram of another optical imaging assembly provided by this application;
- FIG. 12a is a schematic structural diagram of a liquid crystal lens provided by this application;
- FIG. 12b is a schematic structural diagram of a liquid crystal lens provided by this application;
- FIG. 12c is a schematic structural diagram of a liquid crystal lens provided by this application;
- FIG. 13a is a schematic diagram of changing the polarization state of incident light provided by this application;
- FIG. 13b is a schematic structural diagram of an electrically controlled twisted liquid crystal changing the polarization state of incident light provided by this application;
- FIG. 14a is a schematic structural diagram of a liquid lens provided by this application;
- FIG. 14b is a schematic structural diagram of a liquid lens provided by this application;
- FIG. 15 is a schematic structural diagram of a deformable mirror provided by this application;
- FIG. 16a is a schematic diagram of moving the display assembly without moving the optical imaging assembly provided by this application;
- FIG. 16b is a schematic diagram of moving the optical imaging assembly without moving the display assembly provided by this application;
- FIG. 16c is a schematic diagram of moving both the display assembly and the optical imaging assembly provided by this application;
- FIG. 17a is a schematic structural diagram of a display module provided by this application;
- FIG. 17b is a schematic structural diagram of a display module provided by this application;
- FIG. 17c is a schematic structural diagram of a display module provided by this application;
- FIG. 17d is a schematic diagram of the relationship between the moving distance of an optical imaging assembly and the moving distance of a virtual image provided by this application;
- FIG. 18a is a schematic structural diagram of a first knob provided by this application;
- FIG. 18b is a schematic structural diagram of a cam focusing mechanism provided by this application;
- FIG. 18c is a schematic structural diagram of a second knob provided by this application;
- FIG. 19 is a schematic structural diagram of another display assembly provided by this application;
- FIG. 20 is a schematic flowchart of another virtual image position adjustment method provided by this application;
- FIG. 21 is a schematic flowchart of another virtual image position adjustment method provided by this application;
- FIG. 22 is a schematic flowchart of another virtual image position adjustment method provided by this application;
- FIG. 24 is a schematic flowchart of another virtual image position adjustment method provided by this application;
- FIG. 25 is a schematic flowchart of another virtual image position adjustment method provided by this application;
- FIG. 26 is a schematic flowchart of another virtual image position adjustment method provided by this application;
- FIG. 27 is a schematic structural diagram of a virtual image position adjustment device provided by this application;
- FIG. 28 is a schematic structural diagram of a device for adjusting the position of a virtual image provided by this application;
- FIG. 29 is a schematic structural diagram of a terminal device provided by this application;
- FIG. 30 is a schematic structural diagram of a terminal device provided by this application.
- Near-eye display is a display method used by AR display devices and VR display devices.
- After the light emitted by the display component passes through the optical imaging component, the optical path changes; the image formed at the intersection of the reverse extension lines of the refracted light is a virtual image. The location where the virtual image is formed is called the position of the virtual image, and the plane where the virtual image is located is called the virtual image plane. The distance between the position of the virtual image and the human eye is the depth of focus. It should be understood that there is no actual object where the virtual image is located, and no light converges there.
- images formed by plane mirrors and glasses are virtual images.
- the far and near positions of the virtual object, that is, the virtual image, can be adjusted.
- Adaptive focal plane display refers to automatically simulating the refraction adjustment and binocular vergence adjustment that the human eye performs when observing objects at different distances.
- Eye tracking refers to the tracking of eye movement by measuring the position of the gaze point of the eye or the movement of the eye relative to the head.
- An eye tracking device is a device that can track and measure eye position and eye movement information.
- Presbyopia refers to the gradual hardening and thickening of the lens of the eye; the adjusting ability of the eye muscles also decreases, resulting in reduced focusing ability. Presbyopia is usually at most 300 to 350 degrees.
- Astigmatism is a refractive error of the eye related to the curvature of the cornea.
- the cornea is more curved in certain angular regions and flatter in others, so its surface is not circularly symmetric.
- the transflective mirror can also be called a beam splitter or a half mirror. It is an optical element made by coating a semi-reflective film on optical glass, or coating a semi-transparent and semi-reflective film on one optical surface of a lens, to change the ratio in which the incident beam is transmitted and reflected. Depending on the coating, reflection can be enhanced to increase the reflected light intensity, or transmission can be enhanced to reduce it.
- a half mirror can transmit and reflect incident light in a ratio of 50:50. That is, the transmittance and reflectance of the half mirror are each 50%. When the incident light passes through the half mirror, the transmitted light intensity and the reflected light intensity each account for 50%.
- the reflectivity and transmittance can be selected according to specific requirements; for example, the reflectivity can be higher than 50% and the transmittance lower than 50%, or the reflectivity can be lower than 50% and the transmittance higher than 50%.
- the optical power is equal to the difference between the convergence of the image-side beam and the convergence of the object-side beam, and it characterizes the ability of the optical system to deflect light.
- in general, the optical power is expressed as the reciprocal of the image-side focal length (the refractive index of air is approximately taken as 1).
- the degree of glasses equals the diopter value multiplied by 100.
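Combining the two definitions above (optical power as the reciprocal of the image-side focal length, and glasses degree as the diopter value times 100), a minimal illustrative sketch, with assumed function names:

```python
def optical_power_diopters(focal_length_m: float) -> float:
    """Optical power (D) as the reciprocal of the image-side focal length in meters."""
    return 1.0 / focal_length_m

def glasses_degree(power_d: float) -> float:
    """Degree of glasses = diopter value x 100, as stated above."""
    return power_d * 100

# A lens with a 0.5 m image-side focal length has a power of 2 D, i.e. 200 degrees.
print(glasses_degree(optical_power_diopters(0.5)))  # 200.0
```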
- a 1/4 wave plate is a birefringent optical device with two optical axes, a fast axis and a slow axis; the linearly polarized light components along the fast axis and the slow axis acquire a π/2 phase difference after passing through the 1/4 wave plate.
- Reflective polarizers can be used to transmit light of one polarization state and reflect light of another polarization state.
- a reflective polarizer may be a multilayer dielectric film polarizer or a wire grid polarizer.
- VAC: vergence and accommodation conflict.
- the distance between the center of the optical imaging component and the center of the display screen is called the object distance p
- the distance between the center of the imaging lens group and the virtual image is called the image distance q
- the equivalent focal length of the optical imaging component is f; the object distance p, the image distance q and the equivalent focal length f satisfy the following formula (1).
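Formula (1) itself is not reproduced in this text. Assuming it is the standard Gaussian thin-lens equation, 1/p + 1/q = 1/f, a minimal sketch of solving it for the image distance is (an illustrative assumption, not the claimed formula):

```python
def image_distance(p: float, f: float) -> float:
    """Solve 1/p + 1/q = 1/f for q (same length units as p and f).

    With the real-is-positive sign convention, a negative result means the
    image is virtual and located on the same side of the lens as the object.
    """
    return 1.0 / (1.0 / f - 1.0 / p)

# Display placed just inside the focal length of the imaging optics:
# p = 40 mm, f = 50 mm -> q ≈ -200 mm, i.e. a virtual image 200 mm in front of the lens.
q = image_distance(p=40.0, f=50.0)
print(q)  # approximately -200.0
```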
- the triangulation ranging lidar uses the triangle formed by the outgoing path and the reflected path of the measurement light, and deduces the distance of the measured target from the triangle formula. Its working principle is as follows: a laser signal is sent out by a laser transmitter, and after being reflected by the measured target, the laser is received and imaged by a laser receiver on a position sensor (such as a charge-coupled device (CCD)). Since the transmitter and the receiver are separated by a distance, objects at different distances are imaged at different positions on the CCD according to the optical path; the distance of the measured target is then deduced from the triangle formula, as shown in FIG. 1b below.
- the laser beam emitted by the laser 1 is focused on the measured target 6 by the lens 2, the reflected light from the measured target 6 is focused on the CCD array 4 by the lens 3, and the signal processor 5 calculates the position of the light spot on the CCD array 4 through the trigonometric relationship; the displacement of the measured target can thus be obtained.
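By similar triangles, the geometry above reduces to distance = baseline × focal length ÷ spot displacement. The sketch below assumes that standard triangulation form; the parameter names and numbers are illustrative:

```python
def triangulation_distance(baseline_mm: float, focal_mm: float,
                           spot_offset_mm: float) -> float:
    """Distance to the target from similar triangles: D = baseline * focal / offset.

    baseline_mm:    separation between the laser emitter and the receiver lens
    focal_mm:       focal length of the receiver lens (lens 3 in FIG. 1b)
    spot_offset_mm: displacement of the laser spot on the CCD array
    """
    return baseline_mm * focal_mm / spot_offset_mm

# The farther the target, the smaller the spot displacement on the CCD.
print(triangulation_distance(50.0, 25.0, 2.5))   # 500.0 mm
print(triangulation_distance(50.0, 25.0, 1.25))  # 1000.0 mm
```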
- the vergence accommodation conflict arises because, when the human eye observes three-dimensional (3D) content, the focal depth of binocular accommodation is always fixed on the screen, while binocular vergence converges at the target distance defined by parallax, which may be located in front of or behind the screen; the mismatch between the depth of focus and the depth of vergence causes the vergence accommodation conflict.
- the display module can be applied to a near eye display (NED) device, such as VR glasses, or a VR helmet.
- the user wears the NED device (see Fig. 2a) to play games, watch movies (or TV series), participate in virtual conferences, participate in video education, or video shopping.
- the target position of the virtual image may be different.
- the target position of the virtual image corresponding to the preset scene type 1 is position 1
- the target position of the virtual image corresponding to the preset scene type 2 is position 2
- the target position of the virtual image corresponding to the preset scene type 3 is position 3 .
- when the virtual image is at the target position, the focal depth of the human eye is basically the same as the binocular vergence depth, which helps to reduce the vergence accommodation conflict. That is to say, in order to minimize the vergence accommodation conflict, the position of the virtual image needs to be adjusted. It can also be understood that, across different preset scenarios, this is a multi-focal-plane display.
- the present application provides a display module, which can precisely adjust the position of the virtual image, so that the virtual image is formed at the target position, thereby helping to reduce the convergence adjustment conflict.
- the display module proposed by the present application will be described in detail below with reference to FIG. 3 to FIG. 19.
- the display module may include a display component 301, an optical imaging component 302 and a virtual image position adjustment component 303; the display component 301 is used to display an image; the optical imaging component 302 is used to form the image into a virtual image; the virtual image position adjustment component 303 is used to adjust at least one of the optical imaging component 302 and the display component 301 so that the virtual image is adjusted to a target position, where the target position of the virtual image is related to the preset scene type to which the image belongs.
- the preset scene type to which the image belongs may be the preset scene type to which the content of the image belongs. That is, the position of the virtual image may be set for the preset scene type to which the image content belongs.
- the preset scene type to which the image belongs may also be the preset scene type to which the object corresponding to the image belongs; the object corresponding to the image can be understood as the application whose image is displayed after the application is entered. Further, different virtual image positions can also be set for different image contents of the same object; that is, after an object is selected and its image content is displayed, the preset scene type to which the image content belongs can be further determined. For example, after a game application is selected, preset scene types for different image contents are set in the game application, so the preset scene types to which the image contents belong after entering the game application can be further determined.
- the optical imaging component and/or the display component can be adjusted by the virtual image position adjustment component, so that virtual images under different preset scene types can be accurately adjusted to the corresponding target positions and the user can clearly see the image displayed by the display module.
- the position of the virtual image is automatically adjusted based on different preset scene types (that is, the display module can perform adaptive focal plane display), thereby helping to reduce vergence adjustment conflicts.
- when images belong to different preset scene types, the display module presents the virtual images at different target positions. It should be understood that, when images belong to different preset scene types, the target positions where the display module presents the virtual images may also be the same.
- the preset scene type is, for example, an office scene type, a reading scene type, a meeting scene type, an interactive game scene type or a video scene type.
- when the preset scene type is the office scene type, the distance between the target position where the head-mounted display device presents the virtual image and the optical imaging component ranges over [0.1, 10] diopters (D); for the reading scene type, the range is [0.5, 10] D; for the meeting scene type, the range is [0.1, 7.1] D; for the interactive game scene type, the range is [0.5, 7.5] D; for the video scene type, the range is [0.1, 7] D.
- the preset scene types may be pre-divided according to certain rules.
- the content of some images can be classified into one type of preset scene according to the rules; or, some objects (such as applications) can also be classified into one type of preset scene according to the rules. For example, certain video-playback applications can be classified as the video scene type, and certain shopping applications can be classified as the shopping scene type.
- when the virtual image is at the target position, the absolute value of the difference between the focus depth of the virtual image at the target position and the vergence depth is smaller than a threshold. It can also be understood that the absolute value of the difference between the distance from the target position of the virtual image to the human eye and the binocular convergence depth of the human eye is smaller than the threshold. Further, optionally, the threshold range is [0 diopters D, 1 diopter D]. It should be understood that the threshold may be determined according to the tolerance of the human eye to the vergence-accommodation conflict (VAC).
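The threshold condition above can be sketched numerically. This is an illustrative sketch, not the patent's implementation; the 1 D tolerance is taken from the optional threshold range stated above, and all function names are assumptions.

```python
# Illustrative sketch: checking whether a virtual image position keeps the
# vergence-accommodation conflict (VAC) below a tolerance threshold.
# Depths are expressed in diopters (1 / meters).
VAC_THRESHOLD_D = 1.0  # assumed tolerance, the upper end of [0 D, 1 D]

def meters_to_diopters(d_m: float) -> float:
    return 1.0 / d_m

def vac_acceptable(focus_depth_d: float, vergence_depth_d: float,
                   threshold_d: float = VAC_THRESHOLD_D) -> bool:
    """True if |focus depth - vergence depth| is below the threshold."""
    return abs(focus_depth_d - vergence_depth_d) < threshold_d

# A virtual image at 1 m (1 D) viewed while the eyes converge at 0.5 m (2 D)
# fails the check, since |1 - 2| = 1 is not strictly below 1 D:
print(vac_acceptable(meters_to_diopters(1.0), meters_to_diopters(0.5)))
```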
- each functional component and structure shown in FIG. 3 will be introduced below to give exemplary specific implementation solutions.
- in the following description, the reference numerals of the display component, the optical imaging component, and the virtual image position adjustment component are omitted.
- the display component, as an image source, can provide display content for the display module; for example, it can provide 3D content, interactive images, and the like. That is, the display assembly can spatially intensity-modulate incident light to generate light that carries image information.
- the light carrying the image information can be propagated (e.g., refracted) to the human eye for imaging through the optical imaging component; when the human eye sees the refracted light, it perceives the light as coming from the position where the reverse extensions of the rays intersect, and the image formed at that intersection is a virtual image.
- the display component may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a micro light-emitting diode (micro-LED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, or a quantum dot light-emitting diode (QLED) display.
- OLED has high luminous efficiency and high contrast ratio; mini-LED display has high luminous brightness, which can be applied to scenes that require strong luminous brightness.
- the display assembly may also be a reflective display screen.
- a liquid crystal on silicon (LCOS) display screen or a reflective display screen based on a digital micro-mirror device (DMD).
- LCOS and DMD have higher resolution or aperture ratio because they are reflective structures.
- the display component may also be used to display a first interface, and the first interface may include multiple objects. Further, optionally, the objects include but are not limited to applications.
- Fig. 4a is a schematic diagram of a first interface exemplarily shown in this application.
- the multiple objects displayed on the first interface 400 are illustrated as the icons of four applications: video, conference, web page, and game.
- the first interface can also be an Android system desktop launcher (Launcher) interface.
- the first interface 400 may further include a cursor for selecting an object, see FIG. 4a above.
- the user can select objects by operating the cursor.
- the cursor can be moved to the first object to be selected, and the touch handle or other independent keys can be clicked (or double-clicked) to select the first object.
- the virtual image position adjustment component can be triggered to adjust the position of the virtual image.
- the object can also be selected in other ways.
- it may be in response to a user's quick gesture operation (for example, three-finger swipe up, two consecutive knuckle taps on the display screen, etc.), or operations such as voice commands, which are not limited in this application.
- after the display module detects that the first object is selected, it needs to further acquire the target position corresponding to the first object.
- Three implementations of determining the target position are exemplarily shown as follows. It should be noted that these three implementation manners may be executed by the control component.
- Implementation manner 1: determine, according to the acquired correspondence between preset scene types and virtual image positions, the target position corresponding to the preset scene type to which the first object belongs.
- different preset scenes have suitable virtual image positions (ie, target positions).
- the human eye can clearly see the image displayed by the display module.
- in a possible manner, M preset scene types and the positions of the virtual images corresponding to the M preset scene types can be obtained; a distribution relationship of these positions is determined; and, according to the distribution relationship, the correspondence between preset scene type and virtual image position is determined, where M is an integer greater than 1.
- the distribution relationship may obey a Gaussian distribution, and the target position of the virtual image may be an expected value of the Gaussian distribution.
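The Gaussian case above can be sketched as follows. This is an assumption-laden illustration (the positions, the scene type, and the use of the sample mean as the distribution's expected value are all illustrative, not from the patent):

```python
# Hedged sketch: if measured comfortable virtual-image positions for one
# preset scene type follow a Gaussian distribution, the target position can
# be taken as the distribution's expected value (estimated by the mean).
from statistics import mean

# hypothetical measured positions (in diopters) for, e.g., a reading scene
reading_positions_d = [1.8, 2.1, 2.0, 1.9, 2.2]

target_position_d = mean(reading_positions_d)  # expected value of the fit
print(round(target_position_d, 2))  # 2.0
```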
- in another possible manner, M preset scene types and the positions of the virtual images corresponding to the M preset scene types can be obtained and input into an artificial intelligence algorithm, so that the correspondence between preset scene type and virtual image position can be obtained.
- the positions of the virtual images corresponding to the M preset scenes input by the user may be received.
- the binocular disparity of the images in the M preset scenes is obtained, and the positions of the virtual images corresponding to the M preset scenes are determined respectively according to the binocular disparity of the images in the M preset scenes. For example, according to the position of the same element in the content of the two images, the depth of the image is calculated to determine the position of the virtual image.
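The disparity-to-depth idea above can be sketched with the standard pinhole stereo relation (depth = focal length x baseline / disparity). The patent does not specify this formula; the relation, the function name, and all numeric values below are assumptions for illustration.

```python
# Illustrative sketch of recovering the depth of an element from its
# positional offset (disparity) between the left- and right-eye images.
def depth_from_disparity(disparity_px: float, baseline_m: float,
                         focal_px: float) -> float:
    """Depth (meters) of an element seen at `disparity_px` pixel offset
    between the two images, under the pinhole stereo model."""
    if disparity_px <= 0:
        raise ValueError("element at infinity or invalid disparity")
    return focal_px * baseline_m / disparity_px

# assumed eye baseline 63 mm, virtual camera focal length 1000 px,
# disparity 63 px -> the element sits at 1 m:
print(depth_from_disparity(63.0, 0.063, 1000.0))  # 1.0
```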
- the corresponding relationship between the preset scene type and the position of the virtual image can be obtained by a developer or a display module manufacturer. It can also be understood that the corresponding relationship between the preset scene type and the position of the virtual image may be set by the developer or the manufacturer of the display module.
- the acquired correspondence between the preset scene type and the position of the virtual image may be pre-stored in the display module or a memory outside the display module. It should be understood that the corresponding relationship may be stored in the form of a table.
- Table 1 exemplarily shows the correspondence between a preset scene type and the position of the virtual image.
- the target distance range of the virtual image in Table 1 refers to the range of distances between the position where the head-mounted display device presents the virtual image and the optical imaging component, and the optimal target distance of the virtual image refers to the optimal distance between the target position where the head-mounted display device presents the virtual image and the optical imaging component.
- Table 1:

  Preset scene type   | Target distance range of the virtual image | Optimal target distance
  Office              | [0.1, 10] diopters D                       | 1 D (i.e., 1 m)
  Reading             | [0.5, 10] diopters D                       | 2 D (i.e., 0.5 m)
  Meeting             | [0.1, 7.1] diopters D                      | 0.583 D (i.e., 1.714 m)
  Interactive game    | [0.5, 7.5] diopters D                      | 1 D (i.e., 1 m)
  Video/Music/Live    | [0.1, 7] diopters D                        | 0.5 D (i.e., 2 m)
- as shown in Table 1, the virtual image target distance range of the office preset scene type is [0.1, 10] diopters D, and the optimal target distance is 1 D (i.e., 1 m); the virtual image target distance range of the reading preset scene type is [0.5, 10] D, and the optimal target distance is 2 D (i.e., 0.5 m); the virtual image target distance range of the meeting preset scene type is [0.1, 7.1] D, and the optimal target distance is 0.583 D (i.e., 1.714 m); the virtual image target distance range of the interactive game preset scene type is [0.5, 7.5] D, and the optimal target distance is 1 D (i.e., 1 m); and the virtual image target distance range of preset scene types such as video/music/live broadcast is [0.1, 7] D, and the optimal target distance is 0.5 D (i.e., 2 m). It can also be understood that different preset scene types have suitable position ranges for the virtual image.
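The Table 1 correspondence can be sketched as a simple lookup, together with the diopter-to-meter conversion (distance in meters = 1 / diopters) used throughout the table. The dictionary keys are illustrative names; the diopter values come from Table 1.

```python
# Sketch of the "preset scene type -> optimal target distance" lookup.
OPTIMAL_TARGET_D = {
    "office": 1.0,            # 1 D    -> 1 m
    "reading": 2.0,           # 2 D    -> 0.5 m
    "meeting": 0.583,         # 0.583 D -> ~1.714 m
    "interactive_game": 1.0,  # 1 D    -> 1 m
    "video": 0.5,             # 0.5 D  -> 2 m
}

def target_distance_m(scene_type: str) -> float:
    """Optimal virtual-image distance in meters for a preset scene type."""
    return 1.0 / OPTIMAL_TARGET_D[scene_type]

# 1.715 m here vs. 1.714 m in Table 1: the tabulated 0.583 D is rounded.
print(round(target_distance_m("meeting"), 3))
```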
- in another example, the distance range between the target position where the head-mounted display device presents the virtual image and the optical imaging component is [0.1, 3.0] diopters D; or, when the preset scene type to which the image belongs is the interactive game scene type, the distance range between the target position where the head-mounted display device presents the virtual image and the optical imaging component is [3.0, 5.0] D; or, when the preset scene type to which the image belongs is the video scene type, the distance between the target position where the virtual image is presented by the head-mounted display device and the optical imaging component is (5.0, 7] D.
- the distance between the target position of the virtual image corresponding to the video scene type and the optical imaging component is greater than the distance between the target position of the virtual image corresponding to the conference scene type and the optical imaging component; and/or, the conference The distance between the target position of the virtual image corresponding to the scene type and the optical imaging assembly is greater than the distance between the target position of the virtual image corresponding to the reading scene type and the optical imaging assembly.
- the distance between the target position of the virtual image corresponding to the video scene type and the optical imaging assembly is greater than the distance between the target position of the virtual image corresponding to the conference scene type and the optical imaging assembly; and/or, The distance between the target position of the virtual image corresponding to the conference scene type and the optical imaging component is greater than the distance between the target position of the virtual image corresponding to the office scene type and the optical imaging component; and/or, the target position of the virtual image corresponding to the office scene type is the same as The distance between the optical imaging components is greater than the distance between the target position of the virtual image corresponding to the reading scene type and the optical imaging components.
- the distance between the target position of the virtual image corresponding to the office scene type and the optical imaging component is relatively close to the distance between the target position of the virtual image corresponding to the interactive game scene type and the optical imaging component.
- Implementation manner 2: the user defines the target position of the virtual image corresponding to the first object.
- the user may input the user-defined target position of the virtual image through an interactive manner such as voice or virtual keys. It can also be understood that after selecting the first object, the user also needs to input a user-defined target position of the virtual image corresponding to the first object.
- the user can enter the setting interface 500 of the first object (refer to FIG. 4b), in which features of the first object (such as the best depth, minimum resolution, etc.) can be set.
- the user can select the feature of “best depth (that is, the target position of the virtual image)” in the setting interface 500, and then a dialog box for inputting the target position of the virtual image pops up, and the user can input the virtual keyboard or voice in the pop-up dialog box. Customize the target position of the virtual image and confirm it.
- alternatively, the user can select the “best depth (i.e., target position of the virtual image)” feature in the setting interface 500 to enter the second interface 600 (refer to FIG. 4c), where the user can enter the customized target position of the virtual image by virtual keyboard, voice, or other means, and confirm it.
- Implementation manner 3: the target position of the virtual image corresponding to the first object is determined according to the eye tracking component.
- specifically, the display module may further include an eye-tracking component; the eye-tracking component is used for determining the convergence depth at which both eyes gaze at the image; and the virtual image position adjustment component is used for driving, according to the convergence depth, the optical imaging assembly and/or the display assembly to move, adjusting the virtual image to the target position.
- the eye tracking component can be used to determine the convergence depth at which both eyes gaze at the image triggered for display after the first object is selected, and the position at that convergence depth can be determined as the target position of the virtual image.
- the display component can also be used to display a third interface 700, and the third interface 700 can be used to input binocular vision parameters.
- FIG. 4d is an exemplary schematic diagram illustrating a third interface of the present application.
- the third interface 700 takes as an example that the binocular vision parameter is the degree of myopia.
- the third interface 700 can display an option box for left eye myopia degree and an option box for right eye myopia degree, and binocular vision parameters can be selected by pulling down the option box of left eye myopia degree and the option box of right eye myopia degree.
- the third interface 700 may display a virtual keyboard, an input box for the degree of myopia for the left eye, and an input box for the degree of myopia for the right eye.
- the user can input the visual acuity parameter of the left eye in the visual acuity box of the left eye through the virtual keyboard, and input the visual acuity parameter of the right eye in the visual acuity box of the right eye through the virtual keyboard.
- the display module can trigger the virtual image position adjustment component to adjust the position of the virtual image accordingly.
- the display module may correspondingly determine the position of the virtual image according to the corresponding relationship between the visual acuity parameter and the position of the virtual image.
- the correspondence between the binocular vision parameters and the position of the virtual image can be pre-stored in the memory. For example, 300 degrees for both eyes corresponds to one virtual image position, 350 degrees for both eyes corresponds to another, and 300 degrees for the left eye with 400 degrees for the right eye corresponds to yet another.
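The pre-stored correspondence just described can be sketched as a lookup keyed by the two prescriptions. The keys mirror the examples above; the virtual image positions (in diopters) are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: pre-stored "(left degrees, right degrees) -> virtual image
# position" correspondence. Positions are illustrative placeholders.
POSITION_BY_PRESCRIPTION = {
    (300, 300): 1.0,   # both eyes 300 degrees -> one position (diopters)
    (350, 350): 1.2,   # both eyes 350 degrees -> another position
    (300, 400): 1.4,   # left 300, right 400   -> yet another position
}

def virtual_image_position(left_deg: int, right_deg: int) -> float:
    """Look up the pre-stored virtual image position for a prescription."""
    try:
        return POSITION_BY_PRESCRIPTION[(left_deg, right_deg)]
    except KeyError:
        raise KeyError("no pre-stored position for this prescription")

print(virtual_image_position(300, 400))  # 1.4
```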
- the display component may first display the fourth interface 800, and the fourth interface 800 may include a selection box of the vision parameter type, as shown in FIG. 4f, wherein the vision parameter type includes but is not limited to myopia, astigmatism, presbyopia, or hyperopia. Users can select the type of vision parameter they need. In addition, vision parameters are usually set when the display module is used for the first time.
- before the display component displays the image, a rendering operation also needs to be performed on the picture; for example, the rendering operation may be performed by the control component.
- the optical imaging component can be used to form a virtual image on the image displayed by the display component in a virtual space, and project the image displayed by the display component to the human eye.
- Structure 1: the optical imaging component is the first lens.
- the first lens may be a single-piece spherical lens or aspherical lens, or may be a combination of multiple spherical or aspherical lenses.
- the combination of multiple spherical or aspherical lenses can improve the imaging quality of the system and reduce the aberration of the system.
- the spherical lens and the aspherical lens can be Fresnel lenses, and Fresnel lenses can reduce the volume and weight of the optical imaging assembly.
- the material of the spherical lens or the aspherical lens can be glass or resin; the resin material can reduce the weight of the optical imaging assembly, and the glass material has higher imaging quality.
- the first lens can be fixed by the retaining ring.
- FIG. 5 is a schematic diagram of a retaining ring for fixing the first lens provided in the present application.
- the retaining ring includes at least one opening, and one end of the first lens is inserted into the retaining ring from the opening of the retaining ring.
- the first surface of the retaining ring is flat and can scatter the received laser beam; that is, the first surface of the retaining ring is flat and has certain scattering properties.
- the first surface of the retaining ring faces the direction of the laser beam emitted by the triangulation ranging lidar (see Figure 17a below). In this way, it helps to improve the utilization rate of the light beam.
- Structure 2: the optical imaging component includes a folded-optical-path optical component.
- FIG. 6a is a schematic structural diagram of an optical imaging assembly provided by the present application.
- the optical imaging assembly sequentially includes, along the direction of the main optical axis of the first half mirror, a polarizer, a first 1/4 wave plate, a first half mirror, a second 1/4 wave plate and a reflective polarizer.
- the optical path can be seen in Fig. 6b.
- the polarizer is used to filter the polarization state of the image-forming light from the display assembly to the same polarization state (i.e., the first linear polarization), such as horizontal linearly polarized light or vertical linearly polarized light, which can be absorptive or reflective.
- the first linearly polarized light may be, for example, P-polarized light or S-polarized light.
- the first 1/4 wave plate is used to convert the first linearly polarized light from the polarizer into first circularly polarized light, and transmit the first circularly polarized light to the first half mirror.
- the first half mirror is used to transmit the first circularly polarized light from the first 1/4 wave plate to the second 1/4 wave plate; the second 1/4 wave plate is used to convert the received first circularly polarized light into second linearly polarized light, the polarization direction of the second linearly polarized light being the same as that of the first linearly polarized light; and the reflective polarizer is used to reflect the second linearly polarized light back to the second 1/4 wave plate.
- the second 1/4 wave plate is also used to convert the received second linearly polarized light into second circularly polarized light, whose rotation direction is the same as that of the first circularly polarized light (Figure 6b uses left-handed circularly polarized light as an example); the first half mirror is also used to reflect the second circularly polarized light from the second 1/4 wave plate as third circularly polarized light, whose rotation direction is opposite to that of the second circularly polarized light; the second 1/4 wave plate is also used to convert the third circularly polarized light from the half mirror into third linearly polarized light; and the reflective polarizer is also used to transmit the third linearly polarized light to the human eye to form an image.
- the folded optical path optical assembly may further include one or more aberration compensation lenses.
- These aberration compensation lenses can be used for aberration compensation. For example, it can be used to compensate spherical aberration, coma, astigmatism, distortion and chromatic aberration in the imaging process of spherical or aspherical lenses.
- These aberration compensation lenses can be located anywhere in the folded optical path. For example, these aberration compensation lenses may be located between the first half mirror and the reflective polarizer.
- Figure 6a takes as an example the inclusion of aberration compensation lens 1 and aberration compensation lens 2, where aberration compensation lens 1 is located between the polarizer and the display component, and aberration compensation lens 2 is located between the reflective polarizer and the human eye.
- the aberration compensation lens may be a single spherical or aspheric lens, or a combination of multiple spherical or aspheric lenses; the combination of multiple spherical or aspheric lenses can improve the imaging quality of the system and reduce the aberrations of the system.
- the material of the aberration compensation lens may be optical resin, and the material of the aberration compensation lens 1 and the aberration compensation lens 2 may be the same or different.
- the optical imaging component can be fixed in the lens barrel, please refer to Fig. 6c. It should be understood that the optical imaging assembly in the above structure 1 can also be fixed in the lens barrel.
- through the above structure, the imaging optical path can be folded and shortened, thereby helping to reduce the volume of the optical imaging assembly, and thus the volume of the display module that includes the optical imaging assembly.
- FIG. 6d is a schematic structural diagram of a half mirror provided by the present application.
- the most approximate spherical radius of the transmissive surface of the half mirror is r1, where a negative r1 indicates a concave surface and a positive r1 indicates a convex surface; the most approximate spherical radius of the transflective surface of the half mirror is r2, where a negative r2 indicates a convex surface and a positive r2 indicates a concave surface; and the refractive index of the material of the half mirror is n.
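The quantities r1, r2 and n defined for FIG. 6d suggest the familiar thin-lens lensmaker's relation. The sketch below is illustrative only: it uses the standard textbook sign convention (radius positive when the center of curvature lies beyond the surface), which differs from the convention stated above for FIG. 6d, and it ignores the mirror's reflective contribution to optical power.

```python
# Hedged sketch: thin-lens lensmaker's equation, 1/f = (n-1)(1/r1 - 1/r2),
# under the standard sign convention (an assumption, not the patent's).
def thin_lens_focal_m(r1_m: float, r2_m: float, n: float) -> float:
    """Focal length (meters) of a thin lens with surface radii r1, r2
    and refractive index n, surrounded by air."""
    power = (n - 1.0) * (1.0 / r1_m - 1.0 / r2_m)
    return 1.0 / power

# symmetric biconvex element: r1 = +0.1 m, r2 = -0.1 m, n = 1.5
print(thin_lens_focal_m(0.1, -0.1, 1.5))  # 0.1
```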
- the optical imaging assembly includes a second half mirror and a second lens.
- the display assembly may include a first display screen and a second display screen, and the resolution of the first display screen is higher than that of the second display screen.
- FIG. 7 is a schematic structural diagram of another optical imaging assembly provided by the present application.
- the optical imaging assembly includes a second half mirror and a second lens.
- the first display screen is used to display the central area of the image;
- the second display screen is used to display the edge area of the image;
- the second half mirror is used to reflect the central area of the image from the first display screen to the second lens, and transmits the edge area of the image from the second display screen to the second lens;
- the second lens is used to combine the central area of the image from the second half mirror and the edge area of the image into an image and project it to the human eye , and form a complete virtual image at the target location.
- the resolution distribution of the human eye can thus be approximated, achieving a perception similar to real vision with a small number of pixels. It should be understood that the human eye has a high resolution of about 1' in the foveal region (approximately the central 3°), while the resolution of peripheral vision drops to around 10'.
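The pixel savings implied by the two resolution regimes can be estimated roughly. The 100° total FOV below is an assumption for illustration; the 1' foveal and 10' peripheral figures are the ones stated above.

```python
# Rough, illustrative estimate of one-dimensional pixel counts: uniform
# 1-arcminute sampling vs. foveated sampling (1' in the central 3 degrees,
# 10' elsewhere). The 100-degree total FOV is an assumed value.
ARCMIN_PER_DEG = 60

def pixels_1d(fov_deg: float, resolution_arcmin: float) -> float:
    """Pixels along one dimension to meet a given angular resolution."""
    return fov_deg * ARCMIN_PER_DEG / resolution_arcmin

total_fov = 100.0   # assumed total FOV in degrees
foveal_fov = 3.0    # foveal region per the text above

uniform = pixels_1d(total_fov, 1.0)
foveated = pixels_1d(foveal_fov, 1.0) + pixels_1d(total_fov - foveal_fov, 10.0)
print(uniform, foveated)  # 6000.0 762.0
```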
- the optical imaging assembly may further include a third lens and a fourth lens.
- the third lens is used for converging the central area of the image from the first display screen and transmitting the converged central area to the second half mirror; the fourth lens is used for converging the edge region of the image from the second display screen and propagating the converged edge region to the second half mirror.
- the optical imaging assembly can be fixed in the lens barrel, and connected to the virtual image position adjustment assembly through components such as a cam or a lead screw.
- the optical imaging component includes a multi-channel lens.
- FIG. 8 is a schematic structural diagram of another optical imaging assembly provided by the present application.
- the optical imaging component is a multi-channel lens.
- the multi-channel lens is formed by successively connecting M pairs of reflective surfaces with free-form surface lenses, where M is an integer greater than 1.
- Figure 8 is exemplified by including two channels (i.e., channel 1 and channel 2).
- Each channel in a multi-channel lens can correspond to a smaller field of view (FOV). That is to say, a multi-channel lens can decompose a large FOV into a combination of multiple smaller FOVs, and one smaller FOV corresponds to one channel.
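The decomposition idea can be sketched trivially. Equal splitting is an assumption made here for illustration; the patent does not require the per-channel FOVs to be equal.

```python
# Toy sketch: a large field of view decomposed into M smaller per-channel
# FOVs (assumed equal here), one per channel of the multi-channel lens.
def split_fov(total_fov_deg: float, channels: int) -> list[float]:
    """Per-channel FOVs (degrees) for an equal M-way decomposition."""
    per_channel = total_fov_deg / channels
    return [per_channel] * channels

print(split_fov(100.0, 2))  # [50.0, 50.0]
```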
- because the image quality of the edge FOV of a lens with a large FOV is difficult to control, a combination of multiple lenses is usually required to correct the aberration of the edge FOV.
- in this application, because the multi-channel lens can decompose the large FOV into multiple smaller FOVs, the imaging quality of the edge FOV is improved, the required aperture of the optical imaging lens can be reduced, and no additional lens is required to correct the aberration of the edge FOV; therefore, this helps to reduce the volume of the optical imaging assembly.
- the optical imaging assembly can be fixed in the lens barrel, and is connected to the virtual image position adjustment assembly through components such as a cam or a lead screw.
- the optical imaging component includes a micro lens array (Micro lens array, MLA).
- the optical imaging assembly may include two microlens arrays.
- Each microlens in the microlens array may correspond to a smaller FOV, and each smaller FOV may be imaged by one microlens. That is, the microlens array can decompose a large FOV into a composite of multiple smaller FOVs.
- because the image quality of the edge FOV of a lens with a large FOV is difficult to control, a combination of multiple lenses is usually required to correct the aberration of the edge FOV.
- in this application, because the microlens array can decompose the large FOV into multiple smaller FOVs, the imaging quality of the edge FOV is improved, the required aperture of the optical imaging lens can be reduced, and no lens is required to correct the aberration of the edge FOV; therefore, this helps to reduce the size of the optical imaging assembly.
- the display module may include two microlens arrays, each of which corresponds to a display screen, see FIG. 9 above.
- the optical imaging component can be fixed in the lens barrel, and is connected with the virtual image position adjustment component through components such as a cam or a lead screw.
- the optical imaging component includes Alvarez lenses.
- FIG. 10 is a schematic structural diagram of still another optical imaging assembly provided by the present application.
- the optical imaging assembly includes an Alvarez lens.
- the Alvarez lens includes two or more refractive lenses (or called free-form surface lenses). Among them, every two refractive lenses are a group, which can be called a refractive lens group.
- FIG. 10 takes the example of the Alvarez lens including the refractive lens 1 and the refractive lens 2 as an example.
- the optical imaging component includes a Moiré lens.
- FIG. 11 is a schematic structural diagram of still another optical imaging assembly provided by the present application.
- the optical imaging assembly includes a Moiré lens, which may include a cascade of two or more diffractive optical elements.
- the Moiré lens in FIG. 11 takes the cascading of the diffractive optical element 1 and the diffractive optical element 2 as an example.
- the optical imaging component is a liquid crystal lens.
- FIG. 12a is a schematic structural diagram of a liquid crystal lens provided by the present application.
- the liquid crystal lens is an ordinary liquid crystal lens, which can change the direction of the long axis of the liquid crystal molecules by changing the form of the applied electric field, generating optical and dielectric anisotropy; a tunable refractive index is thus obtained, which changes the equivalent phase of the liquid crystal lens and thereby its focal length.
- the equivalent phase of the liquid crystal lens may be the phase of a common lens realized by applying a voltage signal or a current signal, or may be the phase of a Fresnel lens.
- FIG. 12b is a schematic structural diagram of another liquid crystal lens provided by the present application.
- the liquid crystal lens can also be a reflective liquid crystal on silicon (LCOS) lens, which can change the direction of the long axis of the liquid crystal molecules by changing the applied voltage signal or current signal, so as to change the refractive index of the light passing through it, thereby changing the focal length of the liquid crystal lens.
- FIG. 12c is a schematic structural diagram of another liquid crystal lens provided by the present application.
- the liquid crystal lens may also be a liquid crystal geometric phase (Pancharatnam-Berry, PB) lens, whose lens function is generated based on the geometric phase.
- the zoom of the liquid crystal PB lens can be changed by changing the direction of the long axis of the liquid crystal molecules in the liquid crystal PB lens or the polarization state of the incident light entering the liquid crystal PB lens.
- the liquid crystal PB lens can be divided into two types: active type and passive type.
- the active liquid crystal PB lens is mainly made of liquid crystal material in the liquid crystal state.
- the liquid crystal material in the liquid crystal state has fluidity, and the direction of the long axis of the liquid crystal molecule can be changed by applying a voltage signal or a current signal to achieve zooming.
- the passive liquid crystal PB lens has better thermal stability and higher resolution.
- the passive liquid crystal PB lens is mainly composed of liquid crystal polymer materials, which can be polymerized by exposure and other methods to form a solid polymer (Polymer), and can achieve zoom by changing the polarization state of the incident light.
- for example, for a passive liquid crystal PB lens, the focal length for left-handed circularly polarized light is 1 m, and the focal length for right-handed circularly polarized light is -1 m, as shown in Figure 13a.
- the polarization state of the incident light can be changed by using an electronically controlled half-wave plate or an electronically controlled twisted nematic liquid crystal (TNLC), as shown in Figure 13b.
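The two-state switching described above can be sketched as follows. The +-1 m focal lengths come from the Figure 13a example; the function names and the modeling of the half-wave plate as a simple handedness flip are illustrative assumptions.

```python
# Sketch of the passive PB-lens behavior: the focal length flips sign with
# the handedness of the incident circular polarization (+1 m left-handed,
# -1 m right-handed per Figure 13a), and an electronically controlled
# half-wave plate toggles that handedness to switch focal states.
def pb_lens_power_d(left_handed: bool) -> float:
    """Optical power (diopters) of the PB lens for each polarization."""
    return 1.0 if left_handed else -1.0   # f = +1 m or f = -1 m

def toggled(left_handed: bool, half_wave_on: bool) -> bool:
    """A half-wave plate flips circular-polarization handedness when on."""
    return (not left_handed) if half_wave_on else left_handed

incident_left = True
for hwp_on in (False, True):
    print(pb_lens_power_d(toggled(incident_left, hwp_on)))  # 1.0 then -1.0
```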
- the optical imaging component is a liquid lens.
- FIG. 14a is a schematic structural diagram of a liquid lens provided by the present application.
- the liquid lens can change the shape of its membrane by changing the applied voltage signal or current signal while liquid is injected into or drawn out of the lens, thereby changing the focal length of the liquid lens.
- FIG. 14b is a schematic structural diagram of another liquid lens provided by the present application.
- the liquid lens can use the electrowetting principle: changing the applied voltage signal or current signal changes the surface shape of the interface between two immiscible liquids, thereby changing the focal length of the liquid lens.
- the optical imaging component is a deformable mirror.
- FIG. 15 is a schematic structural diagram of a deformable mirror provided by the present application.
- the deformable mirror can be a discrete or continuous micro-reflecting surface that uses electrostatic force or electromagnetic force to drive the deformation or displacement of the micro-reflecting surface; different reflecting surface shapes are realized by adjusting the voltage signal or current signal of the discrete electrodes, so that zoom can be achieved.
- the reflective surface can be a concave mirror, and the curvature of the concave mirror can be adjusted by a voltage signal or a current signal, and the focal lengths of the concave mirrors with different curvatures are different.
- other, more computation-based optical structures, such as computational-display digital zoom and holographic display, can also be used to adjust the virtual image position, which is not limited in this application.
- a cylindrical lens and a rotating drive assembly are required, and the rotating drive assembly is used to change the optical axis of the cylindrical lens.
- the cylindrical lens may be located between the above-mentioned optical imaging assembly and the display assembly, or located on the side of the optical imaging assembly away from the display assembly, that is, between the optical imaging assembly and the human eye.
- with optical imaging components of the various structures described above, it is possible to form a virtual image at the target position.
- the optical path for forming the virtual image can be referred to the optical path of FIG. 2b above.
- the virtual image position adjustment component can be used to adjust the optical imaging component and/or the display component to adjust the virtual image to the target position.
- the two cases are introduced separately.
- the virtual image position adjusting assembly adjusts the optical imaging assembly and/or the display assembly through mechanical adjustment.
- the virtual image position adjustment component may adjust the virtual image to the target position by driving the optical imaging component and/or the display component to move.
- the virtual image position adjustment assembly can be used to move the display assembly while the optical imaging assembly does not move, see FIG. 16a; or, it can be used to move the optical imaging assembly while the display assembly does not move, see FIG. 16b; or, it can be used to move both the display assembly and the optical imaging assembly, see FIG. 16c.
- the optical imaging assemblies in Figures 16a, 16b and 16c are exemplified by lenses.
- adjusting the optical imaging assembly and/or the display assembly in a mechanical adjustment manner can be further divided into an automatic adjustment mode and a manual adjustment mode.
- the virtual image position adjustment component may include a driving component; the driving component is used to drive the optical imaging component and/or the display component to move, and adjust the virtual image to the target position.
- the driving component may drive the display component and/or the optical imaging component to move according to the received distance of the display component and/or the optical imaging component to be moved, and adjust the virtual image to the target location.
- the virtual image position adjustment assembly may include a drive assembly and a position sensing assembly.
- the position sensing component is used to determine the position of the optical imaging component and/or the display component.
- the position sensing assembly may send the determined position of the optical imaging assembly and/or the display assembly to the control assembly.
- the control component may determine the first distance between the display component and the optical imaging component according to the position of the optical imaging component and/or the display component, determine from the first distance the distance that the optical imaging component and/or the display component needs to move, and send the to-be-moved distance to the drive component.
- the distance to be moved may be carried in a control command sent by the control assembly to the drive assembly.
- the position sensing component is used to determine the first distance between the optical imaging component and the display component, and send the first distance to the control component.
- the control component may determine the to-be-moved distance of the optical imaging component and/or the display component according to the first distance and the target position of the virtual image, and send the to-be-moved distance to the drive component.
- the distance to be moved may be carried in a control command sent by the control assembly to the drive assembly.
- the driving component is configured to drive the optical imaging component and/or the display component to move according to the distance to be moved, so as to adjust the virtual image to the target position.
- the drive assembly may be a motor and a transmission element.
- the motor can be used to drive the transmission element to rotate; the transmission element can be used to drive the movement of the display assembly and/or the optical imaging assembly under the action of the motor.
- the motor can be functionally divided into open-loop motors and closed-loop motors.
- Open loop and closed loop are two concepts in automatic control.
- In an open-loop motor, the input is a current signal and the output is a displacement; there is no feedback control, so it is called open loop.
- the closed loop motor can use the closed loop system to precisely adjust the optical imaging assembly and/or the display assembly through the feedback of the position.
- a closed-loop motor is usually equipped with a position sensor at the carrier of the optical imaging component. Taking a Hall sensor as an example, the Hall chip senses the magnetic flux of the surrounding magnets, and the actual position of the optical imaging component is calculated from it. With the Hall chip, motor control changes from "input a current signal, output a displacement" to "input a displacement, output a displacement": the motor can continuously adjust its position according to the feedback of the Hall chip.
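- The "input displacement, output displacement" behavior can be sketched as a simple feedback loop (a minimal illustration, not the actual driver logic; the toy plant, overshoot gain and tolerance values are assumptions):

```python
def closed_loop_move(target_um, read_hall_position, drive, tol_um=1.0, max_iter=100):
    """Iteratively drive the motor until the Hall-sensed position is within
    tolerance of the target: input a displacement, output a displacement."""
    for _ in range(max_iter):
        error = target_um - read_hall_position()
        if abs(error) <= tol_um:
            return True
        drive(error)  # command a corrective move equal to the sensed error
    return False

# Toy plant: the motor overshoots each commanded step by 10%; the loop still
# converges because it re-reads the Hall sensor on every iteration.
state = {"pos": 0.0}
ok = closed_loop_move(
    200.0,
    read_hall_position=lambda: state["pos"],
    drive=lambda step: state.__setitem__("pos", state["pos"] + 1.1 * step),
)
print(ok, round(state["pos"], 1))  # True 200.2
```

An open-loop motor, by contrast, would issue a single current command and stop, leaving the 10% overshoot uncorrected.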
- the motor may be, for example, a stepping motor, a DC motor, a silent motor, a servo motor, a voice coil motor, or the like.
- the servo motor is a closed-loop motor. Stepper motors, DC motors, silent motors, and voice coil motors are typically open loop motors. Stepper motor and silent motor can improve driving precision.
- the silent motor is, for example, an ultrasonic motor (USM).
- the ultrasonic motor drives a piezoelectric material with ultrasonic signals to deform it; the deformation of the piezoelectric material is then transmitted to a rotor or rotating ring through friction and mechanical motion, thereby producing rotational motion.
- There are two types of ultrasonic motors. One is the ring USM, which can directly drive the outside of the lens barrel without a reduction gear, but it limits the diameter of the lens barrel.
- the other is the micro USM, which requires a transmission element to drive the structure that fixes the optical imaging component (such as a lens barrel or a retaining ring), but it is smaller and does not limit the diameter of the lens barrel.
- Using a USM can reduce noise, and it offers high speed, large torque, and a wide operating temperature range.
- the voice coil motor is also called a VCM (voice coil motor).
- its main working principle is that, in a permanent magnetic field, changing the DC current signal of the coil in the voice coil motor controls the stretched position of the spring plate in the voice coil motor, thereby driving the movement of the object fixed to it.
- the voice coil motor itself does not know when a movement should start or where it should end, and needs a driving chip to process and control it.
- the driving chip receives a control command (eg, the first control command or the second control command or the third control command below) from the control component, and outputs a current signal to the voice coil motor, thereby driving the voice coil motor to move.
- a voice coil motor using a position sensor knows where the coil is.
- the transmission element may be, for example, a lead screw, a screw, a gear or a cam cylinder, and the like.
- a lead screw, such as a ball screw, converts rotary motion into linear motion, and vice versa.
- the lead screw has high precision, reversibility and high efficiency.
- the position sensing component may be a triangulation ranging lidar (refer to the introduction of the triangular ranging lidar), or a position encoder.
- the position encoder can be, for example, a grating scale, a magnetic encoder.
- Position encoders can convert angular displacement into electrical signals (e.g., angle encoders), or convert linear displacement into electrical signals.
- the following takes as an example a virtual image position adjustment assembly used to move the optical imaging assembly, where the optical imaging assembly adopts the above structure 2 and the position sensing component is a triangulation ranging lidar.
- FIG. 17a is a schematic structural diagram of a display module provided by the present application.
- the triangular ranging lidar can be fixed with the display component, for example, the triangular ranging lidar can be fixed on the substrate where the display component is located; the first semi-transparent mirror can be fixed by a snap ring.
- the triangulation ranging lidar can be used to emit a laser beam to the first face of the snap ring, and the first face of the snap ring can be used to reflect the laser beam.
- the triangulation ranging lidar can determine the first distance between the first half mirror and the display assembly.
- For the specific measurement principle, reference may be made to the introduction of FIG. 1b, which will not be repeated here.
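- The underlying similar-triangles relation of laser triangulation can be sketched as follows (the formula is the textbook one; the baseline, focal length and offset values are made-up examples, not parameters of this display module):

```python
def triangulation_distance(baseline_mm, focal_mm, spot_offset_mm):
    """Textbook laser-triangulation relation (similar triangles): the farther
    the reflecting surface, the smaller the offset of the imaged laser spot
    on the detector, so d = baseline * focal / offset."""
    return baseline_mm * focal_mm / spot_offset_mm

# Made-up example: 20 mm baseline, 4 mm imaging focal length, 2 mm spot offset.
d_mm = triangulation_distance(20.0, 4.0, 2.0)
print(d_mm)  # 40.0
```

The lidar reads the spot offset on its detector and reports the recovered distance as the first distance.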
- the triangulation ranging lidar may send position information to the control component, where the position information includes the first distance between the display assembly and the first half mirror measured by the triangulation ranging lidar.
- the control assembly may be configured to receive position information from the triangulation ranging lidar, where the position information is used to indicate the first distance between the display assembly and the first half mirror.
- the control component can determine the to-be-moved distance of the first semi-transparent mirror according to the position information and the target position of the virtual image, generate a first control command according to the to-be-moved distance, and send the first control command to the drive component; the first control command is used to instruct the drive assembly to drive the snap ring to move, thereby driving the first half mirror to move along the direction of the main optical axis.
- the control component may be configured to determine the to-be-moved distance of the first half mirror according to the corresponding relationship between the first distance and the position of the virtual image.
- the control component can be used to determine the position of the virtual image according to the distance A (i.e., the first distance) between the display component and the optical imaging component carried in the position information and the correspondence between the first distance and the position of the virtual image (such as Table 3).
- the control component may read this correspondence from the memory after receiving the first distance.
- the first control command may include the distance S to be moved by the first half mirror.
- the driving component may be configured to drive the snap ring to move the distance S according to the received first control command.
- the snap ring can drive the first half mirror to move the distance S, so as to adjust the virtual image to the target position.
- the position sensing component can measure the actual distance Y between the optical imaging component and the display component again. That is, the position sensing assembly can measure the positions of the optical imaging assembly and the display assembly in real time, so as to determine whether the virtual image is formed at the target position. Further, the position sensing component can be used to send this actual distance Y to the control component. The control assembly can be used to determine whether the optical imaging assembly needs to be further adjusted according to the theoretical distance X and the actual distance Y.
- the theoretical distance between the optical imaging component and the display component is X, but due to the driving error of the driving component (refer to the related description below), the actual distance Y between the optical imaging component and the display component may differ from X.
- if X and Y are equal, the position sensing component can be used to feed back a first indication signal to the control component, and the first indication signal is used to indicate that no further adjustment is required; if X and Y are not equal, the position sensing component can be used to feed back a third control command to the control component, and the third control command can include the distance still to be moved.
- the control assembly can be configured to send the third control command to the drive assembly after receiving it.
- the driving component can be used to drive the first half mirror to move further according to the third control command.
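- The measure-compare-correct sequence described above can be sketched as follows (function names, the tolerance and the toy measurement values are illustrative assumptions, not part of this application):

```python
def verify_and_correct(x_mm, measure_actual, drive_mirror, tol_mm=0.01):
    """X is the theoretical lens-to-display distance; Y is the re-measured
    actual distance.  If they differ, issue a corrective move (the 'third
    control command' carrying the remaining distance)."""
    y = measure_actual()                   # position sensing re-measures Y
    if abs(x_mm - y) <= tol_mm:
        return "first indication signal"   # no further adjustment required
    drive_mirror(x_mm - y)                 # corrective move over |X - Y|
    return "third control command"

# Toy setup: the first move left the mirror 0.05 mm short of theory.
state = {"y": 24.95}
result = verify_and_correct(
    25.0,
    measure_actual=lambda: state["y"],
    drive_mirror=lambda d: state.__setitem__("y", state["y"] + d),
)
print(result, state["y"])  # third control command 25.0
```

One corrective pass closes the residual driving error; in practice the loop can repeat until the tolerance is met.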
- the position sensing component is a position encoder; FIG. 17b is a schematic structural diagram of another display module provided by the present application.
- the position encoder can be fixed on the substrate where the display component is located, the optical imaging component can be fixed by the lens barrel, the lens barrel is fixed with the sliding component, and the sliding component can drive the first semi-transparent mirror to move when moving.
- the position encoder can determine the position of the first half mirror by measuring the position of the sliding assembly.
- the sliding component may be a sliding block.
- the position encoder may send position information to the control assembly, the position information including the position of the first half mirror measured by the position encoder.
- the control assembly can be configured to receive position information from the position encoder, where the position information is used to indicate the position of the first half mirror; determine the first distance between the display assembly and the first half mirror according to the position information; determine the to-be-moved distance of the first half mirror according to the first distance and the target position of the virtual image; and generate a first control command according to the to-be-moved distance and send it to the drive assembly. The first control command is used to instruct the driving component to drive the transmission element to rotate, so as to drive the sliding component to move, thereby driving the first half mirror to move.
- the control component may be configured to determine the to-be-moved distance of the first half mirror according to the corresponding relationship between the first distance and the position of the virtual image. It should be understood that the first half mirror moves along the direction of its main optical axis.
- the driving component and the position sensing component can be integrated together, as shown in FIG. 17c.
- when the optical imaging component moves by a distance Δd, the virtual image formed by the optical imaging component of the image displayed by the display component moves by a distance Δz, as shown in FIG. 17d.
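- The Δd-to-Δz relation can be illustrated with a simple thin-lens magnifier model (a sketch under the assumption of an ideal thin lens; the focal length and distances below are made-up example values, not parameters from the application):

```python
def virtual_image_distance(f_mm, u_mm):
    """Thin-lens magnifier model: a display placed at distance u inside the
    focal length f of a converging lens forms a virtual image at
    v = u*f / (f - u) in front of the lens (valid for u < f)."""
    assert u_mm < f_mm, "display must sit inside the focal length"
    return u_mm * f_mm / (f_mm - u_mm)

# Moving the lens (or display) by a small delta-d shifts the virtual image
# by a much larger delta-z -- the lever exploited by the adjustment assembly.
f = 50.0
v1 = virtual_image_distance(f, 45.0)  # display 45 mm from the lens
v2 = virtual_image_distance(f, 48.0)  # after a 3 mm relative move
print(v1, v2)  # 450.0 1200.0
```

Here a 3 mm change in Δd moves the virtual image by 750 mm, which is why fine position sensing of the lens-to-display distance matters.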
- the optical imaging assembly is represented by a half mirror in FIG. 17d.
- the position sensing component can be used to determine, in the direction perpendicular to the main optical axis of the refractive lenses (i.e., the horizontal direction shown in FIG. 10), the first distance between the two refractive lenses (e.g., the distance between the centers of refractive lens 1 and refractive lens 2). Further, optionally, the position sensing assembly may send position information to the control assembly, the position information including the first distance between the two refractive lenses measured by the position sensing assembly.
- the control assembly is operable to receive position information from the position sensing assembly, the position information being used to indicate the first distance between the two refractive lenses.
- the control component can determine the to-be-moved distance of the two refractive lenses according to the position information and the target position of the virtual image, generate a first control command according to the to-be-moved distance, and send the first control command to the driving component; the first control command is used to instruct the driving component to drive at least one of the two refractive lenses to move in a direction perpendicular to the optical axis of the refractive lenses.
- the driving component can be used to drive at least one of the two refractive lenses to move in a direction perpendicular to the optical axis of the refractive lenses according to the received first control instruction.
- the position sensing assembly is used to determine the relative angle of the diffractive optical element 1 and the diffractive optical element 2 respectively. Further, optionally, the position sensing assembly can send position information to the control assembly, where the position information includes the relative angle of the diffractive optical element 1 and the diffractive optical element 2 . Accordingly, the control assembly can be used to receive position information from the position sensing assembly, the position information being used to indicate the relative angle of the diffractive optical element 1 and the diffractive optical element 2 .
- the control component can determine the to-be-rotated angle of the two diffractive optical elements according to the position information and the target position of the virtual image, generate a first control command according to the to-be-rotated angle, and send the first control command to the drive component, where the first control command is used to instruct the driving assembly to drive the diffractive optical element 1 and the diffractive optical element 2 to rotate in opposite directions, or to drive one of them to rotate.
- the driving component can be used to drive the diffractive optical element 1 and the diffractive optical element 2 to rotate in opposite directions, or drive one of the diffractive optical element 1 and the diffractive optical element 2 to rotate according to the received first control instruction.
- the control assembly may be configured to determine the to-be-rotated angle according to the corresponding relationship between the relative angle and the position of the virtual image.
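- For rotation-tuned diffractive pairs of this kind, the combined optical power is often modeled, to first order, as proportional to the relative rotation angle. The sketch below assumes such a linear model with a made-up sensitivity value; it only illustrates how a to-be-rotated angle could be derived from a target power, and is not the correspondence table used by this application:

```python
def moire_power_diopters(relative_angle_deg, sensitivity_d_per_deg=0.05):
    # Assumed linear first-order model: combined power grows with the
    # relative rotation angle of the two diffractive optical elements.
    return sensitivity_d_per_deg * relative_angle_deg

def angle_for_target_power(target_d, sensitivity_d_per_deg=0.05):
    # Invert the linear model to obtain the to-be-rotated angle.
    return target_d / sensitivity_d_per_deg

theta_deg = angle_for_target_power(1.0)  # angle needed for a 1D power change
print(theta_deg, moire_power_diopters(theta_deg))
```

With the assumed sensitivity of 0.05 D/deg, a 1 D power change corresponds to a 20-degree relative rotation.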
- the distance to be moved or the angle to be rotated of the optical imaging assembly and/or the display assembly can be obtained through simulation in advance and stored in the memory of the display module, or in an external memory that can be accessed by the display module.
- the virtual image position adjustment component has a certain adjustment precision and adjustment range when adjusting the optical imaging component and/or the display component.
- the adjustment precision and adjustment range of the virtual image position adjustment component are described in detail below.
- the adjustment range of the virtual image position adjustment component is determined according to the driving range of the driving component and the measurement range of the position sensing component. Further, optionally, both the driving range of the driving component and the measurement range of the position sensing component are related to the optical parameters of the optical imaging component.
- the adjustment accuracy of the virtual image position adjustment component is determined according to the driving error of the driving component and the position measurement error of the position sensing component. Further, optionally, both the driving error of the driving component and the position measurement error of the position sensing component are related to the optical parameters of the optical imaging component.
- In order to ensure that the adjustment accuracy of the virtual image position adjustment assembly is not greater than 0.2D, the drive error of the drive assembly should satisfy a corresponding constraint; further, optionally, in order to ensure that the adjustment accuracy is not greater than 0.1D, the drive error should satisfy a stricter constraint. Similarly, in order to ensure that the adjustment accuracy of the position of the virtual image is not greater than 0.2D, the position measurement error of the position sensing component should satisfy a corresponding constraint; further, optionally, for an accuracy not greater than 0.1D, the position measurement error should satisfy a stricter constraint.
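- The diopter-based accuracy targets above map to positional tolerances through the vergence relation D = 1/z. The following minimal sketch (function names and the sample distances are illustrative, not values from the application) shows how a 0.2D accuracy band translates into an allowed interval of virtual image distances:

```python
def diopters(z_m):
    # Vergence of a virtual image at distance z meters: D = 1/z.
    return 1.0 / z_m

def distance_m(d):
    return 1.0 / d

# The same +/-0.1D band (0.2D total) allows very different positional
# errors depending on where the virtual image sits:
for z in (0.5, 1.0, 2.0):
    d = diopters(z)
    near, far = distance_m(d + 0.1), distance_m(d - 0.1)
    print(f"target {z} m: image may land between {near:.3f} m and {far:.3f} m")
```

The tolerance widens rapidly with distance, which is why the drive and measurement error budgets depend on the optical parameters of the imaging component.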
- the cam focusing mechanism may include a first knob, and the first knob is used to select a preset scene type to which the first object belongs.
- the preset scene types in Figure 18a are four examples: office scene type, conference scene type, interactive game scene type and video scene type. The user can rotate the pointer to a certain position with the first knob, and the object indicated by the pointer is the selected preset scene type.
- the cam focusing mechanism may further include a guide column (or a guide cylinder), please refer to Fig. 18b.
- the first knob can be connected to one end of the guide post (or guide cylinder) through a mechanical structure, and the other end of the guide post (or guide cylinder) is connected to the optical imaging assembly.
- the guide column (or guide cylinder) can be driven to move the optical imaging assembly, so that the virtual image is formed at the target position.
- the cam focusing mechanism may further include a second knob, which is used to adjust the visual acuity parameter; please refer to Figure 18c. The second knob can also be marked with a corresponding scale, and the scale identifies the visual acuity parameter.
- scale marks 1 to 7 indicate 100 to 700 degrees.
- the cam focusing mechanism is used to select the preset scene type to which the first object belongs, set the vision parameters, and drive the optical imaging component to move.
- since a manual adjustment mechanism is adopted, driving by a driving component (such as a motor) is not required, which helps to reduce the cost of the display module.
- the optical imaging assembly includes a zoom lens, for example, the zoom lens exemplified in the above-mentioned structures eight to ten.
- the virtual image position adjustment component includes a driving component, and the driving component is configured to change the voltage signal or current signal applied to the zoom lens to change the focal length of the zoom lens, thereby adjusting the virtual image to the target position.
- the virtual image position adjustment assembly may include a drive assembly and a position sensing assembly.
- the position sensing component can be used to determine the first focal length of the zoom lens, and the first focal length is used to determine the focal length of the zoom lens to be adjusted.
- the driving component can be configured to change the voltage signal or the current signal applied to the zoom lens according to the focal length to be adjusted, so as to adjust the virtual image to the target position. It should be understood that the first focal length of the zoom lens includes the current focal length of the zoom lens.
- the virtual image position adjustment component can change the voltage signal or current signal applied to the zoom lens; by changing the focal length of the zoom lens, the virtual image can be adjusted to the target position. It should be understood that the relationship between the focal length of the zoom lens and the voltage signal or current signal may be determined by the control assembly.
- the virtual image position adjustment component may be an electronically controlled half-wave plate or TNLC.
- the electronically controlled half-wave plate or TNLC can change the focal length of the zoom lens by changing the polarization state of the incident light, so that the virtual image can be adjusted to the target position.
- the relationship between the focal length of the zoom lens and the polarization state of the incident light may be determined by the control assembly.
- the virtual image position adjustment assembly may include a drive assembly and a position sensing assembly.
- the driving component is a set of circuit boards that can generate a specific voltage signal or current signal
- the position sensing component is another set of circuit boards that can be used to measure the voltage signal or current signal applied to the optical imaging component.
- the driving assembly can change the focal length of the zoom lens by changing the electrostatic force or electromagnetic force applied to the zoom lens, so that the virtual image can be adjusted to the target position. It should be understood that the relationship between the focal length of the zoom lens and the electrostatic force (or electromagnetic force) may be determined by the control assembly.
- in this way, the user can clearly view the image displayed by the display component, and the vergence-accommodation conflict can be reduced.
- the display module may further include a control component.
- the control component may be, for example, a processor, a microprocessor, or a controller, such as a general-purpose central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
- In a possible implementation manner, for the functions performed by the control component, reference may be made to the foregoing related description, which will not be repeated here.
- the position of the virtual image may be determined, under normal vision, according to the image displayed by the display component or the first object selected on the first interface; further, the target position of the virtual image may be determined according to the acquired vision parameters. It can also be understood that the position of the virtual image is first adjusted according to the image displayed by the display component or the first object selected on the first interface, and then the virtual image is finely adjusted to the target position according to the vision parameters.
- Implementation mode a: the display module does not have a vision adjustment function, which can provide a larger eye relief, and the user can wear glasses to use the display module.
- Implementation mode b: the display module does not have a vision adjustment function, but provides a suitable space for the user to place customized lenses, such as myopia lenses of different degrees.
- the display module can realize myopia adjustment by using a passive liquid crystal PB lens.
- the adjustment of myopia requires a power of 7D
- the zoom lens needs to provide a total zoom capability of 11D (that is, the adjustment range)
- if the adjustment accuracy of the virtual image surface is 0.25D, 44 virtual image positions need to be provided, corresponding to 6 passive liquid crystal PB lenses.
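- The lens count can be sanity-checked: each passive PB lens is a binary (two-state) element, so n stacked lenses give 2^n combinations, and 6 lenses are the fewest whose combinations cover 44 positions. A minimal sketch of this arithmetic:

```python
import math

total_range_d = 11.0   # required total zoom capability in diopters
step_d = 0.25          # adjustment accuracy of the virtual image surface

positions = int(total_range_d / step_d)   # 44 distinct virtual image positions
# A stack of n two-state PB lenses offers 2**n focal combinations; we need
# the smallest n with 2**n >= positions.
lenses = math.ceil(math.log2(positions))
print(positions, lenses)  # 44 6
```

Five lenses would give only 32 combinations, while six give 64, which covers the 44 required positions.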
- the control assembly may be integrated on the display module, that is, the control assembly and the display module form an integrated device; alternatively, the control assembly of the terminal device where the display module is located may be used instead.
- the display module may include a control component and a memory, and may be called an all-in-one machine.
- the display module may not include control components and memory, and may be called a split machine.
- the display module includes neither a control component nor a memory, but includes a micro processing unit; it may also be called a split machine.
- FIG. 19 is a schematic structural diagram of another display module provided by the present application.
- the display module includes a display component 1901 , an optical imaging component 1902 , a virtual image position adjustment component 1903 and a control component 1904 .
- for the display component 1901, the optical imaging component 1902, the virtual image position adjustment component 1903, and the control component 1904, reference may be made to the foregoing related descriptions, which will not be repeated here.
- the present application can also provide a head-mounted display device, and the head-mounted display device can include a control assembly and the display module in any of the above embodiments. It can be understood that the head-mounted display device may also include other devices, such as wireless communication devices, sensors, and memory.
- the present application provides a method for adjusting the position of a virtual image, please refer to the introduction of FIG. 20 and FIG. 21 .
- the method for adjusting the position of the virtual image can be applied to the display module shown in any of the above embodiments in FIG. 3 to FIG. 19 . It can also be understood that the method for adjusting the position of the virtual image can be implemented based on the display module shown in any of the above embodiments in FIGS. 3 to 19 .
- the target position of the virtual image is determined based on the preset scene type to which the displayed image belongs, or based on the preset scene type to which the object selected by the user belongs, respectively.
- the position of the virtual image is adaptively adjusted based on the preset scene type to which the image belongs.
- FIG. 20 is a schematic flowchart of a method for adjusting the position of a virtual image provided by the present application. The method includes the following steps:
- Step 2001, acquiring an image displayed by a display component.
- Step 2002, acquiring the target position of the virtual image corresponding to the image.
- the target position of the virtual image is related to the preset scene type to which the image belongs. For details, please refer to the foregoing related description, which will not be repeated here.
- Step 2003, controlling the virtual image position adjustment component to adjust the optical imaging component and/or the display component to form a virtual image at the target position.
- the above steps 2001 to 2003 may be executed by a control component in the display module.
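- Steps 2001 to 2003 can be sketched as follows (the scene names match the examples in this application, but the target distances, dictionary and function names are illustrative assumptions):

```python
# Hypothetical mapping from preset scene type to virtual-image target
# position; the distances are illustrative, not values from the application.
SCENE_TO_TARGET_M = {
    "office": 0.7,
    "conference": 1.5,
    "interactive game": 2.0,
    "video": 3.0,
}

def adjust_virtual_image(scene_type, move_to_target):
    target_m = SCENE_TO_TARGET_M[scene_type]  # step 2002: look up the target
    move_to_target(target_m)                  # step 2003: drive the assembly
    return target_m

commands = []
t = adjust_virtual_image("video", commands.append)  # scene from step 2001
print(t, commands)  # 3.0 [3.0]
```

The control component plays the role of `adjust_virtual_image`, while `move_to_target` stands in for the command sent to the virtual image position adjustment component.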
- the display module to which the method for adjusting the position of the virtual image shown in FIG. 20 is applied includes a control component.
- FIG. 21 is a schematic flowchart of another method for adjusting the position of a virtual image provided by the present application. The method includes the following steps:
- Step 2101 displaying a first interface.
- this step 2101 may be performed by the display component in the display module.
- for details about the display component displaying the first interface, please refer to the above-mentioned related description, which will not be repeated here.
- Step 2102 When the user selects the first object in the first interface, acquire the target position of the virtual image corresponding to the first object.
- the target position of the virtual image is related to the preset scene type to which the first object belongs, and reference may be made to the foregoing related description, which will not be repeated here.
- two ways of acquiring the target position corresponding to the first object may be exemplarily shown based on whether the head mounted display device includes a control component.
- in one case, the head-mounted display device includes a control component.
- acquiring the target position corresponding to the first object may include the following steps:
- Step A the control component acquires the second preset scene type to which the first object belongs.
- the control component may receive, from the terminal device, the second preset scene type to which the first object belongs; alternatively, the control component may itself determine the second preset scene type to which the first object belongs.
- Step B the control component acquires the corresponding relationship between the preset scene type and the position of the virtual image.
- for step B, reference may be made to the related introduction of step b in FIG. 22 below, and details will not be repeated here.
- Step C the control component determines the target position corresponding to the second preset scene type according to the corresponding relationship between the preset scene type and the position of the virtual image.
- the position corresponding to the second preset scene type may be found from the correspondence between the preset scene type and the position of the virtual image, which is the target position.
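- the steps A to C above amount to a simple table lookup. The following Python sketch is illustrative only: the scene-type names and diopter values are hypothetical placeholders, not values defined by this application.

```python
# Hypothetical correspondence between preset scene types and virtual-image
# positions, expressed in diopters (reciprocal meters) from the optical
# imaging component. Names and values are placeholders for illustration.
SCENE_TO_POSITION_D = {
    "conference": 0.583,
    "video": 0.5,
}

def target_position_for(scene_type: str) -> float:
    """Step C: find the position corresponding to the given preset scene type."""
    try:
        return SCENE_TO_POSITION_D[scene_type]
    except KeyError:
        raise ValueError(f"no position configured for scene type {scene_type!r}")

print(target_position_for("conference"))  # 0.583
```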
- in another case, the head-mounted display device does not include the control component.
- the head-mounted display device can receive the target position of the virtual image corresponding to the first object sent by the terminal device.
- for how the terminal device determines the target position of the virtual image corresponding to the image, reference may be made to the related introduction in FIG. 24 below, which will not be repeated here.
- Step 2103 for the image displayed by the display component triggered by the selection of the first object, control the virtual image position adjustment component to adjust the optical imaging component and/or the display component to form a virtual image at the target position.
- for step 2103, reference may be made to the foregoing description of the adjustment of the optical imaging component and/or the display component, which will not be repeated here. It should be noted that this step 2103 may be executed by the control component of the display module, or may also be executed by the terminal device.
- the present application provides yet another method for adjusting the position of a virtual image, please refer to the introduction of FIG. 22 and FIG. 23 .
- the method for adjusting the position of the virtual image can be applied to a head-mounted display device.
- the following is an introduction based on the above-mentioned case A and case B, respectively.
- the present application provides a method for adjusting the position of a virtual image, please refer to the introduction of FIG. 22 .
- the method for adjusting the position of the virtual image can be applied to a head-mounted display device.
- FIG. 22 is a schematic flowchart of a method for adjusting the position of a virtual image provided by the present application. The method includes the following steps:
- Step 2201 Acquire an image displayed by the head-mounted display device.
- it may be an image received from the terminal device, or it may be an image sent by a projection system in the head-mounted display device.
- Step 2202 acquiring the target position of the virtual image corresponding to the image.
- the target position of the virtual image is related to the preset scene type to which the image belongs.
- when the preset scene types to which the image belongs are different, the target positions at which the head-mounted display device presents the virtual image are different. For example, when the preset scene type to which the image belongs is a conference scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging component is 0.583D; or, when the image belongs to another preset scene type, the distance between the target position at which the virtual image is presented and the optical imaging component is 1D; or, when the preset scene type to which the image belongs is a video scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging component is 0.5D.
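- the distances above are expressed in diopters (D), i.e. reciprocal meters, so a larger diopter value means a nearer virtual image. A minimal conversion sketch:

```python
def diopters_to_meters(d: float) -> float:
    """Convert a vergence value in diopters to a distance in meters (1/D)."""
    if d <= 0:
        raise ValueError("diopter value must be positive for a finite distance")
    return 1.0 / d

# 0.5D places the virtual image 2 m from the optical imaging component,
# while 1D places it 1 m away.
print(diopters_to_meters(0.5))  # 2.0
print(diopters_to_meters(1.0))  # 1.0
```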
- the preset scene type to which the image belongs may be the preset scene type to which the content of the image belongs. Alternatively, it may be the preset scene type to which the object corresponding to the image belongs; for example, when the object is an application, the image corresponding to the application can be understood as the image displayed after entering the application.
- based on whether the head-mounted display device includes a control component, two ways of acquiring the target position corresponding to the image are exemplarily shown.
- Mode A: the head-mounted display device includes a control component.
- acquiring the target position corresponding to the image may include the following steps:
- Step a the control component acquires the first preset scene type to which the image displayed by the head mounted display device belongs.
- the head-mounted display device may determine the first preset scene type to which the image belongs (for the specific determination process, please refer to the foregoing related description, which will not be repeated here).
- Step b the control component acquires the correspondence between the preset scene type and the position of the virtual image.
- the head-mounted display device may further include a memory, and the corresponding relationship between the preset scene type and the position of the virtual image may be stored in the memory of the head-mounted display device.
- the head-mounted display device may include a control component and a memory, that is, an all-in-one machine.
- the head mounted display device may also not include memory.
- the correspondence between the preset scene type and the position of the virtual image can be stored in a memory other than the head-mounted display device, for example, in the memory of the terminal device, and the head-mounted display device can obtain the correspondence between the preset scene type and the position of the virtual image from that memory.
- Step c the control component determines the target position corresponding to the first preset scene type according to the corresponding relationship between the preset scene type and the position of the virtual image.
- the position corresponding to the first preset scene type may be found from the correspondence between the preset scene type and the position of the virtual image, which is the target position.
- Mode B: the head-mounted display device does not include a control component.
- the target position of the virtual image corresponding to the image sent by the terminal device can be received.
- for the process of the terminal device determining the target position of the virtual image corresponding to the image, reference may be made to the related introduction in FIG. 24 below, which will not be repeated here.
- Step 2203 forming a virtual image of the image at the target position.
- step 2203 may be implemented by the control component in the head-mounted display device controlling the virtual image position adjustment component; alternatively, the terminal device may also control the virtual image position adjustment component.
- the head-mounted display device determines the to-be-moved distance of the display component and/or the optical imaging component.
- the head-mounted display device includes a display component and an optical imaging component.
- the first distance between the display component and the optical imaging component can be acquired; according to the first distance and the target position, the to-be-moved distance of the display component and/or the optical imaging component is determined; and then the display component and/or the optical imaging component is driven to move according to the to-be-moved distance, to adjust the virtual image to the target position.
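- the relation between the first distance, the target position, and the to-be-moved distance can be sketched with the thin-lens equation. This is an idealized single-lens simplification with a hypothetical focal length, not the specific optical design of this application:

```python
def required_display_distance(f_m: float, target_diopters: float) -> float:
    """For an ideal thin lens of focal length f_m (meters), return the
    display-to-lens distance u that forms a virtual image at target_diopters,
    using 1/v = 1/u - 1/f (display inside the focal length, u < f)."""
    return 1.0 / (target_diopters + 1.0 / f_m)

def to_be_moved(first_distance_m: float, f_m: float, target_diopters: float) -> float:
    """To-be-moved distance of the display component: required distance minus
    the acquired first distance (positive means move away from the lens)."""
    return required_display_distance(f_m, target_diopters) - first_distance_m
```

For example, with an assumed 20 mm lens, placing the virtual image at 1D requires a display-to-lens distance of 1/51 m, about 19.6 mm.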
- the head-mounted display device receives the to-be-moved distance of the display component and/or the optical imaging component sent by the terminal device.
- the head-mounted display device includes a display component and an optical imaging component. Specifically, it can receive the to-be-moved distance of the display component and/or the optical imaging component sent by the terminal device, and drive the display component and/or the optical imaging component to move according to the to-be-moved distance, to adjust the virtual image to the target position.
- for the to-be-moved distance of the display component and/or the optical imaging component determined by the terminal device, reference may be made to the related introduction in FIG. 24 below; for more detailed descriptions, refer to the foregoing related descriptions, which will not be repeated here.
- the head-mounted display device determines the focal length of the zoom lens to be adjusted.
- the head-mounted display device may include a display component and an optical imaging component, and the optical imaging component includes a zoom lens.
- the first focal length of the zoom lens can be determined first; the to-be-adjusted focal length of the zoom lens is determined according to the first focal length and the target position; and the voltage signal or current signal applied to the zoom lens is changed according to the to-be-adjusted focal length, to adjust the virtual image to the target position.
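- equivalently, when the display component stays fixed and only the zoom lens changes, the to-be-adjusted focal length follows from the same thin-lens relation. This is again an idealized sketch under assumed values, not the device's actual calibration:

```python
def focal_length_to_set(display_distance_m: float, target_diopters: float) -> float:
    """Required zoom-lens focal length (meters) so that a display at a fixed
    distance u forms a virtual image at target_diopters:
    1/v = 1/u - 1/f  =>  1/f = 1/u - 1/v."""
    power = 1.0 / display_distance_m - target_diopters  # lens power in diopters
    return 1.0 / power
```

With an assumed 20 mm display distance, targeting a 1D virtual image calls for a lens power of 49D, i.e. a focal length of about 20.4 mm.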
- the head-mounted display device receives the to-be-adjusted focal length of the zoom lens sent by the terminal device.
- the head-mounted display device includes a display component and an optical imaging component, and the optical imaging component includes a zoom lens; the to-be-adjusted focal length of the zoom lens can be received from the terminal device, and the voltage signal or current signal applied to the zoom lens is changed according to the to-be-adjusted focal length, to adjust the virtual image to the target position.
- the present application provides another method for adjusting the position of a virtual image, please refer to the introduction of FIG. 23 .
- the method for adjusting the position of the virtual image can be applied to a head-mounted display device.
- FIG. 23 is a schematic flowchart of another method for adjusting the position of a virtual image provided by the present application. The method includes the following steps:
- Step 2301 displaying a first interface.
- for step 2301, reference may be made to the introduction of the foregoing step 2101, and details are not repeated here.
- Step 2302 When the user selects the first object in the first interface, acquire the target position of the virtual image corresponding to the first object.
- the target position of the virtual image is related to the preset scene type to which the first object belongs.
- for step 2302, reference may be made to the relevant introduction of the foregoing step 2102, and details are not repeated here.
- Step 2303 for the image triggered to be displayed after the first object is selected, form a virtual image at the target position.
- for step 2303, reference may be made to the introduction of the aforementioned step 2203, and details are not repeated here.
- this step 2303 may be executed by the control component of the display module, or may also be executed by the terminal device.
- the terminal device can control the head-mounted display device to adjust the position of the virtual image.
- FIG. 24 another method for adjusting the position of a virtual image provided by the present application can be applied to a terminal device. The method may include the steps of:
- Step 2401 Determine the first preset scene type to which the image displayed by the head-mounted display device belongs.
- the image displayed by the head-mounted display device may be propagated by the terminal device to the head-mounted display device. It can also be understood that the terminal device may transmit a light beam carrying the image information to the head-mounted display device, so that the head-mounted display device displays an image.
- Step 2402 Obtain the correspondence between the preset scene type and the position of the virtual image.
- the terminal device can receive the correspondence between the preset scene type and the position of the virtual image sent by the head-mounted display device; that is, the terminal device can retrieve the correspondence from the head-mounted display device. If the correspondence between the preset scene type and the position of the virtual image is stored in the terminal device, the correspondence can be directly read from the memory of the terminal device. For the correspondence between the preset scene type and the position of the virtual image, reference may be made to the foregoing related description, which will not be repeated here.
- Step 2403 Determine a target position where the head mounted display device corresponding to the first preset scene type presents the virtual image according to the corresponding relationship between the preset scene type and the position of the virtual image.
- the target position of the virtual image is related to the preset scene type to which the image belongs.
- Step 2404 Control the head-mounted display device to form a virtual image at the target position according to the target position.
- Method 1.1 sending a first control instruction to the head-mounted display device.
- specifically, the first distance between the display component and the optical imaging component in the head-mounted display device is obtained; according to the first distance and the target position, the to-be-moved distance of the display component and/or the optical imaging component is determined; a first control instruction is generated according to the to-be-moved distance and sent to the head-mounted display device, where the first control instruction is used to control the movement of the display component and/or the optical imaging component to adjust the virtual image to the target position.
- the positions of the optical imaging component and/or the display component sent by the virtual image position adjustment component in the head-mounted display device can be received, and the first distance can be determined according to these positions (see FIGS. 17b and 17c); alternatively, the first distance between the display component and the optical imaging component may be directly determined (see FIG. 17a).
- Method 1.2 sending a second control instruction to the head-mounted display device.
- the first focal length of the optical imaging component in the head-mounted display device is acquired; the to-be-adjusted focal length of the optical imaging component is determined according to the first focal length and the target position; and the second control instruction is generated according to the to-be-adjusted focal length. The second control instruction is used to control the voltage signal or current signal applied to the optical imaging component, so as to adjust the focal length of the optical imaging component and adjust the virtual image to the target position.
- the terminal device can control the head-mounted display device to adjust the position of the virtual image.
- FIG. 25 another method for adjusting the position of a virtual image provided by the present application can be applied to a terminal device. The method may include the steps of:
- Step 2501 Acquire a first object selected by a user in a first interface displayed by the head-mounted display device.
- the head-mounted display device may send the identifier of the selected first object to the terminal device after detecting that the user selects the first object on the first interface.
- the identifier of the first object may be pre-agreed between the terminal device and the head-mounted display device; or it may be indicated to the terminal device by the head-mounted display device; or the terminal device may pre-store the correspondence between object identifiers and objects.
- Step 2502 Obtain the second preset scene type to which the first object belongs.
- the correspondence between objects and preset scene types may be stored in advance, and then the second preset scene type to which the first object belongs may be determined from the correspondence between objects and preset scene types.
- Step 2503 Obtain the correspondence between the preset scene type and the position of the virtual image.
- the terminal device can receive the corresponding relationship sent by the head-mounted display device, that is, retrieve the corresponding relationship from the head-mounted display device. If the corresponding relationship between the preset scene type and the position of the virtual image is stored in the terminal device, the corresponding relationship can be directly read from the memory of the terminal device, and the position corresponding to the second preset scene type to which the first object belongs can then be determined from the corresponding relationship.
- the corresponding relationship between the preset scene type and the position of the virtual image reference may be made to the foregoing related description, which will not be repeated here.
- Step 2504 according to the corresponding relationship between the preset scene type and the position of the virtual image, determine the target position where the head-mounted display device corresponding to the second preset scene type presents the virtual image.
- the target position of the virtual image is related to the preset scene type to which the first object belongs.
- for step 2504, reference may be made to the relevant description of the above-mentioned step 2302.
- Step 2505 Control the head-mounted display device to form a virtual image at the target position according to the target position.
- for step 2505, reference may be made to the relevant introduction of the foregoing step 2404, and details are not repeated here.
- the image displayed by the head-mounted display device may also be propagated by the terminal device to the head-mounted display device.
- the present application provides yet another method for adjusting the position of a virtual image, please refer to FIG. 26 .
- the method for adjusting the position of the virtual image can be applied to a head-mounted display device. The method includes the following steps:
- Step 2601 determine the working mode of the virtual image position adjustment component; if the determined working mode is the automatic mode, go to steps 2603 to 2605 ; if the working mode is determined to be the manual mode, go to steps 2606 to 2608 .
- Step 2602 displaying the first interface.
- Step 2603 When the user selects the first object in the first interface, the target position of the virtual image is determined according to the acquired vision parameters and the second preset scene type to which the first object belongs.
- Step 2604 Determine focusing parameters of the virtual image position adjustment component according to the target position.
- the focusing parameters are, for example, the to-be-moved distance of the aforementioned optical imaging component and/or display component, the voltage signal or current signal applied to the zoom lens, the to-be-moved distance or rotation angle between the first diffractive optical element and the second diffractive optical element, and the to-be-moved distance between the first refractive optical element and the second refractive optical element along a direction perpendicular to the main optical axis; details can be found in the foregoing related descriptions, which will not be repeated here.
- Step 2605 Adjust the virtual image to the target position according to the focusing parameters.
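- steps 2603 to 2605 can be read as a small pipeline: vision parameters and the scene type go in, focusing parameters come out. The sketch below is purely illustrative; in particular, the additive combination of a spherical vision correction (in diopters) with the scene position is an assumed rule, not one stated by this application:

```python
def auto_focus_target(scene_diopters: float, vision_correction_d: float) -> float:
    """Step 2603 (illustrative): combine the scene-type position with the
    user's vision parameter, both in diopters. The additive rule is assumed."""
    return scene_diopters + vision_correction_d

# A hypothetical 0.5D video scene viewed by a user needing +1.5D of
# correction would target 2.0D in this assumed model.
print(auto_focus_target(0.5, 1.5))  # 2.0
```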
- Step 2606 when the user selects the first object in the first interface, prompt information may be displayed on the first interface.
- the prompt information can be used to prompt the user to adjust the position of the virtual image.
- the prompt information may prompt a preset scene type to which the first object belongs.
- Step 2607 the user can select the preset scene type to which the first object belongs through the cam focusing mechanism according to the prompt information, and adjust the position of the virtual image.
- the user can rotate the first knob of the cam focusing mechanism to select the preset scene type.
- through the guide column or guide cylinder, the guide column can be driven to move the optical imaging component, thereby adjusting the position of the virtual image.
- Step 2608 the user can adjust the virtual image to the target position through the second knob of the cam focusing mechanism according to the vision parameters.
- Step 2609 the image is rendered and displayed.
- the head-mounted display device and the terminal device include corresponding hardware structures and/or software modules for executing each function.
- modules and method steps of each example described in conjunction with the embodiments disclosed in the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is implemented by hardware or computer software-driven hardware depends on the specific application scenarios and design constraints of the technical solution.
- FIG. 27 and FIG. 28 are schematic structural diagrams of possible virtual image position adjustment devices provided by the present application. These virtual image position adjustment devices can be used to realize the function of the display module in the above method embodiments, and thus can also achieve the beneficial effects of the above method embodiments.
- the device for adjusting the position of the virtual image may include the display modules shown in FIGS. 3 to 18 c above, and the device for adjusting the position of the virtual image may be applied to a head-mounted display device.
- the virtual image position adjustment device 2700 includes an acquisition module 2701 and a virtual image formation module 2702 .
- the acquisition module 2701 is used to acquire the image displayed by the head-mounted display device, and to acquire the target position of the virtual image corresponding to the image; the target position of the virtual image is related to the preset scene type to which the image belongs. The virtual image forming module 2702 is configured to form a virtual image of the image at the target position.
- the virtual image position adjustment device 2800 includes a display module 2801 , an acquisition module 2802 and a virtual image formation module 2803 .
- the display module 2801 is used to display the first interface. The acquisition module 2802 is used to acquire, when the user selects the first object in the first interface, the target position of the virtual image corresponding to the first object; the target position of the virtual image is related to the preset scene type to which the first object belongs. The virtual image forming module 2803 is used to form, for the image triggered to be displayed after the first object is selected, a virtual image at the target position.
- FIG. 29 and FIG. 30 are schematic structural diagrams of possible terminal devices provided by this application. These terminal devices can be used to implement the functions of the terminal devices in the foregoing method embodiments, and thus can also achieve the beneficial effects of the foregoing method embodiments.
- the terminal device 2900 includes a determination module 2901 , an acquisition module 2902 and a control module 2903 .
- the terminal device 2900 is used to implement the functions of the terminal device in the method embodiment shown in FIG.
- the determining module 2901 is used to determine the first preset scene type to which the image belongs, and the image is used for display on the head-mounted display device;
- the obtaining module 2902 is used to obtain the corresponding relationship between the preset scene type and the position of the virtual image;
- the determining module 2901 is further configured to determine, according to the corresponding relationship between the preset scene type and the position of the virtual image, the target position at which the head-mounted display device presents the virtual image corresponding to the first preset scene type; the target position of the virtual image is related to the preset scene type to which the image belongs. The control module 2903 is configured to control, according to the target position, the head-mounted display device to form a virtual image of the image at the target position.
- the terminal device 3000 includes a determination module 3001 , an acquisition module 3002 and a control module 3003 .
- the terminal device 3000 is used to implement the functions of the terminal device in the method embodiment shown in FIG.
- the obtaining module 3002 is used to obtain the first object selected by the user in the first interface displayed by the head-mounted display device, obtain the second preset scene type to which the first object belongs, and obtain the correspondence between the preset scene type and the position of the virtual image. The determining module 3001 is configured to determine, according to the correspondence between the preset scene type and the position of the virtual image, the target position at which the head-mounted display device presents the virtual image corresponding to the second preset scene type; the target position of the virtual image is related to the preset scene type to which the first object belongs. The control module 3003 is configured to control, according to the target position, the head-mounted display device to form a virtual image of the image at the target position.
- the terminal device may be a mobile phone, a tablet computer, or the like.
- the method steps in the embodiments of the present application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
- Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
- the storage medium can also be an integral part of the processor.
- the processor and storage medium may reside in an ASIC.
- the ASIC may reside in a head mounted display device or an end device.
- the processor and the storage medium may also exist in the head-mounted display device or the terminal device as discrete components.
- the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
- when software is used for implementation, it can be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer programs or instructions.
- when the computer programs or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are executed in whole or in part.
- the computer may be a general purpose computer, a special purpose computer, a computer network, network equipment, user equipment, or other programmable apparatus.
- the computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program or instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, data center, or the like that integrates one or more available media.
- the usable medium can be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a digital video disc (DVD); or a semiconductor medium, such as a solid state drive (SSD).
- "at least one" means one or more, and "plural" means two or more.
- "and/or" describes the association relationship of the associated objects and indicates that three kinds of relationships can exist. For example, A and/or B can indicate: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
- "at least one of the following item(s)" or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
- "at least one (item) of a, b, or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c can be single or multiple.
- the character “/” generally indicates that the contextual objects are in an "or” relationship.
- in a formula, the character "/" indicates that the related objects before and after are in a "division" relationship.
- the symbol "(a, b)" represents an open interval, and the range is greater than a and less than b; "[a, b]” represents a closed interval, and the range is greater than or equal to a and less than or equal to b; "(a , b]” represents a half-open and half-closed interval, and the range is greater than a and less than or equal to b; "(a, b]” represents a half-open and half-closed interval, and the range is greater than a and less than or equal to b.
- the word "exemplary" is used to mean serving as an example, instance, or illustration.
Abstract
一种显示模组、虚像的位置调节方法及装置。用于解决现有技术中因虚像的位置固定引起的辐辏调节冲突的问题。显示模组可以应用于头戴式显示设备,如增强现实AR眼镜、AR头盔、虚拟现实VR眼镜或VR头盔等。显示模组包括显示组件(301)、光学成像组件(302)和虚像位置调节组件(303);显示组件(301)用于显示图像;光学成像组件(302)用于将图像形成虚像;虚像位置调节组件(303)用于调节光学成像组件(302)和显示组件(301)中的至少一个,以将虚像调节至目标位置,虚像的目标位置与图像所属的预设场景类型有关。通过虚像位置调节组件(303)调节光学成像组件(302)和/或显示组件(301),以实现将图像所属的预设场景类型不同时,可以将虚像形成在不同的位置。可通过调节虚像的位置来减小辐辏调节冲突。
Description
相关申请的交叉引用
本申请要求在2020年12月24日提交中国专利局、申请号为202011554651.7、申请名称为“一种显示模组、虚像的位置调节方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
本申请涉及显示技术领域,尤其涉及一种显示模组、虚像的位置调节方法及装置。
随着科学技术的不断发展,在影视、游戏、网络教学、网络会议、数字展馆、社交和购物等领域对虚拟现实(virtual reality,VR)技术的需求日益旺盛。VR技术是指将虚拟与现实相互结合,利用显示光学产生一个三维空间的虚拟世界,提供用户关于视觉等感官的模拟,让用户感觉仿佛身历其境,可以即时、没有限制地观察三维空间内的事物。
但是越来越多的研究者发现人们在长时间观看相关内容的时候会有眼睛疲劳、视力模糊、头痛或头晕症状、甚至有个例发现长时间佩戴会造成斜视(esotropia)或者老化眼(hyperopic changes),尤其是观看三维(three dimensional,3D)内容。后来研究人员对这个舒适度问题进行了深入的分析,发现造成这种现象的主要因素之一是辐辏调节冲突(vergence and accommodation conflict,VAC)。
辐辏调节冲突是由于人眼在观察3D内容时,双眼的正确的晶状体调节距离始终固定在屏幕上,而双眼辐辏则会聚在由视差定义的目标距离上,可能位于屏幕前方,也可能位于屏幕后方,调节距离与辐辏距离的不匹配造成辐辏调节冲突。VAC是现在观看大部分3D内容会出现的现象,不管是用在近眼显示设备中观看还是用3D眼镜观看。
发明内容
本申请提供一种显示模组、虚像位置的调节方法及装置,基于不同预设场景类型自动调节虚像所处的位置,有助于减小辐辏调节冲突。
第一方面,本申请提供一种显示模组,该显示模组可包括显示组件、光学成像组件和虚像位置调节组件。其中,显示组件用于显示图像。光学成像组件用于将图像形成虚像。虚像位置调节组件用于调节光学成像组件和/或显示组件,从而可将虚像调节至目标位置,虚像的目标位置与图像所属的预设场景类型有关。例如光学成像组件可以改变携带图像的光线的传播路径,以将图像在目标位置形成虚像。
基于该方案,通过虚像位置调节组件调节光学成像组件和/或显示组件,从而可精确的将不同预设场景类型下的虚像调节至不同的位置,可以使得用户清晰的观看到显示模组显示的图像。基于不同预设场景类型自动调节虚像所处的位置,有助于减小辐辏调节冲突。
其中,图像所属的预设场景类型可以是图像的内容所属的预设场景类型;或者,也可以是图像对应的对象所属的预设场景类型。
在一种可能的实现方式中,显示模组还可包括控制组件,控制组件可用于获取虚像的目标位置,并控制虚像位置调节组件调节光学成像组件和/或显示组件,将虚像调节至目标位置。
通过控制组件控制虚像位置调节组件调节光学成像组件和/或显示组件,从而可实现将虚像调节至目标位置。
进一步,可选地,控制组件可用于获取显示组件显示的图像所属的第一预设场景类型以及预设场景类型与虚像的位置的对应关系,根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的目标位置。
在另一种可能的实现方式中,控制组件用于获取视力参数、显示组件显示的图像所属的第一预设场景类型、以及预设场景类型与虚像的位置的对应关系;并根据视力参数、以及预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的目标位置。
在一种可能的实现方式中,当图像属于不同的预设场景类型时,显示模组呈现虚像的目标位置不同。如此,可以将属于不同预设场景类型的图像在不同的目标位置形成虚像,从而有助于减小辐辏调节深度。
示例性地,预设场景类型可以为办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
当图像所属的预设场景类型为办公场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~10]屈光度D;当图像所属的预设场景类型为阅读场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~10]屈光度D;当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7.1]屈光度D;当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~7.5]屈光度D;当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
进一步,可选地,当所述图像所属的预设场景类型为所述会议场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~3.0]屈光度D;或者,当所述图像所属的预设场景类型为所述交互式游戏场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[3.0~5.0]屈光度D;或者,当所述图像所属的预设场景类型为所述视频场景类型时,所述头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离为(5.0~7]屈光度D。
在一种可能的实现方式中,视频场景类型对应的虚像的目标位置与光学成像组件之间的距离大于会议场景类型对应的虚像的目标位置与光学成像组件之间的距离;或者,会议场景类型对应的虚像的目标位置与光学成像组件之间的距离大于阅读场景类型对应的虚像的目标位置与光学成像组件之间的距离。
在另一种可能的实现方式中,视频场景类型对应的虚像的目标位置与光学成像组件之间的距离大于会议场景类型对应的虚像的目标位置与光学成像组件之间的距离;或者,会议场景类型对应的虚像的目标位置与光学成像组件之间的距离大于办公场景类型对应的虚像的目标位置与光学成像组件之间的距离;或者,办公场景类型对应的虚像的目标位置与光学成像组件之间的距离大于阅读场景类型对应的虚像的目标位置与光学成像组件之间的距离。
在一种可能的实现方式中,所述虚像位置调节组件包括驱动组件;所述驱动组件用于驱动所述光学成像组件和/或所述显示组件移动,将所述虚像调节至所述目标位置。
在一种可能的实现方式中,虚像位置调节组件包括驱动组件和位置传感组件;其中,位置传感组件可用于确定光学成像组件和/或显示组件的位置,光学成像组件和/或显示组件的位置用于确定显示组件与光学成像组件之间的第一距离,第一距离用于确定光学成像组件和/或显示组件的待移动距离;或者,位置传感组件可用于确定光学成像组件和/或显示组件之间的第一距离。驱动组件可用于根据待移动的距离,驱动光学成像组件和/或显示组件移动,将虚像调节至目标位置。
在一种可能的实现方式中,虚像位置调节组件的调节精度是根据驱动组件的驱动误差和位置传感组件的位置测量误差确定的。
示例性地,虚像位置调节组件的调节精度不大于0.2屈光度D。进一步,可选地,所述光学成像组件包括半透半反镜,所述
所述
其中,所述r1为所述半透半反镜的透过面的最近似球面半径,所述r2为所述半透半反镜的半透半反面的最近似球面半径,所述n为所述半透半反镜的材料的折射率。
在一种可能的实现方式中,虚像位置调节组件的调节范围是根据驱动组件的驱动量程和位置传感组件的测量量程确定的。
示例性地,虚像位置调节组件的调节范围不小于5屈光度D。进一步,可选地,所述光学成像组件包括半透半反镜,所述
所述
其中,所述r1为所述半透半反镜的透过面的最近似球面半径,所述r2为所述半透半反镜的半透半反面的最近似球面半径,所述n为所述半透半反镜的材料的折射率。
在一种可能的实现方式中,所述虚像位置调节组件包括驱动组件,所述光学成像组件包括变焦透镜;所述驱动组件用于改变施加于变焦透镜的电压信号或电流信号,以改变所述变焦透镜的焦距,将虚像调节至所述目标位置。
进一步,可选地,变焦透镜可以为液晶透镜、液体透镜或者几何相位透镜。
在另一种可能的实现方式中,虚像位置调节组件包括驱动组件和位置传感组件,光学成像组件包括变焦透镜。其中,位置传感组件可用于确定变焦透镜的第一焦距,第一焦距用于确定变焦透镜的待调节焦距;驱动组件可用于根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在一种可能的实现方式中,所述虚像位置调节组件包括驱动组件和位置传感组件;所述光学成像组件包括第一衍射光学元件和第二衍射光学元件;所述位置传感组件用于确定所述第一衍射光学元件和所述第二衍射光学元件的相对角度,所述第一衍射光学元件和所述第二衍射光学元件的相对角度用于确定所述第一衍射光学元件和/或所述第二衍射光学元件待转动角度;所述驱动组件用于根据所述待转动角度,驱动所述第一衍射光学元件和 /或所述第二衍射光学元件转动,将所述虚像调节至所述目标位置。
在一种可能的实现方式中,所述虚像位置调节组件包括驱动组件和位置传感组件;所述光学成像组件包括第一折射光学元件和第二折射光学元件;所述位置传感组件用于在垂直于第一折射光学元件和第二折射光学元件的主光轴的方向上,确定所述第一折射光学元件和所述第二折射光学元件的之间的第一距离,所述第一距离用于确定所述第一折射光学元件和/或所述第二折射光学元件待移动距离;所述驱动组件于:根据所述待移动距离,驱动所述第一折射光学元件和/或所述第二折射光学元件在垂直于所述主光轴的方向移动,将所述虚像调节至所述目标位置。
在一种可能的实现方式中,所述显示模组还包括眼动追踪组件;所述眼动追踪组件用于确定双目注视所述图像的会聚深度;所述虚像位置调节组件用于根据所述会聚深度,驱动所述光学成像组件和/或所述显示组件移动,将所述虚像调节至所述目标位置。
通过上述虚像位置调节组件对虚像的位置的调节,可以使得用户清晰的观看到显示组件显示的图像,而且可有助于减小辐辏调节冲突。
在一种可能的实现方式中,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。如此,通过将虚像调节至目标位置,从而有助于减小辐辏调节冲突。
进一步,可选地,所述阈值范围为[0屈光度D,1屈光度D]。
在一种可能的实现方式中,显示模组还可包括柱透镜和旋转驱动组件,旋转驱动组件用于改变柱透镜的光轴。
进一步,该柱透镜位于显示组件与光学成像组件之间,或者,位于光学成像组件远离显示组件的一侧。
第二方面,本申请提供一种虚像的位置调节方法,方法可应用于头戴式显示设备。其中,该方法可包括获取头戴式显示设备显示的图像、以及图像对应的虚像的目标位置,将图像在目标位置形成虚像,虚像的目标位置与图像所属的预设场景类型有关。
其中,图像所属的预设场景类型可以是图像的内容所属的预设场景类型;或者,也可以是图像对应的对象所属的预设场景类型。
在一种可能的实现方式中,当图像属于不同的预设场景类型时,头戴式显示设备呈现虚像的目标位置不同。
示例性地,预设场景类型包括办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
当图像所属的预设场景类型为办公场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~10]屈光度D;当图像所属的预设场景类型为阅读场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~10]屈光度D;当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7.1]屈光度D;当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~7.5]屈光度D;当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
可基于头戴式显示设备是否包括控制组件,示例性地示出两种获取图像对应的目标位置的方式。
方式一,基于头戴式显示设备包括控制组件。
在一种可能的实现方式中,可获取头戴式显示设备显示的图像所属的第一预设场景类型以及预设场景类型与虚像的位置的对应关系,并根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的目标位置。
进一步,可选地,可接收终端设备发送的图像所属的第一预设场景类型。或者,也可以确定图像所属的第一预设场景类型。
方式二,基于头戴式显示设备不包括控制组件。
在一种可能的实现方式中,可接收终端设备发送的图像对应的虚像的目标位置。
如下,示例性地示出了四种将图像在目标位置形成虚像的可能的实现方式。
实现方式1,头戴式显示设备确定显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件。进一步,可获取显示组件与光学成像组件之间的第一距离,根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离,根据待移动距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
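上述"根据第一距离和目标位置确定待移动距离"的计算可用如下极简示例表示。注意:此处采用 1/p = 1/f + D_image 的薄透镜符号约定(p 为物距,f 为等效焦距,D_image 为虚像的屈光度,均取正值),该约定及焦距取值均为假设,仅作示意:

```python
def object_distance_mm(f_mm: float, image_diopter: float) -> float:
    """由等效焦距 f_mm(毫米)与虚像屈光度 image_diopter(D,1D = 1m^-1)
    反推显示组件到光学成像组件的物距(假设的薄透镜符号约定)。"""
    return 1.0 / (1.0 / f_mm + image_diopter / 1000.0)

def shift_to_move_mm(f_mm: float, current_diopter: float, target_diopter: float) -> float:
    """虚像从 current_diopter 调至 target_diopter 时,显示组件的待移动距离(毫米)。
    正值表示物距需要增大,负值表示物距需要减小。"""
    return object_distance_mm(f_mm, target_diopter) - object_distance_mm(f_mm, current_diopter)
```

例如,等效焦距取 50mm 时,虚像位于无穷远(0D)对应物距恰为 50mm;将虚像调至 1D(即 1m 处)时,物距约为 47.62mm,即显示组件需向光学成像组件靠近约 2.38mm。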
实现方式2,头戴式显示设备接收终端设备发送的显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件。进一步,可选地,可接收终端设备发送的显示组件和/或光学成像组件的待移动的距离,根据待移动的距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
实现方式3,头戴式显示设备确定变焦透镜的待调节焦距。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件,光学成像组件包括变焦透镜。进一步,可选地,可确定变焦透镜的第一焦距,根据第一焦距和目标位置,确定变焦透镜的待调节焦距,根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
实现方式4,头戴式显示设备接收终端设备发送的变焦透镜的待调节焦距。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件,光学成像组件包括变焦透镜。进一步,可选地,可接收终端设备发送的变焦透镜的待调节焦距;根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在一种可能的实现方式中,可获取视力参数、所述显示组件显示的所述图像所属的第一预设场景类型,以及预设场景类型与虚像的位置的对应关系;并根据所述视力参数、以及所述预设场景类型与虚像的位置的对应关系,确定所述第一预设场景类型对应的所述目标位置。
在一种可能的实现方式中,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。
进一步,可选地,所述阈值范围为[0屈光度D,1屈光度D]。
在一种可能的实现方式中,确定所述虚像位置调节组件的工作模式,所述工作模式包括自动模式和手动模式,所述自动模式为驱动组件根据待移动距离或电压信号或电流信号,将所述虚像调节至所述目标位置;所述手动模式为用户通过旋转凸轮调焦机构将所述虚像调节至所述目标位置。
在一种可能的实现方式中,获取M个预设场景及所述M个预设场景分别对应的虚像 的位置,所述M为大于1的整数;统计所述M个预设场景与所述M个预设场景分别对应的虚像的位置的分布关系;根据所述分布关系,确定所述预设场景与虚像的位置的对应关系。
在一种可能的实现方式中,获取M个预设场景及所述M个预设场景分别对应的虚像的位置,所述M为大于1的整数;将所述M个预设场景与所述M个预设场景分别对应的虚像的位置的输入人工智能算法,得到所述预设场景与虚像的位置的对应关系。
进一步,可选地,接收用户输入的所述M个预设场景对应的虚像的位置;或者,获取M个预设场景中的图像的双目视差,分别根据所述M个预设场景中的图像的双目视场,确定所述M个预设场景对应的虚像的位置。
第三方面,本申请提供一种虚像的位置调节方法,该方法可应用于终端设备,该方法可包括确定图像所属的第一预设场景类型,图像用于头戴式显示设备显示;获取预设场景类型与虚像的位置的对应关系,并根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的头戴式显示设备呈现虚像的目标位置,根据目标位置,控制头戴式显示设备将图像在目标位置形成虚像,虚像的目标位置与图像所属的预设场景类型有关。
如下,示例性地示出了两种控制头戴式显示设备将图像在目标位置形成虚像的方法。
方法1.1,向头戴式显示设备发送第一控制指令。
在一种可能的实现方式中,获取头戴式显示设备中的显示组件与光学成像组件之间的第一距离;根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离;根据待移动距离生成第一控制指令,并向头戴式显示设备发送第一控制指令,第一控制指令用于控制显示组件和/或光学成像组件移动,将虚像调节至目标位置。
进一步,可选地,可接收头戴式显示设备中的虚像位置调节组件发送的光学成像组件和/或显示组件的位置;并根据光学成像组件和/或显示组件的位置,确定第一距离。
方法1.2,向头戴式显示设备发送第二控制指令。
在一种可能的实现方式中,获取头戴式显示设备中的光学成像组件的第一焦距,根据第一焦距和目标位置,确定光学成像组件的待调节焦距;根据待调节焦距生成第二控制指令,并向头戴式显示设备发送第二控制指令,第二控制指令用于控制施加于光学成像组件的电压信号或电流信号,调节光学成像组件的焦距,将虚像调节至目标位置。
第四方面,本申请提供一种虚像的位置调节方法,该方法可应用于头戴式显示设备。该方法可包括显示第一界面,当用户在第一界面中选择第一对象时,获取第一对象对应的虚像的目标位置,虚像的目标位置与第一对象所属的预设场景类型有关;针对选择第一对象后所触发显示的图像,将图像在目标位置形成虚像。
其中,对象可以是应用。
在一种可能的实现方式中,当第一对象属于不同的预设场景类型时,头戴式显示设备呈现虚像的目标位置不同。
在一种可能的实现方式中,预设场景类型包括办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
当图像所属的预设场景类型为办公场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~10]屈光度D;当图像所属的预设场景类型为阅读场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~10]屈光度D;当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目 标位置与光学成像组件之间的距离范围为[0.1~7.1]屈光度D;当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~7.5]屈光度D;当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
如下,示例性地示出两种获取第一对象对应的目标位置的方式。
方式a,头戴式显示设备包括控制组件。
在一种可能的实现方式中,获取第一对象所属的第二预设场景类型及预设场景类型与虚像的位置的对应关系;根据预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的目标位置。
方式b,头戴式显示设备不包括控制组件。
在一种可能的实现方式中,接收终端设备发送的第一对象对应的虚像的目标位置。
如下,示例性地示出了四种将图像在目标位置形成虚像的可能的实现方式。
实现方式A,头戴式显示设备确定显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件。进一步,可获取显示组件与光学成像组件之间的第一距离,根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离,根据待移动距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
实现方式B,头戴式显示设备接收终端设备发送的显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件。进一步,可选地,可接收终端设备发送的显示组件和/或光学成像组件的待移动的距离,根据待移动的距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
实现方式C,头戴式显示设备确定变焦透镜的待调节焦距。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件,光学成像组件包括变焦透镜。进一步,可选地,可确定变焦透镜的第一焦距,根据第一焦距和目标位置,确定变焦透镜的待调节焦距,根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
实现方式D,头戴式显示设备接收终端设备发送的变焦透镜的待调节焦距。
在一种可能的实现方式中,头戴式显示设备包括显示组件和光学成像组件,光学成像组件包括变焦透镜。进一步,可选地,可接收终端设备发送的变焦透镜的待调节焦距;根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在另一种可能的实现方式中,可获取视力参数、所述第一对象所属的第二预设场景类型、以及预设场景类型与虚像的位置的对应关系;并根据所述视力参数、以及所述预设场景类型与虚像的位置的对应关系,确定所述第二预设场景类型对应的所述目标位置。
在一种可能的实现方式中,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。进一步,可选地,所述阈值范围为[0屈光度D,1屈光度D]。
在一种可能的实现方式中,确定所述虚像位置调节组件的工作模式,所述工作模式包括自动模式和手动模式,所述自动模式为驱动组件根据待移动距离或电压信号或电流信号,将所述虚像调节至所述目标位置;所述手动模式为用户通过旋转凸轮调焦机构将所述虚像调节至所述目标位置。
在一种可能的实现方式中,获取M个预设场景及所述M个预设场景分别对应的虚像的位置,所述M为大于1的整数;统计所述M个预设场景与所述M个预设场景分别对应的虚像的位置的分布关系;根据所述分布关系,确定所述预设场景与虚像的位置的对应关系。
在一种可能的实现方式中,获取M个预设场景及所述M个预设场景分别对应的虚像的位置,所述M为大于1的整数;将所述M个预设场景与所述M个预设场景分别对应的虚像的位置的输入人工智能算法,得到所述预设场景与虚像的位置的对应关系。
第五方面,本申请提供一种虚像的位置调节方法,该方法可以应用于终端设备。该方法可包括:获取用户在头戴式显示设备显示的第一界面中选择的第一对象、第一对象所属的第二预设场景类型、以及预设场景类型与虚像的位置的对应关系;根据预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的头戴式显示设备呈现虚像的目标位置;根据目标位置,控制头戴式显示设备将图像在目标位置形成虚像,虚像的目标位置与第一对象所属的预设场景类型有关。
如下,示例性地示出了两种控制头戴式显示设备将图像在目标位置形成虚像的方法。
方法2.1,向头戴式显示设备发送第一控制指令。
在一种可能的实现方式中,获取头戴式显示设备中的显示组件与光学成像组件之间的第一距离,根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离;根据待移动距离生成第一控制指令,并向头戴式显示设备发送第一控制指令,第一控制指令用于控制显示组件和/或光学成像组件移动,将虚像调节至目标位置。
进一步,可选地,可接收头戴式显示设备中的虚像位置调节组件发送的光学成像组件和/或显示组件的位置;根据光学成像组件和/或显示组件的位置,确定第一距离。
方法2.2,向头戴式显示设备发送第二控制指令。
在一种可能的实现方式中,获取头戴式显示设备中的光学成像组件的第一焦距,根据第一焦距和目标位置,确定光学成像组件的待调节焦距;根据待调节焦距生成第二控制指令,并向头戴式显示设备发送第二控制指令,第二控制指令用于控制施加于光学成像组件的电压信号或电流信号,调节光学成像组件的焦距,将虚像调节至目标位置。
第六方面,本申请提供一种虚像的位置调节方法,应用于显示模组,显示模组可包括显示组件、光学成像组件和虚像位置调节组件,显示组件用于显示图像,光学成像组件用于将图像形成虚像,虚像位置调节组件用于调节光学成像组件和/或显示组件;该方法可包括:获取显示组件显示的图像、以及图像对应的虚像的目标位置,虚像的目标位置与图像所属的预设场景类型有关;控制虚像位置调节组件调节光学成像组件和/或显示组件,将图像在目标位置形成虚像。
其中,图像所属的预设场景类型可以是图像的内容所属的预设场景类型;或者,也可以是图像对应的对象所属的预设场景类型。
在一种可能的实现方式中,当图像属于不同的预设场景类型时,显示模组呈现虚像的目标位置不同。
示例性地,预设场景类型包括办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
当图像所属的预设场景类型为办公场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~10]屈光度D;当图像所属的预设场景类型为阅读场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~10]屈光度D;当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7.1]屈光度D;当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~7.5]屈光度D;当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
可基于显示模组是否包括控制组件,示例性地示出两种获取图像对应的目标位置的方式。
方式一,基于显示模组包括控制组件。
在一种可能的实现方式中,控制组件可获取显示模组显示的图像所属的第一预设场景类型以及预设场景类型与虚像的位置的对应关系,并根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的目标位置。
进一步,可选地,控制组件可接收终端设备发送的图像所属的第一预设场景类型。或者,也可以是控制组件确定图像所属的第一预设场景类型。
方式二,基于显示模组不包括控制组件。
在一种可能的实现方式中,可接收终端设备发送的图像对应的虚像的目标位置。
如下,示例性地示出了四种将图像在目标位置形成虚像的可能的实现方式。
实现方式1,确定显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,可获取显示组件与光学成像组件之间的第一距离,根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离,根据待移动距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
实现方式2,接收终端设备发送的显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,可接收终端设备发送的显示组件和/或光学成像组件的待移动的距离,并根据待移动的距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
实现方式3,光学成像组件包括变焦透镜,确定变焦透镜的待调节焦距。
在一种可能的实现方式中,可确定变焦透镜的第一焦距,根据第一焦距和目标位置,确定变焦透镜的待调节焦距,根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
实现方式4,光学成像组件包括变焦透镜,接收终端设备发送的变焦透镜的待调节焦距。
在一种可能的实现方式中,可接收终端设备发送的变焦透镜的待调节焦距,根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在一种可能的实现方式中,可获取视力参数、所述显示组件显示的所述图像所属的第一预设场景类型,以及预设场景类型与虚像的位置的对应关系;并根据所述视力参数、以及所述预设场景类型与虚像的位置的对应关系,确定所述第一预设场景类型对应的所述目 标位置。
在一种可能的实现方式中,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。进一步,可选地,所述阈值范围为[0屈光度D,1屈光度D]。
第七方面,本申请提供一种虚像的位置调节方法,应用于显示模组,显示模组包括显示组件、光学成像组件和虚像位置调节组件,显示组件用于显示图像,光学成像组件用于将图像形成虚像,虚像位置调节组件用于调节光学成像组件和/或显示组件;方法包括:显示第一界面,当用户在第一界面中选择第一对象时,获取第一对象对应的虚像的目标位置,虚像的目标位置与第一对象所属的预设场景类型有关;针对选择第一对象后所触发显示组件显示的图像,控制虚像位置调节组件调节光学成像组件和/或显示组件,将图像在目标位置形成虚像。
其中,对象可以是应用。
在一种可能的实现方式中,当第一对象属于不同的预设场景类型时,显示模组呈现虚像的目标位置不同。
在一种可能的实现方式中,预设场景类型包括办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
当图像所属的预设场景类型为办公场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~10]屈光度D;当图像所属的预设场景类型为阅读场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~10]屈光度D;当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7.1]屈光度D;当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~7.5]屈光度D;当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
如下,示例性地示出两种获取第一对象对应的目标位置的方式。
方式a,显示模组包括控制组件。
在一种可能的实现方式中,获取第一对象所属的第二预设场景类型及预设场景类型与虚像的位置的对应关系;根据预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的目标位置。
方式b,显示模组不包括控制组件。
在一种可能的实现方式中,接收终端设备发送的第一对象对应的虚像的目标位置。
如下,示例性地示出了四种将图像在目标位置形成虚像的可能的实现方式。
实现方式A,确定显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,可获取显示组件与光学成像组件之间的第一距离,根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离,根据待移动距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
实现方式B,可接收终端设备发送的显示组件和/或光学成像组件的待移动距离。
在一种可能的实现方式中,可接收终端设备发送的显示组件和/或光学成像组件的待移动的距离,根据待移动的距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
实现方式C,光学成像组件包括变焦透镜,确定变焦透镜的待调节焦距。
在一种可能的实现方式中,可确定变焦透镜的第一焦距,根据第一焦距和目标位置,确定变焦透镜的待调节焦距,根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
实现方式D,光学成像组件包括变焦透镜,可接收终端设备发送的变焦透镜的待调节焦距。
在一种可能的实现方式中,可接收终端设备发送的变焦透镜的待调节焦距;根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在另一种可能的实现方式中,可获取视力参数、所述第一对象所属的第二预设场景类型、以及预设场景类型与虚像的位置的对应关系;并根据所述视力参数、以及所述预设场景类型与虚像的位置的对应关系,确定所述第二预设场景类型对应的所述目标位置。
在一种可能的实现方式中,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。进一步,可选地,所述阈值范围为[0屈光度D,1屈光度D]。
第八方面,本申请提供一种虚像的位置调节装置,该虚像的位置调节装置用于实现上述第二方面或第二方面中的任意一种方法,包括相应的功能模块,分别用于实现以上方法中的步骤。功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块。
在一种可能的实现方式中,该虚像的位置调节装置可以应用于头戴式显示设备,可以包括获取模块和虚像形成模块。其中,获取模块用于获取头戴式显示设备显示的图像、以及图像对应的虚像的目标位置,虚像形成模块用于将图像在目标位置形成虚像,虚像的目标位置与图像所属的预设场景类型有关。
在一种可能的实现方式中,当图像属于不同的预设场景类型时,虚像的位置调节装置呈现虚像的目标位置不同。
在一种可能的实现方式中,预设场景类型包括办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
当图像所属的预设场景类型为办公场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~10]屈光度D;当图像所属的预设场景类型为阅读场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~10]屈光度D;当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7.1]屈光度D;当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~7.5]屈光度D;当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
在一种可能的实现方式中,图像所属的预设场景类型包括以下任一项:图像的内容所属的预设场景类型;或者,图像对应的对象所属的预设场景类型。
在一种可能的实现方式中,获取模块用于获取头戴式显示设备显示的图像所属的第一预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的目标位置。
在一种可能的实现方式中,获取模块用于接收终端设备发送的图像所属的第一预设场 景类型;或者,确定图像所属的第一预设场景类型。
在一种可能的实现方式中,获取模块用于:接收终端设备发送的图像对应的虚像的目标位置。
在一种可能的实现方式中,获取模块用于获取头戴式显示设备中的显示组件与光学成像组件之间的第一距离;根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离;虚像形成模块用于根据待移动距离,驱动虚像的位置调节装置包括的显示组件和/或光学成像组件移动,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于接收终端设备发送的头戴式显示设备中的显示组件和/或光学成像组件的待移动的距离;虚像形成模块用于根据待移动的距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于确定头戴式显示设备中的变焦透镜的第一焦距;根据第一焦距和目标位置,确定变焦透镜的待调节焦距;虚像形成模块用于根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于接收终端设备发送的头戴式显示设备中的变焦透镜的待调节焦距;虚像形成模块用于根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于获取视力参数、头戴式显示设备显示的图像所属的第一预设场景类型、预设场景类型与虚像的位置的对应关系;根据视力参数、以及预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的目标位置。
在一种可能的实现方式中,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。进一步,可选地,所述阈值范围为[0屈光度D,1屈光度D]。
第九方面,本申请提供一种虚像的位置调节装置,该虚像的位置调节装置用于实现上述第三方面或第三方面中的任意一种方法,包括相应的功能模块,分别用于实现以上方法中的步骤。功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块。
在一种可能的实现方式中,该虚像的位置调节装置可以应用于终端设备,该虚像的位置调节装置可以包括确定模块、获取模块和控制模块。其中,确定模块用于确定图像所属的第一预设场景类型,图像用于头戴式显示设备显示;获取模块用于获取预设场景类型与虚像的位置的对应关系;确定模块还用于根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的头戴式显示设备呈现虚像的目标位置,虚像的目标位置与图像所属的预设场景类型有关;控制模块用于根据目标位置,控制头戴式显示设备将图像在目标位置形成虚像。
在一种可能的实现方式中,获取模块用于获取头戴式显示设备中的显示组件与光学成像组件之间的第一距离;确定模块用于根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离;控制模块用于根据待移动距离生成第一控制指令,并向头戴式显示设备发送第一控制指令,第一控制指令用于控制显示组件和/或光学成像组件移动,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于接收头戴式显示设备发送的光学成像组件和/或显示组件的位置;确定模块用于根据光学成像组件和/或显示组件的位置,确定第一距 离。
在一种可能的实现方式中,获取模块用于获取头戴式显示设备中的变焦透镜的第一焦距;确定模块用于根据第一焦距和目标位置,确定变焦透镜的待调节焦距;控制模块用于根据待调节焦距生成第二控制指令,并向头戴式显示设备发送第二控制指令,第二控制指令用于控制施加于变焦透镜的电压信号或电流信号,调节变焦透镜的焦距,将虚像调节至目标位置。
第十方面,本申请提供一种虚像的位置调节装置,该虚像的位置调节装置用于实现上述第四方面或第四方面中的任意一种方法,包括相应的功能模块,分别用于实现以上方法中的步骤。功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块。
在一种可能的实现方式中,该虚像的位置调节装置可以应用于头戴式显示设备,该虚像的位置调节装置可以包括显示模块、获取模块和虚像形成模块;显示模块用于显示第一界面,当用户在第一界面中选择第一对象时,获取模块用于获取第一对象对应的虚像的目标位置,虚像的目标位置与第一对象所属的预设场景类型有关;针对选择第一对象后所触发显示的图像,虚像形成模块用于将图像在目标位置形成虚像。
在一种可能的实现方式中,当第一对象属于不同的预设场景类型时,头戴式显示设备呈现虚像的目标位置不同。
在一种可能的实现方式中,预设场景类型包括办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
当图像所属的预设场景类型为办公场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~10]屈光度D;当图像所属的预设场景类型为阅读场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~10]屈光度D;当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7.1]屈光度D;当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.5~7.5]屈光度D;当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
在一种可能的实现方式中,第一对象为应用。
在一种可能的实现方式中,获取模块用于获取第一对象所属的第二预设场景类型;以及预设场景类型与虚像的位置的对应关系;根据预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的目标位置。
在一种可能的实现方式中,获取模块用于接收终端设备发送的第一对象对应的虚像的目标位置。
在一种可能的实现方式中,获取模块用于获取头戴式显示设备中的显示组件与光学成像组件之间的第一距离;根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离;虚像形成模块用于根据待移动距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于接收终端设备发送的头戴式显示设备中的显示组件和/或光学成像组件的待移动的距离;虚像形成模块用于根据待移动的距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于确定头戴式显示设备中的变焦透镜的第一焦距;根据第一焦距和目标位置,确定变焦透镜的待调节焦距;虚像形成模块用于根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于接收终端设备发送的头戴式显示设备中的变焦透镜的待调节焦距;虚像形成模块用于根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于获取视力参数、第一对象所属的第二预设场景类型、以及获取预设场景类型与虚像的位置的对应关系;根据视力参数、以及预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的目标位置。
第十一方面,本申请提供一种虚像的位置调节装置,该虚像的位置调节装置用于实现上述第五方面或第五方面中的任意一种方法,包括相应的功能模块,分别用于实现以上方法中的步骤。功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。硬件或软件包括一个或多个与上述功能相对应的模块。
在一种可能的实现方式中,该虚像的位置调节装置可以是终端设备,可以包括获取模块、确定模块和控制模块;获取模块用于获取用户在头戴式显示设备显示的第一界面中选择的第一对象、第一对象所属的第二预设场景类型、以及预设场景类型与虚像的位置的对应关系;确定模块用于根据预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的头戴式显示设备呈现虚像的目标位置,虚像的目标位置与第一对象所属的预设场景类型有关;控制模块用于根据目标位置,控制头戴式显示设备将图像在目标位置形成虚像。
在一种可能的实现方式中,获取模块用于获取头戴式显示设备中的显示组件与光学成像组件之间的第一距离;确定模块用于根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离;控制模块用于:根据待移动距离生成第一控制指令,并向头戴式显示设备发送第一控制指令,第一控制指令用于控制显示组件和/或光学成像组件移动,将虚像调节至目标位置。
在一种可能的实现方式中,获取模块用于接收头戴式显示设备发送的光学成像组件和/或显示组件的位置;确定模块用于根据光学成像组件和/或显示组件的位置,确定第一距离。
在一种可能的实现方式中,获取模块用于获取头戴式显示设备中的变焦透镜的第一焦距;确定模块用于根据第一焦距和目标位置,确定变焦透镜的待调节焦距;控制模块用于根据待调节焦距生成第二控制指令,并向头戴式显示设备发送第二控制指令,第二控制指令用于控制施加于变焦透镜的电压信号或电流信号,调节变焦透镜的焦距,将虚像调节至目标位置。
上述第二方面至第十一方面中任一方面可以达到的技术效果可以参照上述第一方面中有益效果的描述,此处不再重复赘述。
第十二方面,本申请提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令被头戴式显示设备执行时,使得该头戴式显示设备执行上述第二方面或第二方面的任意可能的实现方式中的方法;或者使得该头戴式显示设备执行上述第四方面或第四方面的任意可能的实现方式中的方法。
第十三方面,本申请提供一种计算机可读存储介质,计算机可读存储介质中存储有计 算机程序或指令,当计算机程序或指令被终端设备执行时,使得该终端设备执行上述第三方面或第三方面的任意可能的实现方式中的方法;或者使得该终端设备执行上述第五方面或第五方面的任意可能的实现方式中的方法。
第十四方面,本申请提供一种计算机程序产品,该计算机程序产品包括计算机程序或指令,当该计算机程序或指令被终端设备执行时,实现上述第二方面或第二方面的任意可能的实现方式中的方法,或者实现上述第四方面或第四方面的任意可能的实现方式中的方法。
第十五方面,本申请提供一种计算机程序产品,该计算机程序产品包括计算机程序或指令,当该计算机程序或指令被终端设备执行时,实现上述第三方面或第三方面的任意可能的实现方式中的方法,或者实现上述第五方面或第五方面的任意可能的实现方式中的方法。
图1a为本申请提供的一种物距像距关系示意图;
图1b为本申请提供的一种三角测距激光雷达的光路图的示意图;
图1c为本申请提供的一种辐辏调节冲突的原理示意图;
图2a为本申请提供的一种应用场景示意图;
图2b为本申请提供的一种应用场景与虚像的目标位置关系示意图;
图3为本申请提供的一种显示模组的结构示意图;
图4a为本申请提供的一种第一界面的示意图;
图4b为本申请提供的一种应用场景的设置界面的示意图;
图4c为本申请提供的一种第三界面的示意图;
图4d为本申请提供的一种第二界面的示意图;
图4e为本申请提供的另一种第二界面的示意图;
图4f为本申请提供的一种第四界面的示意图;
图5为本申请提供的一种卡环固定第一透镜的示意图;
图6a为本申请提供的一种光学成像组件的结构示意图;
图6b为本申请提供的一种光学成像组件的光路示意图;
图6c为本申请提供的一种镜筒固定光学成像组件的示意图;
图6d为本申请提供的一种半透半反镜的结构示意图;
图7为本申请提供的另一种光学成像组件的结构示意图;
图8为本申请提供的又一种光学成像组件的结构示意图;
图9为本申请提供的又一种光学成像组件的结构示意图;
图10为本申请提供的又一种光学成像组件的结构示意图;
图11为本申请提供的又一种光学成像组件的结构示意图;
图12a为本申请提供的一种液晶透镜的结构示意图;
图12b为本申请提供的一种液晶透镜的结构示意图;
图12c为本申请提供的一种液晶透镜的结构示意图;
图13a为本申请提供的一种改变入射光的偏振态的示意图;
图13b为本申请提供的一种电控扭曲液晶改变入射光的偏振态的结构示意图;
图14a为本申请提供的一种液体透镜的结构示意图;
图14b为本申请提供的一种液体透镜的结构示意图;
图15为本申请提供的一种可变形反射镜的结构示意图;
图16a为本申请提供的一种移动显示组件且光学成像组件不动的示意图;
图16b为本申请提供的一种显示组件不动且移动光学成像组件的示意图;
图16c为本申请提供的一种移动显示组件且移动光学成像组件的示意图;
图17a为本申请提供的一种显示模组的结构示意图;
图17b为本申请提供的一种显示模组的结构示意图;
图17c为本申请提供的一种显示模组的结构示意图;
图17d为本申请提供的一种光学成像组件移动距离与虚像移动距离的关系示意图;
图18a为本申请提供的一种第一旋钮的结构示意图;
图18b为本申请提供的一种凸轮调焦机构的结构的示意图;
图18c为本申请提供的一种第二旋钮的结构示意图;
图19为本申请提供的又一种显示组件的结构示意图;
图20为本申请提供的又一种虚像位置调节方法流程示意图;
图21为本申请提供的又一种虚像位置调节方法流程示意图;
图22为本申请提供的又一种虚像位置调节方法流程示意图;
图23为本申请提供的又一种虚像位置调节方法流程示意图;
图24为本申请提供的又一种虚像位置调节方法流程示意图;
图25为本申请提供的又一种虚像位置调节方法流程示意图;
图26为本申请提供的又一种虚像位置调节方法流程示意图;
图27为本申请提供的一种虚像的位置调节装置的结构示意图;
图28为本申请提供的一种虚像的位置调节装置的结构示意图;
图29为本申请提供的一种终端设备的结构示意图;
图30为本申请提供的一种终端设备的结构示意图。
下面将结合附图,对本申请实施例进行详细描述。
以下,对本申请中的部分用语进行解释说明。需要说明的是,这些解释是为了便于本领域技术人员理解,并不是对本申请所要求的保护范围构成限定。
一、近眼显示
在距离眼睛近处显示,是AR显示设备或VR显示设备的一种显示方式。
二、虚像的位置
物体发出的光线经折射或反射后,光路改变,人眼看到折射或反射后的光线,会感觉光线来自其反向延长线交点的位置,反向延长线相交而成的像就是虚像,虚像所在的位置称为虚像的位置,虚像所在的平面称为虚像面。虚像所在的位置与人眼之间的距离即为聚焦深度。应理解,虚像所在的位置并没有实际物体,也没有光线会聚。例如,平面镜、眼镜所成的像均是虚像。
三、多焦面显示
根据虚拟物体(即虚像)在虚拟空间中的远近位置,将其对应投影至两个或两个以上的位置,可以采用时分复用的方式显示。
四、自适应焦面显示
自适应焦面显示是指可自动模拟人眼在观察远近不同物体时发生的屈光调节和双目辐辏调节过程。
五、眼动追踪设备
眼动追踪是指通过测量眼睛的注视点的位置或者眼球相对头部的运动而实现对眼球运动的追踪。眼动追踪设备是一种能够跟踪测量眼球位置及眼球运动信息的一种设备。
六、老花
老花是指眼球晶状体逐渐硬化、增厚,而且眼部肌肉的调节能力也随之减退,导致变焦能力降低。通常老花最多为300~350度。
七、散光
散光是眼睛的一种屈光不正的表现,与角膜的弧度有关。眼角膜在某一角度区域的弧度较弯,而另一些角度区域则较扁平,不是一个圆对称的曲面。
八、半透半反镜(semi-transparent and semi-reflective mirror)
半透半反镜又可称为分光镜、分光片、或半反半透镜。是一种在光学玻璃上镀制半反射膜,或者在透镜的一个光学面上镀制半透半反膜,以改变入射光束原来的透射和反射的比例的光学元件。通过镀制膜层可以增透,加大光强;也可以增反,减少光强。例如半透半反镜可以按照50:50的比例透射和反射入射光。即,半透半反镜的透射率和反射率各50%。当入射光经过半透半反镜后,其透过的光强和被反射回来的光强各占50%。当然,可根据具体的需求选择反射率和透射率,例如反射率可以高于50%,透射率低于50%;或者也可以反射率低于50%,透射率高于50%。
九、光焦度(focal power)
光焦度等于像方光束会聚度与物方光束会聚度之差,它表征光学系统偏折光线的能力。光焦度常用字母φ表示,折射球面光焦度φ=(n'-n)/r=n'/p'-n/q,其中n'为像方折射率,n为物方折射率,r为球面半径,p'为像距,q为物距。一般光焦度表示为像方焦距的倒数(近似认为空气的折射率为1)。光焦度的单位为屈光度(D),1屈光度(D)=1m⁻¹。例如,眼镜的度数=屈光度×100。
十、1/4波片
1/4波片为一种双折射光学器件,包括快轴和慢轴两个光轴,可用于使得沿快轴和慢轴的线偏振光透过该1/4波片后产生π/2的相位差。
十一、反射偏振片(reflective polarizer,RP)
反射偏振片可以用于透射一个偏振态的光且可反射另一偏振态的光。例如,可以是多层介质膜的偏振片或者金属线栅的偏振片。
前文介绍了本申请所涉及到的一些用语,下面介绍本申请涉及的技术特征。需要说明的是,这些解释是为了便于本领域技术人员理解,并不是对本申请所要求的保护范围构成限定。
下面分别对本申请涉及到的调焦原理、三角测距激光雷达原理及辐辏调节冲突(vergence and accommodation conflict,VAC)进行介绍。
如图1a所示,光学成像组件的中心与显示屏的中心之间的距离称为物距p,光学成像组件的中心与虚像之间的距离称为像距q,光学成像组件的等效焦距为f,其中,物距p、像距q和等效焦距f之间满足下述公式(1)。
当物距p和/或等效焦距f发生变化时,均可以改变像距q,Δp为物距p的变化量,Δq为像距q的变化量,对公式(1)的两边取微分,可得到下述公式(2)。
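原文中公式(1)与公式(2)为图片,未随文本保留。以下给出一种基于常见放大镜(虚像)成像符号约定的重构(p、q、f 均取正值;该符号约定为假设,仅供参考):

```latex
% 公式(1):物距 p、像距 q 与等效焦距 f 的关系(假设的符号约定)
\frac{1}{p} - \frac{1}{q} = \frac{1}{f} \tag{1}

% 公式(2):对公式(1)两边取微分,得到像距变化量与
% 物距变化量 \Delta p、焦距变化量 \Delta f 的关系
\Delta q = \frac{q^{2}}{p^{2}}\,\Delta p - \frac{q^{2}}{f^{2}}\,\Delta f \tag{2}
```

由公式(2)可见,物距或等效焦距的微小变化均会按 q²/p² 或 q²/f² 的比例放大为像距(即虚像位置)的变化,这与正文"当物距p和/或等效焦距f发生变化时,均可以改变像距q"的描述一致。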
三角测距激光雷达是基于测量光的射出路径与反射路径构成的三角形,并运用三角公式推导被测目标的距离。其工作原理是:通过激光发射器发出一束激光信号,该激光经被测目标反射后由激光接收器接收成像在位置传感器(例如感光耦合组件(charge-coupled device,CCD))中,由于激光发射器和接收器间隔了一段距离,所以依照光学路径,不同距离的物体将会成像在CCD上不同的位置,再按照三角公式进行计算,推导出被测目标的距离,参见下面图1b。激光器1发射的激光束被透镜2聚焦到被测目标6,经被测目标6反射后的反射光被透镜3会聚至CCD阵列4上,信号处理器5通过三角函数计算CCD阵列4上的光斑的位移大小,从而可得到被测目标移动的距离。
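基于上述相似三角形关系,可给出一个极简的距离推算示例(接收透镜焦距、基线长度与光斑位移的取值均为假设的示意值):

```python
def triangulate_distance_mm(focal_mm: float, baseline_mm: float, spot_offset_mm: float) -> float:
    """三角测距:由接收透镜焦距 focal_mm、发射器与接收器之间的基线 baseline_mm、
    以及光斑在 CCD 上的位移 spot_offset_mm,按相似三角形推算目标距离(毫米)。"""
    return focal_mm * baseline_mm / spot_offset_mm

# 示意:焦距 4mm、基线 20mm、光斑位移 0.1mm 时,目标距离约为 800mm;
# 光斑位移越小,对应的目标距离越远(反比关系)
```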
如图1c所示,示例性地示出了辐辏调节冲突的原理。辐辏调节冲突是由于人眼在观察三维(three dimensional,3D)内容时,双目的正确的晶状体聚焦深度始终固定在屏幕上,而双目辐辏则会聚在由视差定义的目标距离上,可能位于屏幕前方,也可能位于屏幕后方,由于聚焦深度与辐辏深度的不匹配造成辐辏调节冲突。
基于上述内容,下面对本申请的显示模组所适用的可能的场景进行说明。
本申请中,显示模组可应用于近眼显示(near eye display,NED)设备,例如VR眼镜,或者VR头盔等。例如,用户佩戴NED设备(请参阅图2a)进行游戏、观看电影(或电视剧)、参加虚拟会议、参加视频教育、或视频购物等。
不同预设场景类型中,虚像的目标位置可能是不同的。如图2b所示,预设场景类型1对应的虚像的目标位置为位置1,预设场景类型2对应的虚像的目标位置为位置2,预设场景类型3对应的虚像的目标位置为位置3。当虚像处于目标位置时,人眼的聚焦深度与双目的辐辏深度基本一致,从而有助于减小辐辏调节冲突。也就是说,为了尽可能减小辐辏调节冲突,需要调节虚像所处的位置。也可以理解为,在不同预设场景下,即为多焦面显示。
鉴于此,本申请提供一种显示模组,该显示模组可对虚像的位置进行精确调节,以使得虚像形成于目标位置,从而有助于减小辐辏调节冲突。
下面结合附图3至附图19,对本申请提出的显示模组进行具体阐述。
如图3所示,为本申请提供的一种显示模组的结构示意图。该显示模组可包括显示组件301、光学成像组件302和虚像位置调节组件303;显示组件301用于显示图像;所述光学成像组件302用于将所述图像形成虚像;虚像位置调节组件303用于调节所述光学成像组件302和显示组件301中的至少一个,将所述虚像调节至目标位置,所述虚像的目标位置与所述图像所属的预设场景类型有关。
其中,图像所属的预设场景类型可以是图像的内容所属的预设场景类型。也就是说, 可以针对图像内容所属的预设场景类型设置虚像的位置。
或者,图像所属的预设场景类型也可以是图像对应的对象所属的预设场景类型,图像对应的应用可以理解为该图像为进入应用所显示的图像。进一步,针对同一对象的不同图像内容,也可以设置不同的虚像位置。也可以理解为,选择某一对象后,显示该对象的图像内容后,可进一步确定图像内容所属的预设场景类型。例如,选择游戏应用后,游戏应用中又设置了不同图像内容所属的预设场景类型,因此,还可进一步确定进入该游戏应用后,图像内容所属的预设场景类型。
基于该方案,通过虚像位置调节组件调节光学成像组件和/或显示组件,从而可精确的将不同预设场景类型下的虚像调节至对应的目标位置,可以使得用户清晰的观看到显示模组显示的图像。基于不同预设场景类型自动调节虚像所处的位置(即显示模组可以进行自适应焦面显示),从而有助于减小辐辏调节冲突。
在一种可能的实现方式中,当所述图像属于不同的预设场景类型时,所述显示模组呈现虚像的目标位置不同。应理解,图像属于不同的预设场景类型时,显示模组呈现虚像的目标位置也可以相同。
示例性地,预设场景类型例如为办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。进一步,可选地,预设场景类型为所述办公场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~10]屈光度D;预设场景类型为所述阅读场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~10]屈光度D;预设场景类型为所述会议场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~7.1]屈光度D;预设场景类型为所述交互式游戏场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~7.5]屈光度D;预设场景类型为所述视频场景类型时,所述头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
此处,预设场景类型可以是根据一定的规则预先划分出来的。例如,可以将某些图像的内容按规则分为一类预设场景;或者,也可以将某些对象(例如应用)按规则分为一类预设场景,例如各类视频应用都可以归类为视频场景类型,各类购物应用都可以归类为购物场景类型。
需要说明的是,虚像处于目标位置时,处于目标位置的虚像的焦距深度与辐辏深度的差值的绝对值小于阈值。也可以理解为,虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。进一步,可选地,所述阈值范围为[0屈光度D,1屈光度D]。应理解,阈值可以根据人眼对VAC的容忍度确定。
下面对图3所示的各个功能组件和结构分别进行介绍说明,以给出示例性的具体实现方案。为方便说明,下文中的显示组件、光学成像组件、虚像位置调节组件均未加标识。
一、显示组件
在一种可能的实现方式中,显示组件作为图像源,可为显示模组提供显示内容,例如可提供显示3D内容、交互画面等。也就是说,显示组件可以对入射的光进行空间强度调制以产生携带有图像信息的光。该携带有图像信息的光可经光学成像组件传播(例如折射)至人眼成像;人眼看到折射后的光,会感觉光来自其反向延长线交点的位置,反向延长线 相交而成的像即为虚像。
示例性地,显示组件可以是液晶显示器(liquid crystal display,LCD),或者有机发光二极管(organic light emitting diode,OLED),或者微型发光二极管(micro light emitting diode,micro-LED),或者有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),或者柔性发光二极管(flex light-emitting diode,FLED),或者量子点发光二极管(quantum dot light emitting diodes,QLED)。OLED具有较高的发光效率以及较高的对比度;mini-LED显示屏具有较高的发光亮度,可应用于需要较强的发光亮度的场景。
示例性地,显示组件也可以是反射式显示屏。例如,硅基液晶(liquid crystal on silicon,LCOS)显示屏,或者基于数字微镜器件(digital micro-mirror device,DMD)的反射式显示屏。其中,LCOS和DMD由于是反射式结构,其分辨率或开口率较高。
在一种可能的实现方式中,显示组件还可用于显示第一界面,第一界面可包括多个对象。进一步,可选地,对象包括但不限于应用。
图4a为本申请示例性的示出的一种第一界面的示意图。该第一界面400显示的多个对象以视频、会议、网页和游戏四个应用的图标为例。该第一界面也可为安卓系统桌面启动器(Launcher)界面。
进一步,可选地,该第一界面400还可包括用于选择对象的光标,可参见上述图4a。用户可通过操作光标来选择对象。例如,可以将光标移动至所要选择的第一对象,单击(或双击)触摸手柄或其他独立按键,从而可选择出第一对象。相应地,显示模组检测到用户的手指(或触控笔等)选择的第一对象后,响应于选择第一对象的操作,可触发虚像位置调节组件对虚像的位置进行调节。
需要说明的是,还可以通过其它方式选择对象。示例的,可以响应于用户的快捷手势操作(例如,三指上滑、连续两次指关节敲击显示屏等)、或者语音指令等操作,本申请对此不做限定。
在一种可能的实现方式中,显示模组检测到选择第一对象后,需进一步获取第一对象对应的目标位置。如下示例性地的示出了三种确定目标位置的实现方式。需要说明的是,这三种实现方式可以是由控制组件执行的。
实现方式一,根据获取的预设场景类型与虚像的位置的对应关系,确定第一对象所属的预设场景类型对应的目标位置。
基于该实现方式一,不同的预设场景均有适合的虚像的位置(即目标位置),虚像处于目标位置时,人眼可以清晰的看到显示模组所显示的图像。
在一种可能的实现方式中,可获取M个预设场景类型及M个预设场景类型分别对应的虚像的位置;统计M个预设场景类型与M个预设场景类型分别对应的虚像的位置的分布关系;根据分布关系,确定预设场景类型与虚像的位置的对应关系,M为大于1的整数。进一步,可选地,分布关系可以服从高斯分布,虚像的目标位置可以是高斯分布的期望值。
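上述"统计分布关系并确定对应关系"的过程可用如下极简示例表示(场景名与样本数值均为假设的示意值;若分布服从高斯分布,样本均值即可作为期望的虚像目标位置):

```python
from collections import defaultdict
from statistics import mean

def build_scene_position_map(samples):
    """samples: (预设场景类型, 虚像位置/屈光度) 的样本序列。
    按场景类型分组统计,返回每种场景类型对应的虚像目标位置(取样本均值)。"""
    by_scene = defaultdict(list)
    for scene, diopter in samples:
        by_scene[scene].append(diopter)
    return {scene: mean(values) for scene, values in by_scene.items()}
```

实际实现中,样本可来自用户输入的虚像位置,或根据图像双目视差推算的虚像位置;此处仅示意"由样本分布得到预设场景类型与虚像位置的对应关系"这一步骤。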
在另一种可能的实现方式中,可获取M个预设场景类型及M个预设场景类型分别对应的虚像的位置;将M个预设场景类型与M个预设场景类型分别对应的虚像的位置的输入人工智能算法,从而可得到预设场景类型与虚像的位置的对应关系。
进一步,可选地,可接收用户输入的M个预设场景对应的虚像的位置。或者,获取M个预设场景中的图像的双目视差,分别根据所述M个预设场景中的图像的双目视差,确定 所述M个预设场景对应的虚像的位置。例如,根据两幅图像的内容中相同元素的位置,计算出图像的深度,从而确定出虚像的位置。
示例性地,可以由开发者或显示模组厂商获取预设场景类型与虚像的位置的对应关系。也可以理解为,预设场景类型与虚像的位置的对应关系可以是开发者或显示模组厂商设置的。
基于上述实现方式一,可将获取的预设场景类型与虚像的位置的对应关系预先存储于显示模组或显示模组之外的存储器。应理解,对应关系可以以表格的形式存储。表1示例性地的示出了一种预设场景类型与虚像的位置的对应关系。表1中虚像的目标距离范围是指头戴式显示设备呈现虚像的位置与所述光学成像组件之间的距离范围,虚像的最佳目标距离是指头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的最佳距离。
表1 预设场景类型与虚像的位置的对应关系
预设场景类型 | 虚像的目标距离范围 | 虚像的最佳目标距离 |
办公 | [0.1~10]屈光度D | 1D(即1m) |
阅读 | [0.5~10]屈光度D | 2D(即0.5m) |
会议 | [0.1~7.1]屈光度D | 0.583D(即1.714m) |
交互游戏 | [0.5~7.5]屈光度D | 1D(即1m) |
视频/音乐/直播 | [0.1~7]屈光度D | 0.5D(即2m) |
如表1所示,办公预设场景类型的虚像目标距离范围为[0.1~10]屈光度D,最佳目标距离为1D(即1m);阅读预设场景类型的虚像目标距离范围为[0.5~10]屈光度D,最佳目标距离为2D(即0.5m);会议预设场景类型的虚像目标距离范围为[0.1~7.1]屈光度D,最佳目标距离为0.583D(即1.714m);交互游戏预设场景类型的虚像目标距离范围为[0.5~7.5]屈光度D,最佳目标距离为1D(即1m);视频/音乐/直播等预设场景类型的虚像目标距离范围为[0.1~7]屈光度D,最佳目标距离为0.5D(即2m)。也可以理解为,不同的预设场景类型都有适合的虚像的位置范围。
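表1的对应关系可以用一个简单的查找结构表示(字段组织方式为示意,数值取自表1;屈光度与米的换算为:距离(m)=1/屈光度):

```python
# 每项为 (目标距离范围下限/D, 目标距离范围上限/D, 最佳目标距离/D),数值取自表1
SCENE_TARGETS_D = {
    "办公": (0.1, 10.0, 1.0),
    "阅读": (0.5, 10.0, 2.0),
    "会议": (0.1, 7.1, 0.583),
    "交互游戏": (0.5, 7.5, 1.0),
    "视频/音乐/直播": (0.1, 7.0, 0.5),
}

def best_target_distance_m(scene: str) -> float:
    """返回某预设场景类型的最佳目标距离(米),即最佳屈光度的倒数。"""
    return 1.0 / SCENE_TARGETS_D[scene][2]
```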
进一步,可选地,当所述图像所属的预设场景类型为所述会议场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~3.0]屈光度D;或者,当所述图像所属的预设场景类型为所述交互式游戏场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[3.0~5.0]屈光度D;或者,当所述图像所属的预设场景类型为所述视频场景类型时,所述头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离为(5.0~7]屈光度D。
在一种可能的实现方式中,视频场景类型对应的虚像的目标位置与光学成像组件之间的距离大于会议场景类型对应的虚像的目标位置与光学成像组件之间的距离;和/或,会议场景类型对应的虚像的目标位置与光学成像组件之间的距离大于阅读场景类型对应的虚像的目标位置与光学成像组件之间的距离。
在另一种可能的实现方式中,视频场景类型对应的虚像的目标位置与光学成像组件之间的距离大于会议场景类型对应的虚像的目标位置与光学成像组件之间的距离;和/或,会议场景类型对应的虚像的目标位置与光学成像组件之间的距离大于办公场景类型对应的虚像的目标位置与光学成像组件之间的距离;和/或,办公场景类型对应的虚像的目标位置与光学成像组件之间的距离大于阅读场景类型对应的虚像的目标位置与光学成像组件之间的距离。
需要说明的是,办公场景类型对应的虚像的目标位置与光学成像组件之间的距离与交互游戏场景类型对应的虚像的目标位置与光学成像组件之间的距离较接近。
实现方式二,用户自定义第一对象对应的虚像的目标位置。
在一种可能的实现方式中,用户可以通过语音或者虚拟按键等交互方式输入自定义的虚像的目标位置。也可以理解为,用户在选择第一对象后,还需要输入自定义的该第一对象对应的虚像的目标位置。结合上述图4a,用户在选择第一对象后,可进入第一对象的设置界面500,请参见图4b,该设置界面500可用于对第一对象的特性(例如虚像的目标位置、最低帧率、最低分辨率等)进行设置。用户可在该设置界面500中选择“最佳深度(即虚像的目标位置)”特性,然后弹出输入虚像的目标位置的对话框,用户可在弹出的对话框中通过虚拟键盘或者语音等方式输入自定义的虚像的目标位置,并确认。或者,用户可在该设置界面500中选择“最佳深度(即虚像的目标位置)”特性,进入第二界面600(请参阅图4c),用户可在第二界面600中通过虚拟键盘或者语音等方式输入自定义的虚像的目标位置,并确认。
实现方式三,根据眼动追踪组件确定第一对象对应的虚像的目标位置。
在一种可能的实现方式中,所述显示模组还可包括眼动追踪组件;所述眼动追踪组件用于确定双目注视所述图像的会聚深度;所述虚像位置调节组件用于根据所述会聚深度,驱动所述光学成像组件和/或所述显示组件移动,将所述虚像调节至所述目标位置。
示例性地,眼动追踪组件可用于确定双目注视选择第一对象后触发显示的图像的会聚深度,可将该会聚深度的位置确定为虚像的目标位置。
在一种可能的实现方式中,显示组件还可用于显示第三界面700,第三界面700可用于输入双目视力参数。图4d为本申请示例性的示出了一种第三界面的示意图。该第三界面700以双目视力参数为近视度数为例。该第三界面700可以显示左眼近视度数的选项框和右眼近视度数的选项框,可通过下拉左眼近视度数的选项框和右眼近视度数的选项框选择双目视力参数。
参见图4e,为本申请示例性的示出了另一种第三界面的示意图。该第三界面700可以显示虚拟键盘、左眼近视度数的输入框和右眼近视度数的输入框。用户可通过虚拟键盘在左眼视力框中输入左眼视力参数,通过虚拟键盘在右眼视力框中输入右眼视力参数。
相应地,显示模组检测到双目视力参数后,可触发虚像位置调节组件对虚像的位置进行相应的调节。示例性地,显示模组可根据视力参数与虚像的位置的对应关系,相应地确定虚像的位置。应理解,可以在存储器中预先存储双目视力参数与虚像的位置的对应关系,例如,双目300度对应一个虚像的位置,双目350度对应另一个虚像的位置,左眼300度右眼400度对应又一个位置。
需要说明的是,显示组件在显示第三界面700之前,可先显示第四界面800,该第四界面800可包括视力参数类型选择框,如图4f所示,其中,视力参数类型包括但不限于近视度数、散光度数、老花度数、或远视度数。用户可选择自身需要的视力参数类型。另外,视力参数通常是首次使用该显示模组时需要设置的。
需要说明的是,在显示组件显示图像之前,还需要对画面进行渲染操作,例如可以是控制组件进行渲染的,关于控制组件可参见下述相关介绍,此处不再重复赘述。
二、光学成像组件
在一种可能的实现方式中,光学成像组件可用于将显示组件显示的图像在虚拟空间形成虚像,并将显示组件显示的图像投射至人眼。
如下,示例性示出十种光学成像组件的结构。
结构一,光学成像组件为第一透镜。
在一种可能的实现方式中,第一透镜可以是单片的球面透镜或非球面透镜,也可以是多片球面或非球面透镜的组合。通过多片球面或非球面透镜的组合可以提高系统的成像质量,降低系统的像差。其中,球面透镜和非球面透镜可以是菲涅尔透镜,菲涅尔透镜可以降低模组的体积以及质量。
进一步,可选地,球面透镜或非球面透镜的材质可以是玻璃或者树脂,树脂材料可以减轻模组的质量,玻璃材料有较高的成像质量。
基于该结构一,第一透镜可通过卡环固定。参阅图5,为本申请提供的一种卡环固定第一透镜的示意图。卡环包括至少一个开口,第一透镜的一端从卡环的开口处嵌入卡环内。其中,卡环第一面为平面、且能够对接收到的激光束进行散射,即卡环的第一面为平面且表面具有一定的散射特性。卡环的第一面与三角测距激光雷达发射的激光束的方向相对(可参见下述图17a)。如此,有助于提高光束的利用率。
结构二,光学成像组件包括折叠光路光学组件。
如图6a所示,为本申请提供的一种光学成像组件的结构示意图。该光学成像组件沿第一半透半反镜的主光轴的方向依次包括偏振片、第一1/4波片、第一半透半反镜、第二1/4波片和反射偏振片。基于图6a的光学成像组件,光路可参见图6b。偏振片用于将来自显示组件的形成图像的光过滤为同一偏振态(称为第一线偏光),例如过滤为水平线偏振光或竖直线偏振光,其可以是吸收式的或者反射式的。第一线偏光例如可以是P偏振光或S偏振光。第一1/4波片用于将来自偏振片的第一线偏光转化为第一圆偏光,并将第一圆偏光传输至第一半透半反镜。第一半透半反镜用于将来自第一1/4波片的第一圆偏光透射至第二1/4波片;第二1/4波片用于将接收到的第一圆偏光转换为第二线偏光,第二线偏光的偏振方向与第一线偏光的偏振方向相同;反射偏振片用于将来自第二1/4波片的第二线偏光反射至第二1/4波片;第二1/4波片还用于将接收到的第二线偏光转换为第二圆偏光,第二圆偏光的旋转方向与第一圆偏光的旋转方向相同,图6b均以左旋圆偏光为例;第一半透半反镜还用于将来自第二1/4波片的第二圆偏光反射为第三圆偏光,第三圆偏光的旋转方向与第二圆偏光的旋转方向相反;第二1/4波片还用于将来自第一半透半反镜的第三圆偏光转换为第三线偏光;反射偏振片还用于将第三线偏光透射至人眼形成图像。
进一步,可选地,该折叠光路光学组件中还可包括一个或多个像差补偿透镜。这些像差补偿透镜可用于像差补偿。例如,可以用于补偿球面或者非球面透镜成像过程中的球差、慧差、像散、畸变及色差等。这些像差补偿透镜可以位于折叠光路的任意位置。例如,这些像差补偿透镜可以位于第一半透半反镜和反射偏振片之间。图6a以包括像差补偿透镜1和像差补偿透镜2为例,像差补偿透镜1位于偏振片和显示组件之间,像差补偿透镜2位于反射偏振片与人眼之间。其中,像差补偿透镜可以是单片的球面透镜或非球面透镜,也可以是多片球面或非球面透镜的组合,其中,多片球面或非球面透镜的组合可以提高系统的成像质量,降低系统的像差。像差补偿透镜的材料可为光学树脂,像差补偿透镜1和像差补偿透镜2的材料可以相同,也可以不相同。
基于该结构二,光学成像组件可以固定于镜筒中,请参阅图6c。应理解,上述结构一 中的光学成像组件也可以固定于该镜筒中。
通过结构二的光学成像组件,由于可以对光路进行折叠,因此,有助于缩短成像光路,从而有助于减小光学成像组件的体积,进而有助于减小包括该光学成像组件的显示模组的体积。
如图6d所示,为本申请提供的一种半透半反镜的结构示意图。该半透半反镜的透过面的最近似球面半径为r1,r1为负数表示凹面,r1为正数表示凸面;半透半反镜的半透半反面的最近似球面半径为r2,r2为负数表示凸面,r2为正数表示凹面,该半透半反镜的材料的折射率为n。
结构三,光学成像组件包括第二半透半反镜和第二透镜。
基于该结构三,显示组件可包括第一显示屏和第二显示屏,第一显示屏的分辨率高于第二显示屏的分辨率。
如图7所示,为本申请提供的另一种光学成像组件的结构示意图。该光学成像组件包括第二半透半反镜和第二透镜。第一显示屏用于显示图像的中央区域;第二显示屏用于显示图像的边缘区域;第二半透半反镜用于将来自第一显示屏的图像的中央区域反射至第二透镜,并将来自第二显示屏的图像的边缘区域透射至第二透镜;第二透镜用于将来自第二半透半反镜的图像的中央区域和图像的边缘区域合为图像,投射至人眼,并在目标位置形成完整虚像。
通过该光学成像组件可以模拟人眼的真实情况,利用较少的像素数实现了近似人眼的真实感受。应理解,人眼在中央凹区域(~3°)有1’的高分辨率,周边视野的分辨率会下降到10’左右。
进一步,可选地,该光学成像组件还可包括第三透镜和第四透镜。其中,第三透镜用于将来自第一显示屏的图像的中央区域进行会聚,并将会聚后的图像的中央区域传播至第二半透半反镜;第四透镜用于将来自第一显示屏的图像的边缘区域进行会聚,并将会聚后的图像的边缘区域传播至第二半透半反镜。
基于该结构三,光学成像组件可以固定于镜筒中,通过凸轮或者丝杠等部件与虚像位置调节组件连接。
结构四,光学成像组件包括多通道透镜。
如图8所示,为本申请提供的又一种光学成像组件的结构示意图。该光学成像组件为多通道透镜。多通道透镜由M个设置有反射面的自由曲面透镜依次连接形成,M为大于1的整数。图8是以包括两个通道(即通道1和通道2)为例示出的。多通道透镜中的每个通道可对应一个较小的视场角(field of view,FOV)。也就是说,多通道透镜可以将大FOV分解为多个较小的FOV的拼合,一个较小FOV即对应一个通道。
由于大FOV的边缘FOV成像质量较难控制,通常需要多个透镜的组合来矫正边缘的FOV的像差,通过该光学成像组件,由于多通道透镜可以将较大的FOV分解为多个较小的FOV,从而有助于提高边缘FOV的成像质量,而且,所需的光学成像透镜的口径可以变小,且不需要用于矫正边缘的FOV的像差的透镜,因此,有助于减小光学成像组件的体积。
基于该结构四,光学成像组件可以固定于镜筒中,通过凸轮或者丝杠等部件与虚像位置调节组件相连接。
结构五,光学成像组件包括微透镜阵列(Micro lens array,MLA)。
如图9所示,为本申请提供的又一种光学成像组件的结构示意图。该光学成像组件以包括两个微透镜阵列为例。微透镜阵列中的每个微透镜可对应一个较小的FOV,每个较小的FOV可通过一个微透镜成像。也就是说,微透镜阵列可将大FOV分解为多个较小的FOV的拼合。
由于大FOV的边缘FOV成像质量较难控制,通常需要多个透镜的组合来矫正边缘的FOV的像差,通过该光学成像组件,由于微透镜阵列可将较大FOV分解为多个较小的FOV,从而有助于提高边缘FOV的成像质量,而且,通过该光学成像组件,所需的光学成像透镜的口径可以变小,且不需要用于矫正边缘的FOV的像差的透镜,因此,有助于减小光学成像组件的体积。
在一种可能的实现方式中,该显示模组可包括两个微透镜阵列,每个微透镜阵列对应一个显示屏,可参阅上述图9。
基于该结构五,光学成像组件可以固定于镜筒中,通过凸轮或者丝杠等部件与虚像位置调节组件连接。
结构六、光学成像组件包括阿尔瓦雷斯镜片(Alvarez lenses)。
如图10所示,为本申请提供的又一种光学成像组件的结构示意图。该光学成像组件包括阿尔瓦雷斯镜片。该阿尔瓦雷斯镜片包括两个或两个以上的折射透镜(或称为自由曲面透镜)。其中,每两个折射透镜为一组,可称为折射透镜组。图10以阿尔瓦雷斯镜片包括折射透镜1和折射透镜2为例进行示例。
结构七、光学成像组件包括莫里透镜(Moiré lens)。
如图11所示,为本申请提供的又一种光学成像组件的结构示意图。该光学成像组件包括莫里透镜,该莫里透镜可包括两个或两个以上的衍射光学元件级联。图11中的莫里透镜以衍射光学元件1和衍射光学元件2级联为例。
结构八、光学成像组件为液晶透镜。
如图12a所示,为本申请提供的一种液晶透镜的结构示意图。该液晶透镜为普通的液晶透镜,可以通过改变外加的电场的形式来改变液晶分子长轴的方向而产生光学各向异性和介质各向异性,从而可获得可调谐的折射率,即可实现改变液晶透镜的等效相位,从而可改变液晶透镜的焦距。其中,液晶透镜的等效相位可以是通过施加电压信号或电流信号实现普通透镜的相位,也可以是菲涅尔透镜的相位。
参阅图12b,为本申请提供的另一种液晶透镜的结构示意图。该液晶透镜也可以是反射式的硅基液晶(liquid crystal on silicon,LCOS),可以通过改变外加电压信号或电流信号改变液晶分子长轴的方向,从而可改变光经过其的折射率,从而可改变液晶透镜的焦距。
参阅图12c,为本申请提供的又一种液晶透镜的结构示意图。液晶透镜也可以是液晶几何相位(Pancharatnam-Berry,PB)透镜,其基于几何相位实现透镜功能。可通过改变液晶PB透镜中的液晶分子长轴的方向或者射入液晶PB透镜的入射光的偏振态,来改变液晶PB透镜的焦距,实现变焦。
进一步,可选地,液晶PB透镜可以分为主动式和被动式两种类型。主动式液晶PB透镜,主要由液晶态的液晶材料制作,液晶态的液晶材料具有流动性,可以通过施加电压信号或电流信号来改变液晶分子长轴的方向以实现变焦。
被动式液晶PB透镜有较好的热稳定性以及较高的分辨率。被动式液晶PB透镜主要由液晶高分子材料组成,可以通过曝光等方式聚合后形成固态的聚合物(Polymer),可以通过改变入射光的偏振态来实现变焦。例如,平行光入射,左旋圆偏振光的焦距为1m,右旋圆偏振光的焦距为-1m,可参见图13a。改变入射光的偏振态可以利用电控半波片或者电控的扭曲向列相液晶(twisted nematic liquid crystal,TNLC)实现,可参见图13b。由于液晶PB透镜的变焦能力是离散的,因此,使用液晶PB透镜可实现离散的虚像调节,可以通过多片液晶PB透镜堆叠的方式(如图13b所示)实现近似连续的虚像调节。示例性地,若虚像的调节精度为0.25D,虚像的调节范围(即调节能力)为0~4D,则需要4D/0.25D=16个虚像的位置,可通过4个被动式液晶PB透镜和4个TNLC的组合实现,其中,TNLC用于进行偏振态的调节,一个TNLC可以调节出两个偏振态(如图13a)。
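上述“调节范围/调节精度→虚像位置数→PB透镜片数”的计算可用如下Python片段示意(仅为说明性示例,函数名为假设命名;每片被动式PB透镜配合一个TNLC构成一个二元级,k级可组合出2^k个离散虚像位置):

```python
import math

def pb_stage_count(adjust_range_d: float, precision_d: float) -> int:
    """估算覆盖给定调节范围所需的“被动式PB透镜+TNLC”二元级数。
    每一级提供2个状态,k级可组合出2**k个离散虚像位置。"""
    positions = math.ceil(adjust_range_d / precision_d)  # 所需虚像位置数
    return math.ceil(math.log2(positions))               # 所需级数(即PB透镜片数)

# 示例:调节范围4D、精度0.25D → 16个位置 → 4片PB透镜(配4个TNLC)
```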
结构九、光学成像组件为液体透镜。
如图14a所示,为本申请提供的一种液体透镜的结构示意图。该液体透镜可以通过改变施加的电压信号或电流信号来改变薄膜材料的形状,同时液体注入或者流出液体透镜,从而改变液体透镜的焦距。
如图14b所示,为本申请提供的另一种液体透镜的结构示意图。该液体透镜可以利用电浸润的原理,通过改变施加电压信号或电流信号来改变互不相融的两种液体之间的交界面的面型,从而改变液体透镜的焦距。
结构十、光学成像组件为可变形反射镜。
如图15所示,为本申请提供的一种可变形反射镜的结构示意图。该可变形反射镜可以是分立或者连续的微反射面,利用静电力或者电磁力驱动微反射面发生形变或者位移,通过调控分立电极的电压信号或电流信号,实现不同的反射面型,从而可实现变焦。需要说明的是,反射面可以是凹面反射镜,凹面反射镜的曲率可以通过电压信号或电流信号调节,不同曲率的凹面反射镜的焦距不同。
除了上述常见光学结构外,用户还可以采用其他更加偏计算型的光学结构,如计算显示数字变焦和全息显示等方法实现虚像位置的调节,本申请对此不做限定。
需要说明的是,对于散光的用户,矫正散光需要使用柱透镜和旋转驱动组件,旋转驱动组件用于改变柱透镜的光轴。柱透镜可以位于上述光学成像组件与显示组件之间,或者位于光学成像组件远离显示组件的一侧,即位于光学组件与人眼之间。
通过上述各种结构的光学成像组件,可以实现将图像在目标位置形成虚像。形成虚像的光路可参见上述图2b的光路。
三、虚像位置调节组件
在一种可能的实现方式中,虚像位置调节组件可用于调节光学成像组件和/或显示组件,将所述虚像调节至目标位置。如下,分两种情形分别介绍。
情形1,虚像位置调节组件通过机械式调节方式调节光学成像组件和/或显示组件。
在一种可能的实现方式中,虚像位置调节组件可以通过驱动光学成像组件和/或显示组件移动,将虚像调节至目标位置。具体地,虚像位置调节组件可用于移动显示组件、且光学成像组件不动,可参见图16a;或者,虚像位置调节组件可用于移动光学成像组件,且显示组件不动,可参见图16b;或者,虚像位置调节组件可用于移动显示组件、且移动光学成像组件,可参见图16c。应理解,图16a、图16b和图16c中的光学成像组件均是以透镜为例示出的。
基于该情形1,机械调节方式调节光学成像组件和/或显示组件又可分为自动调节模式和手动调节模式。
情形1.1,自动调节模式。
基于该情形1.1,在一种可能的实现方式中,虚像位置调节组件可包括驱动组件;所述驱动组件用于驱动光学成像组件和/或显示组件移动,将所述虚像调节至所述目标位置。
示例性地,驱动组件可根据接收到的显示组件和/或所述光学成像组件的待移动的距离,驱动所述显示组件和/或所述光学成像组件移动,将所述虚像调节至所述目标位置。
在另一种可能的实现方式中,虚像位置调节组件可包括驱动组件和位置传感组件。在一种可能的实现方式中,位置传感组件用于确定所述光学成像组件和/或显示组件的位置。进一步,位置传感组件可将确定出的光学成像组件和/或显示组件的位置发送至控制组件。相应地,控制组件可根据光学成像组件和/或显示组件的位置,确定所述显示组件与所述光学成像组件之间的第一距离,并根据所述第一距离确定所述光学成像组件和/或显示组件移动的待移动距离,并向驱动组件发送待移动的距离。示例性地,该待移动的距离可以携带在控制组件向驱动组件发送的控制指令中。或者,位置传感组件用于确定所述光学成像组件和/或显示组件之间的第一距离,并向控制组件发送第一距离。控制组件可根据所述第一距离和虚像的目标位置,确定所述光学成像组件和/或显示组件移动的待移动距离,并向驱动组件发送待移动的距离。示例性地,该待移动的距离可以携带在控制组件向驱动组件发送的控制指令中。
在一种可能的实现方式中,所述驱动组件用于根据所述待移动的距离,驱动所述光学成像组件和/或显示组件移动,将所述虚像调节至所述目标位置。
进一步,可选地,驱动组件可以是马达和传动元件。马达可用于驱动传动元件转动;传动元件可用于在马达的作用下,带动显示组件和/或光学成像组件的移动。
在一种可能的实现方式中,马达从功能上可分为开环马达(open loop motor)和闭环马达(closed loop motor)。开环和闭环是自动控制中的两个概念。开环是指输入马达的是电流信号,马达输出的是位移,没有反馈控制,所以称为开环。闭环马达可以通过位置的反馈,利用闭环系统来精确调节光学成像组件和/或显示组件。通常,闭环马达一般会在光学成像组件的载体的位置加装一个位置传感器,以霍尔传感器为例,通过霍尔芯片感应四周磁铁的磁通量,进而推算出光学成像组件的实际位置。导入霍尔芯片后,对马达的控制,就可以由原先的“输入电流信号、输出位移”,变为“输入目标位移、输出位移”。马达可以根据霍尔芯片的反馈,持续调节马达的位置。
示例性地,马达例如可为步进马达、直流马达、静音马达、伺服马达(或称为伺服电机)或音圈马达等。其中,伺服马达为闭环马达。步进马达、直流马达、静音马达、和音圈马达通常为开环马达。步进马达和静音马达可以提高驱动精度。
静音马达例如是超声马达(ultra-sonic motor,USM),超声马达是通过超声波信号驱动压电材料,使其发生形变,然后压电材料的形变通过摩擦和机械运动传递到转子或者旋转环,从而产生转动运动。超声马达有两种,一种是环形USM,可以套在镜筒外不需要减速传动齿轮直接驱动,但是会限制镜筒口径。另一种是微型USM,和普通步进马达一样,需要加传动元件来驱动固定光学成像组件的结构(例如镜筒或卡环),但是体积更小,不会限制镜筒口径。采用USM可以降低噪声,而且具有速度快、扭矩大、工作温度范围宽等优点。
音圈马达也称为音圈电机(voice coil motor,VCM),主要工作原理是在一个永久磁场内,通过改变音圈马达内线圈的直流电流信号大小,将电流信号转化为机械力,来控制音圈马达中的弹簧片的拉伸位置,从而带动与其固定的物体的运动。音圈马达本身不知道什么时候开始运动,也不知道运动到哪里结束,需要驱动来处理和控制。通常与音圈马达匹配的有一个驱动芯片(Driver IC)。驱动芯片接收来自控制组件发来的控制指令(例如下文中的第一控制指令或第二控制指令或第三控制指令),从而输出电流信号给音圈马达,从而带动音圈马达运动。利用位置传感器的音圈马达是知道线圈所处的位置的。
示例性地,传动元件例如可以是丝杠、螺杆、齿轮或凸轮筒等。丝杠例如滚珠丝杠,可将旋转运动转换为直线运动,或者将直线运动转换为旋转运动。其中,丝杠具有高精度、可逆性和高效性。
在一种可能的实现方式中,位置传感组件可以是三角测距激光雷达(可参见上述三角测距激光雷达的介绍),或者位置编码器。位置编码器例如可以是光栅尺、磁编码器。位置编码器可以把角位移转换成电信号,例如角度编码器;或者也可以把直线位移转换为电信号。
基于上述内容,下面结合具体的硬件结构,给出了上述显示模组的具体实现方式,以便于进一步理解上述虚像位置调节组件调节光学成像组件和/或所述显示组件的过程。
为了便于方案的说明,下面以虚像位置调节组件用于移动光学成像组件为例,光学成像组件以上述结构二为例,进一步,以移动上述结构二中的第一半透半反镜为例进行说明。
当位置传感组件为三角测距激光雷达时,如图17a所示,为本申请提供的一种显示模组的结构示意图。其中,三角测距激光雷达可与显示组件固定,例如,三角测距激光雷达可固定于显示组件所在的基板;第一半透半反镜可通过卡环固定。三角测距激光雷达可用于向卡环的第一面发射激光束,卡环的第一面可用于反射该激光束,三角测距激光雷达可根据接收到的反射光束和发射的激光束,确定第一半透半反镜与显示组件之间的第一距离。具体测量原理可参见前述图1b的介绍,此处不再重复赘述。
进一步,可选地,三角测距激光雷达可向控制组件发送位置信息,该位置信息包括三角测距激光雷达测量的显示组件与第一半透半反镜之间的第一距离。相应地,控制组件可用于接收来自三角测距激光雷达的位置信息,该位置信息用于指示显示组件与第一半透半反镜之间的第一距离。控制组件可根据该位置信息和虚像的目标位置确定第一半透半反镜的待移动距离,并根据该待移动距离生成第一控制指令,并向驱动组件发送第一控制指令,第一控制指令用于指示驱动组件驱动卡环进行移动,从而带动第一半透半反镜沿主光轴的方向移动。进一步,可选地,控制组件可用于根据第一距离与虚像的位置的对应关系,确定第一半透半反镜的待移动距离。
示例性地,控制组件可用于根据位置信息携带的显示组件与光学成像组件之间的距离A(即第一距离)、以及第一距离与虚像的位置的对应关系(如表3),确定虚像处于目标位置时,显示组件与光学成像组件之间的距离B,并将距离B与距离A的差值的绝对值确定为第一半透半反镜的待移动距离S,并根据该待移动距离S生成第一控制指令。需要说明的是,虚像的位置与第一距离之间的对应关系可以是预先存储于控制组件;或者也可以是预先存储于存储器中,控制组件接收到第一距离后可从存储器中读取。
表3 虚像的位置与第一距离的对应关系
虚像的目标位置 | 第一距离 |
1.3m | 5mm |
在一种可能的实现方式中,第一控制指令中可包括第一半透半反镜的待移动距离S,驱动组件可用于根据接收到的第一控制指令,驱动卡环移动距离S,卡环可带动第一半透半反镜移动距离S,以实现将虚像调节至目标位置。
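上述“查表得到距离B、与当前距离A作差得到待移动距离S”的逻辑可用如下Python片段示意(仅为说明性示例,字典内容对应表3的形式,数值与命名均为假设):

```python
# “虚像的目标位置 → 第一距离”的对应关系示意(形式同表3,数值仅为示意)
TARGET_TO_FIRST_DISTANCE_MM = {1.3: 5.0}  # 目标位置1.3m 对应第一距离5mm

def move_distance_mm(distance_a_mm: float, target_position_m: float) -> float:
    """待移动距离 S = |距离B - 距离A|:
    距离A为当前测得的第一距离,距离B由对应关系查得。"""
    distance_b_mm = TARGET_TO_FIRST_DISTANCE_MM[target_position_m]
    return abs(distance_b_mm - distance_a_mm)
```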
为了提高调节虚像的位置的精度,在光学成像组件移动距离S后,位置传感组件可再次测量光学成像组件与显示组件之间的实际距离Y。也就是说,位置传感组件可实时测量光学成像组件与显示组件的位置,从而确定虚像是否形成在目标位置。进一步,位置传感组件可用于将该实际距离Y发送至控制组件。控制组件可用于根据理论距离X和实际距离Y,确定光学成像透镜组件是否还需要继续调节。应理解,第一半透半反镜移动距离S后,光学成像组件与显示组件之间的理论距离为X,但是由于驱动组件的驱动误差(可参见下述相关描述),光学成像组件与显示组件之间的实际距离Y可能与X不同。
进一步,可选地,如果|Y-X|不大于预设的误差阈值,位置传感组件可用于反馈给控制组件第一指示信号,第一指示信号用于指示不需要继续调整;如果|Y-X|大于该误差阈值,位置传感组件可用于反馈给控制组件第三控制指令,该第三控制指令可包括需要再次移动的距离|Y-X|。相应地,控制组件可用于接收该第三控制指令,并向驱动组件发送第三控制指令。相应地,驱动组件可用于根据接收到的第三控制指令,驱动第一半透半反镜再移动|Y-X|,如此循环,直至|Y-X|不大于该误差阈值为止。
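上述闭环校正过程(测量实际距离Y、与理论距离X比较、不满足要求则再次驱动,如此循环)可用如下Python片段示意(仅为说明性示例,阈值eps、10%的驱动误差等均为假设):

```python
def closed_loop_adjust(measure, drive, target_x, eps=0.01, max_iter=20):
    """闭环校正示意:反复测量实际距离Y,若|Y-X|大于阈值eps,
    则按差值补偿驱动,直至|Y-X|不大于eps为止。
    measure()返回当前实际距离,drive(delta)执行一次驱动。"""
    for _ in range(max_iter):
        y = measure()
        if abs(y - target_x) <= eps:
            break
        drive(target_x - y)  # 按差值方向补偿驱动
    return measure()

# 用法示意:用一个有10%驱动误差的模拟执行器验证收敛
pos = [0.0]
final = closed_loop_adjust(lambda: pos[0],
                           lambda d: pos.__setitem__(0, pos[0] + 0.9 * d),
                           target_x=5.0)
```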
当位置传感组件为位置编码器时,如图17b所示,为本申请提供的又一种显示模组的结构示意图。其中,位置编码器可固定于显示组件所在的基板,光学成像组件可通过镜筒固定,镜筒与滑动组件固定,滑动组件移动时可带动第一半透半反镜移动。位置编码器可通过测量滑动组件的位置,确定出第一半透半反镜的位置。其中,滑动组件可以是滑块。
进一步,可选地,位置编码器可向控制组件发送位置信息,该位置信息包括位置编码器测量的第一半透半反镜的位置。相应地,控制组件可用于接收来自位置编码器的位置信息,该位置信息用于指示第一半透半反镜的位置,并根据该位置信息确定显示组件与第一半透半反镜之间的第一距离,根据第一距离和虚像的目标位置确定第一半透半反镜的待移动距离,并根据该待移动距离生成第一控制指令,并向驱动组件发送第一控制指令,第一控制指令用于指示驱动组件驱动传动元件转动,以带动滑动组件移动,从而带动第一半透半反镜移动。进一步,可选地,控制组件可用于根据第一距离与虚像的位置的对应关系,确定第一半透半反镜的待移动距离。应理解,第一半透半反镜是沿第一半透半反镜的主光轴的方向移动。
在一种可能的实现方式中,位置传感组件和驱动组件可以集成在一起,可参阅图17c。
需要说明的是,当光学成像组件移动距离Δd,则该光学成像组件对显示组件显示的图像形成的虚像可移动距离Δz,可参见图17d。需要说明的是,Δz是关于Δd的函数,即Δz=f(Δd)。应理解,图17d中以半透半反镜表示光学成像组件。
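Δz与Δd的关系可在薄透镜近似下示意如下(仅为说明性示例,假设显示屏位于透镜焦距以内从而形成放大虚像,函数名为假设命名):

```python
def virtual_image_distance(d: float, f: float) -> float:
    """薄透镜近似:显示屏位于焦距以内(0 < d < f)时,
    放大虚像位于透镜同侧、距透镜 v = d*f/(f - d) 处。"""
    if not 0 < d < f:
        raise ValueError("显示屏需位于透镜焦距以内")
    return d * f / (f - d)

# Δd → Δz 示意:例如 f=50mm 时,d 从 40mm 移到 45mm,
# 虚像距离从约 0.2m 变为约 0.45m,即 Δz 是 Δd 的非线性函数
```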
在一种可能的实现方式中,当光学成像组件为上述结构六时,结合上述图10,位置传感组件可用于在垂直于折射透镜的主光轴的方向上(即图10所示的水平方向),确定两个折射光学元件之间的第一距离(如折射透镜1和折射透镜2的中心之间的距离)。进一步,可选地,位置传感组件可向控制组件发送位置信息,该位置信息包括位置传感组件测量的两个折射透镜之间的第一距离。相应地,控制组件可用于接收来自位置传感组件的位置信息,该位置信息用于指示两个折射透镜之间的第一距离。控制组件可根据该位置信息和虚像的目标位置确定这两个折射透镜的待移动距离,并根据该待移动距离生成第一控制指令,并向驱动组件发送第一控制指令,第一控制指令用于指示驱动组件驱动两个折射透镜中的至少一个沿垂直于折射透镜的光轴的方向移动。相应地,驱动组件可用于根据接收到的第一控制指令驱动两个折射透镜中的至少一个沿垂直于折射透镜的光轴的方向移动。
当光学成像组件为上述结构七时,位置传感组件用于分别确定衍射光学元件1和衍射光学元件2的相对角度。进一步,可选地,位置传感组件可向控制组件发送位置信息,该位置信息包括衍射光学元件1和衍射光学元件2的相对角度。相应地,控制组件可用于接收来自位置传感组件的位置信息,该位置信息用于指示衍射光学元件1和衍射光学元件2的相对角度。控制组件可根据位置信息和虚像的目标位置,确定两个衍射光学元件待转动角度,并根据待转动角度生成第一控制指令,并向驱动组件发送第一控制指令,第一控制指令用于指示驱动组件驱动衍射光学元件1和衍射光学元件2沿相反方向转动,或者用于指示驱动组件驱动衍射光学元件1和衍射光学元件2中的一个转动。相应地,驱动组件可用于根据接收到的第一控制指令,驱动衍射光学元件1和衍射光学元件2沿相反方向转动,或驱动衍射光学元件1和衍射光学元件2中的一个转动。进一步,可选地,控制组件可用于根据相对角度与虚像的位置的对应关系,确定待转动角度。
需要说明的是,虚像处于目标位置时,光学成像组件和/或显示组件的待移动距离或待转动角度可以是预先通过仿真得到的,并存储于显示模组的存储器或者显示模组可以调用的外部存储器。
在一种可能的实现方式中,虚像位置调节组件在调节光学成像组件和/或所述显示组件时,有一定的调节精度和调节范围。下面对虚像位置调节组件的调节精度和调节范围进行详细介绍。
在一种可能的实现方式中,所述虚像位置调节组件的调节范围是根据所述驱动组件的驱动量程和所述位置传感组件的测量量程确定的。进一步,可选地,驱动组件的驱动量程和位置传感组件的测量量程均与光学成像组件的光学参数相关。
在一种可能的实现方式中,所述虚像位置调节组件的调节精度是根据所述驱动组件的驱动误差和所述位置传感组件的位置测量误差确定的。进一步,可选地,驱动组件的驱动误差和位置传感组件的位置测量误差均与光学成像组件的光学参数相关。
结合上述图6d,为了保证虚像位置调节组件的调节精度不大于0.2D,驱动组件的驱动误差应满足:
进一步,可选地,为了保证虚像位置调节组件的调节精度不小于0.1D,驱动组件的驱动误差应满足:
为了保证虚像的位置的调节精度不大于0.2D,位置传感组件的位置测量误差应满足:
进一步,可选地,为了保证虚像的位置的调节精度不小于0.1D,位置传感组件的位置测量误差应满足:
情形1.2,手动调节模式。
在一种可能的实现方式中,凸轮调焦机构可包括第一旋钮,第一旋钮用于选择第一对象所属的预设场景类型。图18a中预设场景类型以办公场景类型、会议场景类型、交互式游戏场景类型和视频场景类型四个为例,用户可通过旋转第一旋钮,将指针旋转到某一个位置,指针所指示的对象即为选择的预设场景类型。
在一种可能的实现方式中,凸轮调焦机构还可包括导向柱(或导向筒),请参阅图18b。第一旋钮可通过机械结构与导向柱(或导向筒)的一端连接,导向柱(或导向筒)的另一端与光学成像组件连接。在通过第一旋钮旋转选择第一对象所属的预设场景类型时,可带动导向柱(或导向筒)带动光学成像组件移动,以将虚像形成在目标位置。
进一步,可选地,凸轮调焦机构可还包括第二旋钮,第二旋钮用于调节视力参数,请参阅图18c,该第二旋钮上还可以标记相应的刻度表,该刻度表标识视力参数。例如,刻度表1~7表示度数100度~700度。用户通过旋转第二旋钮,可以将箭头指向某个位置,箭头所指的位置即为选择的视力参数。第二旋钮也可以称为视力度数调节旋钮。
通过凸轮调焦机构选择第一对象所属的预设场景类型、设置视力参数、驱动光学成像组件移动,采用的是手动调节机构,不需要驱动组件(如电机)驱动,因而有助于降低显示模组的成本。
情形2,非机械式的调焦方式。
基于该情形2,光学成像组件包括变焦透镜,例如可以是上述结构八至结构十所示例的变焦透镜。
在一种可能的实现方式中,所述虚像位置调节组件包括驱动组件,所述驱动组件用于改变施加于所述变焦透镜的电压信号或电流信号,改变所述变焦透镜的焦距,将所述虚像调节至所述目标位置。
在另一种可能的实现方式中,虚像位置调节组件可包括驱动组件和位置传感组件。所述位置传感组件可用于确定所述变焦透镜的第一焦距,所述第一焦距用于确定所述变焦透镜的待调节焦距。所述驱动组件可用于根据所述待调节焦距,改变施加于所述变焦透镜的电压信号或电流信号,将所述虚像调节至所述目标位置。应理解,变焦透镜的第一焦距即为变焦透镜的当前焦距。
结合上述结构八中的图12a、图12b、图12c中的主动式液晶PB透镜、以及结构九所示例的光学成像组件,虚像位置调节组件可通过改变施加于变焦透镜的电压信号或电流信号,改变变焦透镜的焦距,从而可实现将虚像调节至目标位置。应理解,变焦透镜的焦距和电压信号或电流信号之间的关系可以是控制组件确定的。
在另一种可能的实现方式中,虚像位置调节组件可以是电控半波片或者TNLC。结合上述结构八中的图12c中的被动式液晶PB透镜的光学成像组件,电控半波片或者TNLC可通过改变入射光的偏振态,改变变焦透镜的焦距,从而可实现将虚像调节至目标位置。应理解,变焦透镜的焦距和入射光的偏振态之间的关系可以是控制组件确定的。
在又一种可能的实现方式中,虚像位置调节组件可以包括驱动组件和位置传感组件。所述驱动组件为一组可产生特定电压信号或电流信号的电路板,所述位置传感组件为另一组电路板,可用于测量施加在光学成像组件上的电压信号或电流信号。结合上述结构十所示例的光学成像组件,驱动组件可通过改变施加于变焦透镜的静电力或电磁力,改变变焦透镜的焦距,从而可实现将虚像调节至目标位置。应理解,变焦透镜的焦距和静电力(或电磁力)之间的关系可以是控制组件确定的。
通过上述虚像位置调节组件对虚像的位置的调节,可以使得用户清晰地观看到显示组件显示的图像,而且可有助于减小辐辏调节冲突。
本申请中,显示模组还可以包括控制组件。
四、控制组件
在一种可能的实现方式中,控制组件例如可以是处理器、微处理器、控制器等控制组件,例如可以是通用中央处理器(central processing unit,CPU),通用处理器,数字信号处理器(digital signal processor,DSP),专用集成电路(application specific integrated circuit,ASIC),现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。
在一种可能的实现方式中,控制组件执行的功能可参见前述的相关描述,此处不再重复赘述。
在一种可能的实现方式中,可先按正常视力,根据显示组件显示的图像或者在第一界面选择的第一对象,确定虚像的位置;进一步,再根据获取的视力参数,确定虚像的目标位置。也可以理解为,先根据显示组件显示的图像或者在第一界面选择的第一对象对虚像的位置进行调节,之后再根据视力参数精细调节虚像至目标位置。
需要说明的是,为了适应各种视力情况的用户,如下示例性地示出了三种调节视力的实现方式。
实现方式a,该显示模组不带视力调节功能,可提供较大的眼距(eye relief),用户可以佩戴眼镜使用该显示模组。
实现方式b,该显示模组不带视力调节功能,提供了合适的空间方便用户放置定制的镜片,例如不同度数的近视镜片等。
实现方式c,该显示模组利用被动式液晶PB透镜可以实现近视调节。例如,近视调节需要~7D的光焦度,变焦透镜总共需要提供11D的变焦能力(即测量量程),虚像面的调节精度为0.25D,则需要提供44个虚像的位置,对应需要6片被动式液晶PB透镜。
在一种可能的实现方式中,控制组件可以集成在该显示模组上,即控制组件与该显示模组成为一体式设备;也可以是分体式的,即利用显示模组所在的终端设备的控制组件。
需要说明的是,显示模组可包括控制组件和存储器,可称为一体机。或者,显示模组也可以不包括控制组件和存储器,可称为分体机。或者,显示模组不包括控制组件和存储器、但包括微型处理单元,这类也可称为分体机。
如图19所示,为本申请提供的又一种显示模组的结构示意图。该显示模组包括显示组件1901、光学成像组件1902、虚像位置调节组件1903以及控制组件1904。显示组件 1901、光学成像组件1902、虚像位置调节组件1903以及控制组件1904可分别参见前述相关描述,此处不再重复赘述。
基于上述描述的显示模组的结构和功能原理,本申请还可以提供一种头戴式显示设备,该头戴式显示设备可以包括控制组件和上述任一实施例中的显示模组。可以理解的是,该头戴式显示设备还可以包括其他器件,例如无线通信装置、传感器和存储器等。
基于上述内容和相同的构思,本申请提供一种虚像的位置调节方法,请参阅图20和图21的介绍。该虚像的位置调节方法可应用于上述图3至图19任一实施例所示的显示模组。也可以理解为,可以基于上述图3至图19任一实施例所示的显示模组来实现虚像的位置的调节方法。如下,基于虚像的目标位置是基于显示的图像所属的预设场景类型确定或基于用户选择的对象所属的预设场景类型确定的分别介绍。
情形A,基于图像所属的预设场景类型自适应调节虚像的位置。
如图20所示,为本申请提供的一种虚像的位置调节方法流程示意图。该方法包括以下步骤:
步骤2001,获取显示组件显示的图像。
此处,显示组件显示的图像可参见前述显示组件的相关描述,此处不再重复赘述。
步骤2002,获取图像对应的虚像的目标位置。
此处,获取虚像的目标位置的可能的实现方式可参见前述实现方式一、实现方式二和实现方式三。
其中,虚像的目标位置与图像所属的预设场景类型有关,具体可参见前述相关描述,此处不再重复赘述。
步骤2003,控制虚像位置调节组件调节光学成像组件和/或显示组件,将图像在目标位置形成虚像。
该步骤2003可参见前述调节光学成像组件和/或显示组件的相关描述,此处不再重复赘述。
需要说明的是,上述步骤2001至步骤2003可以是显示模组中的控制组件执行。换言之,图20所示的虚像的位置调节方法所应用的显示模组中包括控制组件。
情形B,基于用户选择的对象所属的预设场景类型调节虚像的位置。
如图21所示,为本申请提供的另一种虚像的位置调节方法流程示意图。该方法包括以下步骤:
步骤2101,显示第一界面。
结合上述图3至图19任一实施例中的显示模组,该步骤2101可以是显示模组中的显示组件执行的,具体可参见前述关于显示组件显示第一界面的相关描述,此处不再重复赘述。
步骤2102,当用户在第一界面中选择第一对象时,获取第一对象对应的虚像的目标位置。
其中,虚像的目标位置与第一对象所属的预设场景类型有关,可参见前述相关描述,此处不再重复赘述。用户在第一界面中选择第一对象、以及获取第一对象对应的虚像的目标位置的方式可参见前述相关描述,此处不再重复赘述。
此处,可基于头戴式显示设备是否包括控制组件,示例性地示出两种获取第一对象对应的目标位置的方式。
方式a,头戴式显示设备包括控制组件。
基于该方式a,获取第一对象对应的目标位置可包括如下步骤:
步骤A,控制组件获取第一对象所属的第二预设场景类型。
示例性地,控制组件可接收终端设备发送的第一对象所属的第二预设场景类型;或者,也可以是控制组件确定第一对象所属的第二预设场景类型。
步骤B,控制组件获取预设场景类型与虚像的位置的对应关系。
该步骤B可参见前述图22中的步骤b的相关介绍,此处不再重复赘述。
步骤C,控制组件根据预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的目标位置。
此处,可以是从预设场景类型与虚像的位置的对应关系中,查找到第二预设场景类型对应的位置即为目标位置。
方式b,头戴式显示设备不包括控制组件。
基于该方式b,头戴式显示设备可接收终端设备发送的第一对象对应的虚像的目标位置。其中,终端设备确定图像对应的虚像的目标位置可参见下述图24的相关介绍,此处不再重复赘述。
步骤2103,针对选择第一对象后所触发显示组件显示的图像,控制虚像位置调节组件调节光学成像组件和/或显示组件,将图像在目标位置形成虚像。
该步骤2103可参见前述调节光学成像组件和/或显示组件的相关描述,此处不再重复赘述。需要说明的是,该步骤2103可以是显示模组的控制组件执行的,或者,也可以是终端设备执行的。
基于上述内容和相同的构思,本申请提供又一种虚像的位置调节方法,请参阅图22和图23的介绍。该虚像的位置调节方法可应用于头戴式显示设备。如下基于上述情形A和情形B分别进行介绍。
基于上述情形A,本申请提供一种虚像的位置调节方法,请参阅图22的介绍。该虚像的位置调节方法可应用于头戴式显示设备。
如图22所示,为本申请提供的一种虚像的位置调节方法流程示意图。该方法包括以下步骤:
步骤2201,获取头戴式显示设备显示的图像。
此处,可以是接收终端设备发送的图像,或者也可以是头戴式显示设备中的投影系统发射的图像。
步骤2202,获取图像对应的虚像的目标位置。
其中,虚像的目标位置与图像所属的预设场景类型有关。当图像属于不同的预设场景类型时,头戴式显示设备呈现虚像的目标位置不同。示例性地,当图像所属的预设场景类型为会议场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离为0.583D;或者,当图像所属的预设场景类型为交互式游戏场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离为1D;或者,当图像所属的预设场景类型为视频场景类型时,头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离为0.5D。
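上述“预设场景类型→虚像目标位置”的对应关系可用如下Python片段示意(仅为说明性示例,数值取自上文示例,字典与函数名为假设命名):

```python
# 预设场景类型与虚像目标位置(以屈光度D表示与光学成像组件的距离)
# 的对应关系示意,数值取自上文示例
SCENE_TO_TARGET_D = {
    "会议场景类型": 0.583,
    "交互式游戏场景类型": 1.0,
    "视频场景类型": 0.5,
}

def get_target_position_d(scene_type: str) -> float:
    """根据图像所属的预设场景类型查询虚像的目标位置(单位:D)。"""
    return SCENE_TO_TARGET_D[scene_type]
```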
进一步,可选地,图像所属的预设场景类型可以是图像的内容所属的预设场景类型。或者,也可以是图像对应的对象所属的预设场景类型,图像对应的对象可以理解为:该图像为进入该对象(例如应用)后所显示的图像。
下面,基于头戴式显示设备是否包括控制组件,示例性地示出两种获取图像对应的目标位置的方式。
方式A,基于头戴式显示设备包括控制组件。
在该方式A中,获取图像对应的目标位置可包括如下步骤:
步骤a,控制组件获取头戴式显示设备显示的图像所属的第一预设场景类型。
此处,可以是接收终端设备发送的图像所属的第一预设场景类型;或者也可以是头戴式显示设备确定图像所属的第一预设场景类型(具体确定的过程可参见前述相关描述,此处不再重复赘述)。
步骤b,控制组件获取预设场景类型与虚像的位置的对应关系。
进一步,可选地,头戴式显示设备中还可以包括存储器,预设场景类型与虚像的位置的对应关系可以存储于头戴式显示设备的存储器中。换言之,头戴式显示设备可包括控制组件和存储器,即为一体机。该步骤b获取目标位置的更详细的过程可参见前述实现方式一中的相关描述。
应理解,头戴式显示设备也可以不包括存储器。预设场景类型与虚像的位置的对应关系可以存储于头戴式设备之外的存储器中,例如终端设备的存储器中,头戴式显示设备可以通过调用终端设备的存储器来获取预设场景类型与虚像的位置的对应关系。
步骤c,控制组件根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的目标位置。
此处,可以是从预设场景类型与虚像的位置的对应关系中,查找到第一预设场景类型对应的位置即为目标位置。
方式B,基于头戴式显示设备不包括控制组件。
在该方式B,可接收终端设备发送的该图像所对应的虚像的目标位置。其中,终端设备确定图像对应的虚像的目标位置的过程可参见下述图24的相关介绍,此处不再重复赘述。
步骤2203,将图像在目标位置形成虚像。
在一种可能的实现方式中,该步骤2203可以由头戴式显示设备中控制组件控制虚像位置调节组件实现;或者,也可以由终端设备控制虚像位置调节组件实现。
如下,示例性地示出了四种将图像在目标位置形成虚像的可能的实现方式。
实现方式1,头戴式显示设备确定显示组件和/或光学成像组件的待移动距离。
基于该实现方式1,头戴式显示设备包括显示组件和光学成像组件。具体地,可获取显示组件与光学成像组件之间的第一距离,并根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离,再根据待移动距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。更详细的描述可参见前述相关描述,此处不再重复赘述。
实现方式2,头戴式显示设备接收终端设备发送的显示组件和/或光学成像组件的待移动距离。
基于该实现方式2,头戴式显示设备包括显示组件和光学成像组件。具体地,可接收终端设备发送的显示组件和/或光学成像组件的待移动的距离,根据待移动的距离,驱动显示组件和/或光学成像组件移动,将虚像调节至目标位置。终端设备确定的显示组件和/或 光学成像组件的待移动的距离可参见下述图24的相关介绍。更详细的描述可参见前述相关描述,此处不再重复赘述。
实现方式3,头戴式显示设备确定变焦透镜的待调节焦距。
基于该实现方式3,头戴式显示设备可包括显示组件和光学成像组件,光学成像组件包括变焦透镜。具体地,可先确定变焦透镜的第一焦距;根据第一焦距和目标位置,确定变焦透镜的待调节焦距;并根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
实现方式4,头戴式显示设备接收终端设备发送的变焦透镜的待调节焦距。
基于该实现方式4,头戴式显示设备包括显示组件和光学成像组件,光学成像组件包括变焦透镜;可接收终端设备发送的变焦透镜的待调节焦距;根据待调节焦距,改变施加于变焦透镜的电压信号或电流信号,将虚像调节至目标位置。
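实现方式3和实现方式4中“由第一焦距与目标位置确定待调节焦距”的一种简化估算可示意如下(仅为说明性示例;其中假设虚像位置的屈光度变化量近似等于变焦透镜光焦度的变化量,实际关系由控制组件根据具体光路确定,函数名为假设命名):

```python
def adjusted_focal_length_m(first_focal_m: float,
                            current_image_d: float,
                            target_image_d: float) -> float:
    """由第一焦距与虚像位置(屈光度)的目标改变量,估算调节后的焦距。
    假设:虚像位置的屈光度变化量近似等于透镜光焦度的变化量。"""
    current_power_d = 1.0 / first_focal_m             # 当前光焦度 P = 1/f
    delta_power_d = target_image_d - current_image_d  # 所需光焦度变化量
    return 1.0 / (current_power_d + delta_power_d)

# 示例:第一焦距50mm(20D),虚像从1D调至0.5D,则焦距约调至 1/19.5 m
```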
基于上述情形B,本申请提供另一种虚像的位置调节方法,请参阅图23的介绍。该虚像的位置调节方法可应用于头戴式显示设备。
如图23所示,为本申请提供的另一种虚像的位置调节方法流程示意图。该方法包括以下步骤:
步骤2301,显示第一界面。
该步骤2301可参见前述步骤2101的介绍,此处不再重复赘述。
步骤2302,当用户在第一界面中选择第一对象时,获取第一对象对应的虚像的目标位置。
此处,虚像的目标位置与第一对象所属的预设场景类型有关。该步骤2302可参见前述步骤2102的相关介绍,此处不再重复赘述。
步骤2303,针对选择第一对象后所触发显示的图像,将图像在目标位置形成虚像。
该步骤2303可参见前述步骤2203的介绍,此处不再重复赘述。
需要说明的是,该步骤2303可以是显示模组的控制组件执行的,或者,也可以是终端设备执行的。
基于上述图22,在头戴式显示设备不包括控制组件时,终端设备可控制头戴式显示设备进行虚像的位置调节。如图24所示,为本申请提供的又一种虚像的位置调节方法,该方法可应用于终端设备。该方法可包括如下步骤:
步骤2401,确定头戴式显示设备显示的图像所属的第一预设场景类型。
其中,头戴式显示设备显示的图像可以是终端设备传播至头戴式显示设备的。也可以理解为,终端设备可向头戴式显示设备传播携带该图像信息的光束,以使得头戴式显示设备显示图像。具体确定的可能实现方式可参见前述相关描述,此处不再重复赘述。
步骤2402,获取预设场景类型与虚像的位置的对应关系。
在一种可能的实现方式中,若预设场景类型与虚像的位置的对应关系存储于头戴式显示设备的存储器中,则终端设备可接收头戴式显示设备发送的该预设场景类型与虚像的位置的对应关系,即终端设备可从头戴式设备中调用该预设场景类型与虚像的位置的对应关系。若预设场景类型与虚像的位置的对应关系存储于终端设备,则可以直接从终端设备的存储器中读取该对应关系。关于预设场景类型与虚像的位置的对应关系可参见前述相关描述,此处不再重复赘述。
步骤2403,根据预设场景类型与虚像的位置的对应关系,确定第一预设场景类型对应的头戴式显示设备呈现虚像的目标位置。
其中,虚像的目标位置与图像所属的预设场景类型有关。详细的描述可参见前述相关描述,此处不再重复赘述。
步骤2404,根据目标位置,控制头戴式显示设备将图像在目标位置形成虚像。
如下,示例性地示出了两种控制头戴式显示设备将图像在目标位置形成虚像的方法。
方法1.1,向头戴式显示设备发送第一控制指令。
在一种可能的实现方式中,获取头戴式显示设备中的显示组件与光学成像组件之间的第一距离;根据第一距离和目标位置,确定显示组件和/或光学成像组件的待移动距离;根据待移动距离生成第一控制指令,并向头戴式显示设备发送第一控制指令,第一控制指令用于控制显示组件和/或光学成像组件移动,将虚像调节至目标位置。
进一步,可选地,可接收头戴式显示设备中的虚像位置调节组件发送的光学成像组件和/或显示组件的位置,并根据光学成像组件和/或显示组件的位置,确定第一距离(可参见前述图17b和图17c);或者,也可以直接确定显示组件与光学成像组件之间的第一距离(可参见前述图17a)。
方法1.2,向头戴式显示设备发送第二控制指令。
在一种可能的实现方式中,获取头戴式显示设备中的光学成像组件的第一焦距,根据第一焦距和目标位置,确定光学成像组件的待调节焦距;根据待调节焦距生成第二控制指令,并向头戴式显示设备发送第二控制指令,第二控制指令用于控制施加于光学成像组件的电压信号或电流信号,调节光学成像组件的焦距,将虚像调节至目标位置。
基于上述图23,在头戴式显示设备不包括控制组件时,终端设备可控制头戴式显示设备进行虚像的位置调节。如图25所示,为本申请提供的又一种虚像的位置调节方法,该方法可应用于终端设备。该方法可包括如下步骤:
步骤2501,获取用户在头戴式显示设备显示的第一界面中选择的第一对象。
在一种可能的实现方式中,可以是头戴式显示设备检测到用户在第一界面选择第一对象后,向终端设备发送选择的第一对象的标识。第一对象的标识可以是终端设备和头戴式显示设备预先约定的;或者也可以是头戴式显示设备指示给终端设备的;或者也可以是终端设备中预先存储有对象标识与对象的对应关系。
步骤2502,获取第一对象所属的第二预设场景类型。
在一种可能的实现方式中,可预先存储对象与预设场景类型的关系,进而可从该对象与预设场景的对应关系中,确定出第一对象所属的第二预设场景类型。
步骤2503,获取预设场景类型与虚像的位置的对应关系。
在一种可能的实现方式中,若预设场景类型与虚像的位置的对应关系存储于头戴式显示设备的存储器中,则终端设备可接收头戴式显示设备发送的该对应关系,并从该对应关系中确定出第一对象所属的第二预设场景类型。若预设场景类型与虚像的位置的对应关系存储于终端设备,则可以直接从终端设备的存储器中读取该对应关系,并从该对应关系中确定出第一对象所属的第二预设场景类型。预设场景类型与虚像的位置的对应关系可参见前述相关描述,此处不再重复赘述。
步骤2504,根据预设场景类型与虚像的位置的对应关系,确定第二预设场景类型对应的头戴式显示设备呈现虚像的目标位置。
其中,虚像的目标位置与第一对象所属的预设场景类型有关。该步骤2504可参见上述步骤2302的相关描述。
步骤2505,根据目标位置,控制头戴式显示设备将图像在目标位置形成虚像。
该步骤2505可参见前述步骤2404的相关介绍,此处不再重复赘述。
应理解,当头戴式显示设备包括控制组件时,头戴式显示设备显示的图像也可以是终端设备传播至头戴式显示设备的。
基于上述内容和相同的构思,本申请提供又一种虚像的位置调节方法,请参阅图26。该虚像的位置调节方法可应用于头戴式显示设备。该方法包括以下步骤:
步骤2601,确定虚像位置调节组件的工作模式;若确定的工作模式为自动模式,则执行步骤2603至步骤2605;若确定工作模式为手动模式,则执行步骤2606至步骤2608。
步骤2602,显示第一界面。
该步骤2602可参见前述相关描述,此处不再重复赘述。
步骤2603,当用户在所述第一界面中选择第一对象时,根据获取的视力参数和第一对象所属的第二预设场景类型,确定虚像的目标位置。
步骤2604,根据所述目标位置,确定虚像位置调节组件的调焦参数。
其中,调焦参数例如为前述的光学成像组件和/或所述显示组件的待移动距离、施加于变焦透镜的电压信号或电流信号、第一衍射光学元件与第二衍射光学元件之间的待转动角度、第一折射光学元件与第二折射光学元件之间沿垂直于主光轴的方向的待移动距离,具体可参见前述相关描述,此处不再赘述。
步骤2605,根据调焦参数,将虚像调节至目标位置。
该步骤2605可参见前述相关描述,此处不再重复赘述。
步骤2606,当用户在所述第一界面中选择第一对象时,可在第一界面显示提示信息。
该提示信息可用于提示用户进行虚像的位置调节。示例性地,该提示信息可提示该第一对象所属的预设场景类型。
步骤2607,用户可根据该提示信息通过凸轮调焦机构选择第一对象所属的预设场景类型,并对虚像的位置进行调节。
此处,用户可旋转凸轮调焦机构的第一旋钮,以选择出该预设场景类型。在通过第一旋钮旋转选择第一对象所属的预设场景类型时,可带动导向柱(或导向筒)带动光学成像组件移动,以调节虚像所处的位置。
步骤2608,用户可根据视力参数通过凸轮调焦机构的第二旋钮,将虚像调节至目标位置。
上述步骤2607和步骤2608更详细的描述可分别参见前述相关内容,此处不再重复赘述。
步骤2609,对图像进行渲染,并显示。
可以理解的是,为了实现上述实施例中功能,头戴式显示设备和终端设备包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本申请中所公开的实施例描述的各示例的模块及方法步骤,本申请能够以硬件或硬件和计算机软件相结合的形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用场景和设计约束条件。
基于上述内容和相同构思,图27和图28为本申请提供的可能的虚像的位置调节装置的结构示意图。这些虚像的位置调节装置可以用于实现上述方法实施例中显示模组的功能,因此也能实现上述方法实施例所具备的有益效果。在本申请中,该虚像的位置调节装置可以包括上述图3至图18c中的显示模组,该虚像的位置调节装置可应用于头戴式显示设备。
如图27所示,该虚像的位置调节装置2700包括获取模块2701和虚像形成模块2702。虚像的位置调节装置2700用于实现上述图22所示的方法实施例中显示模组的功能时:获取模块2701用于获取所述头戴式显示设备显示的图像、以及获取所述图像对应的虚像的目标位置,所述虚像的目标位置与所述图像所属的预设场景类型有关;虚像形成模块2702用于将所述图像在所述目标位置形成虚像。
有关上述获取模块2701和虚像形成模块2702更详细的描述可以参考图22所示的方法实施例中相关描述直接得到,此处不再一一赘述。
如图28所示,该虚像的位置调节装置2800包括显示模块2801、获取模块2802和虚像形成模块2803。虚像的位置调节装置2800用于实现上述图23所示的方法实施例中显示模组的功能时:显示模块2801用于显示第一界面;获取模块2802用于当用户在所述第一界面中选择第一对象时,获取所述第一对象对应的虚像的目标位置,所述虚像的目标位置与所述第一对象所属的预设场景类型有关;虚像形成模块2803用于针对选择所述第一对象后所触发显示的图像,将所述图像在所述目标位置形成虚像。
有关上述显示模块2801、获取模块2802和虚像形成模块2803更详细的描述可以参考图23所示的方法实施例中相关描述直接得到,此处不再一一赘述。
基于上述内容和相同构思,图29和图30为本申请提供的可能的终端设备的结构示意图。这些终端设备可以用于实现上述方法实施例中终端设备的功能,因此也能实现上述方法实施例所具备的有益效果。
如图29所示,为本申请提供的一种终端设备的结构示意图。该终端设备2900包括确定模块2901、获取模块2902和控制模块2903。终端设备2900用于实现上述图24所示的方法实施例中终端设备的功能时:确定模块2901用于确定图像所属的第一预设场景类型,所述图像用于头戴式显示设备显示;获取模块2902用于获取预设场景类型与虚像的位置的对应关系;确定模块2901还用于根据所述预设场景类型与虚像的位置的对应关系,确定所述第一预设场景类型对应的所述头戴式显示设备呈现虚像的目标位置,所述虚像的目标位置与所述图像所属的预设场景类型有关;控制模块2903用于根据所述目标位置,控制所述头戴式显示设备将所述图像在所述目标位置形成虚像。
有关上述确定模块2901、获取模块2902和控制模块2903更详细的描述可以参考图24所示的方法实施例中相关描述直接得到,此处不再一一赘述。
如图30所示,为本申请提供的一种终端设备的结构示意图。该终端设备3000包括确定模块3001、获取模块3002和控制模块3003。终端设备3000用于实现上述图25所示的方法实施例中终端设备的功能时:获取模块3002用于获取用户在头戴式显示设备显示的第一界面中选择的第一对象;获取所述第一对象所属的第二预设场景类型;获取预设场景类型与虚像的位置的对应关系;确定模块3001用于根据所述预设场景类型与虚像的位置的对应关系,确定所述第二预设场景类型对应的所述头戴式显示设备呈现虚像的目标位置,所述虚像的目标位置与所述第一对象所属的预设场景类型有关;控制模块3003用于根据所述目标位置,控制所述头戴式显示设备将所述图像在所述目标位置形成虚像。
有关上述确定模块3001、获取模块3002和控制模块3003更详细的描述可以参考图25所示的方法实施例中相关描述直接得到,此处不再一一赘述。
在一种可能的实现方式中,终端设备可以是手机、或平板电脑等。
本申请的实施例中的方法步骤可以通过硬件的方式来实现,也可以由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(random access memory,RAM)、闪存、只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、CD-ROM或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于头戴式显示设备或终端设备中。当然,处理器和存储介质也可以作为分立组件存在于头戴式显示设备或终端设备中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行所述计算机程序或指令时,全部或部分地执行本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其它可编程装置。所述计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,例如,软盘、硬盘、磁带;也可以是光介质,例如,数字视频光盘(digital video disc,DVD);还可以是半导体介质,例如,固态硬盘(solid state drive,SSD)。
在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。“以下至少一项(个)”或其类似表达,是指这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,“a和b”,“a和c”,“b和c”,或“a和b和c”,其中a,b,c可以是单个,也可以是多个。在本申请的文字描述中,字符“/”,一般表示前后关联对象是一种“或”的关系。在本申请的公式中,字符“/”,表示前后关联对象是一种“相除”的关系。本申请中,符号“(a,b)”表示开区间,范围为大于a且小于b;“[a,b]”表示闭区间,范围为大于或等于a且小于或等于b;“(a,b]”表示半开半闭区间,范围为大于a且小于或等于b;“[a,b)”表示半开半闭区间,范围为大于或等于a且小于b。另外,在本申请中,“示例的”一词用于表示作例子、例证或说明。本申请中被描述为“示例”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。或者可理解为,使用示例的一词旨在以具体方式呈现概念,并不对本申请构成限定。
可以理解的是,在本申请中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定。术语“第一”、“第二”等类似表述,是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元。方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
尽管结合具体特征及其实施例对本申请进行了描述,显而易见的,在不脱离本申请的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的方案进行示例性说明,且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。
Claims (109)
- 一种显示模组,其特征在于,包括显示组件、光学成像组件和虚像位置调节组件;所述显示组件,用于显示图像;所述光学成像组件,用于将所述图像形成虚像;所述虚像位置调节组件,用于调节所述光学成像组件和/或所述显示组件,将所述虚像调节至目标位置,所述虚像的目标位置与所述图像所属的预设场景类型有关。
- 如权利要求1所述的显示模组,其特征在于,当所述图像属于不同的预设场景类型时,所述显示模组呈现虚像的目标位置不同。
- 如权利要求1或2所述的显示模组,其特征在于,所述显示模组还包括控制组件;所述控制组件,用于获取所述虚像的目标位置,并控制所述虚像位置调节组件调节所述光学成像组件和/或所述显示组件,将所述虚像调节至所述目标位置。
- 如权利要求3所述的显示模组,其特征在于,所述控制组件,用于:获取所述显示组件显示的所述图像所属的第一预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据所述预设场景类型与虚像的位置的对应关系,确定所述第一预设场景类型对应的所述目标位置。
- 如权利要求3所述的显示模组,其特征在于,所述控制组件,用于:获取视力参数;获取所述显示组件显示的所述图像所属的第一预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据所述视力参数、以及所述预设场景类型与虚像的位置的对应关系,确定所述第一预设场景类型对应的所述目标位置。
- 如权利要求1至5任一项所述的显示模组,其特征在于,所述预设场景类型包括以下至少一项:办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
- 如权利要求6所述的显示模组,其特征在于,当所述图像所属的预设场景类型为所述办公场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~10]屈光度D;或者,当所述图像所属的预设场景类型为所述阅读场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~10]屈光度D;或者,当所述图像所属的预设场景类型为所述会议场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~7.1]屈光度D;或者,当所述图像所属的预设场景类型为所述交互式游戏场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~7.5]屈光度D;或者,当所述图像所属的预设场景类型为所述视频场景类型时,所述头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
- 如权利要求6所述的显示模组,其特征在于,所述视频场景类型对应的虚像的目标位置与所述光学成像组件之间的距离大于所述会议场景类型对应的虚像的目标位置与所述光学成像组件之间的距离;或者,所述会议场景类型对应的虚像的目标位置与所述光学成像组件之间的距离大于所述阅读场景类型对应的虚像的目标位置与所述光学成像组件之间的距离。
- 如权利要求1至8任一项所述的显示模组,其特征在于,所述图像所属的预设场景类型包括以下任一项:所述图像的内容所属的预设场景类型;或者,所述图像对应的对象所属的预设场景类型。
- 如权利要求1至9任一项所述的显示模组,其特征在于,所述虚像位置调节组件包括驱动组件;所述驱动组件,用于驱动所述光学成像组件和/或所述显示组件移动,将所述虚像调节至所述目标位置。
- 如权利要求1至9任一项所述的显示模组,其特征在于,所述虚像位置调节组件包括驱动组件和位置传感组件;所述位置传感组件,用于:确定所述光学成像组件和/或所述显示组件的位置,所述光学成像组件和/或所述显示组件的位置用于确定所述显示组件与所述光学成像组件之间的第一距离,所述第一距离用于确定所述光学成像组件和/或所述显示组件的待移动距离;或者,确定所述光学成像组件和/或所述显示组件之间的第一距离;所述驱动组件,用于:根据所述待移动的距离,驱动所述光学成像组件和/或所述显示组件移动,将所述虚像调节至所述目标位置。
- 如权利要求11所述的显示模组,其特征在于,所述虚像位置调节组件的调节精度是根据所述驱动组件的驱动误差和所述位置传感组件的位置测量误差确定的。
- 如权利要求12所述的显示模组,其特征在于,所述虚像位置调节组件的调节精度不大于0.2屈光度D。
- 如权利要求11至14任一项所述的显示模组,其特征在于,所述虚像位置调节组件的调节范围是根据所述驱动组件的驱动量程和所述位置传感组件的测量量程确定的。
- 如权利要求15所述的显示模组,其特征在于,所述虚像位置调节组件的调节范围不小于5屈光度D。
- 如权利要求10至17任一项所述的显示模组,其特征在于,所述驱动组件为凸轮调焦机构、步进马达、超声马达、直流马达或音圈马达。
- 如权利要求1至9任一项所述的显示模组,其特征在于,所述虚像位置调节组件包括驱动组件,所述光学成像组件包括变焦透镜;所述驱动组件,用于:改变施加于所述变焦透镜的电压信号或电流信号,改变所述变焦透镜的焦距,将所述虚像调节至所述目标位置。
- 如权利要求1至9任一项所述的显示模组,其特征在于,所述虚像位置调节组件包括驱动组件和位置传感组件,所述光学成像组件包括变焦透镜;所述位置传感组件,用于:确定所述变焦透镜的第一焦距,所述第一焦距用于确定所述变焦透镜的待调节焦距;所述驱动组件,用于:根据所述待调节焦距,改变施加于所述变焦透镜的电压信号或电流信号,将所述虚像调节至所述目标位置。
- 如权利要求19或20所述的显示模组,其特征在于,所述变焦透镜为液晶透镜、液体透镜或几何相位透镜。
- 如权利要求1至9任一项所述的显示模组,其特征在于,所述虚像位置调节组件包括驱动组件和位置传感组件;所述光学成像组件包括第一衍射光学元件和第二衍射光学元件;所述位置传感组件,用于:确定所述第一衍射光学元件和所述第二衍射光学元件的相对角度,所述第一衍射光学元件和所述第二衍射光学元件的相对角度用于确定所述第一衍射光学元件和/或所述第二衍射光学元件待转动角度;所述驱动组件,用于:根据所述待转动角度,驱动所述第一衍射光学元件和/或所述第二衍射光学元件转动,将所述虚像调节至所述目标位置。
- 如权利要求1至9任一项所述的显示模组,其特征在于,所述虚像位置调节组件包括驱动组件和位置传感组件;所述光学成像组件包括第一折射光学元件和第二折射光学元件;所述位置传感组件,用于:在垂直于第一折射光学元件和第二折射光学元件的主光轴的方向上,确定所述第一折射光学元件和所述第二折射光学元件的之间的第一距离,所述第一距离用于确定所述第一折射光学元件和/或所述第二折射光学元件待移动距离;所述驱动组件,用于:根据所述待移动距离,驱动所述第一折射光学元件和/或所述第二折射光学元件在垂直于所述主光轴的方向移动,将所述虚像调节至所述目标位置。
- 如权利要求1至9任一项所述的显示模组,其特征在于,所述显示模组还包括眼动追踪组件;所述眼动追踪组件,用于确定双目注视所述图像的会聚深度;所述虚像位置调节组件,用于:根据所述会聚深度,驱动所述光学成像组件和/或所述显示组件移动,将所述虚像调节至所述目标位置。
- 如权利要求1至24任一项所述的显示模组,其特征在于,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。
- 如权利要求25所述的显示模组,其特征在于,所述阈值范围为[0屈光度D,1屈光度D]。
- 一种虚像的位置调节方法,其特征在于,所述方法应用于头戴式显示设备;所述方法包括:获取所述头戴式显示设备显示的图像;获取所述图像对应的虚像的目标位置,所述虚像的目标位置与所述图像所属的预设场景类型有关;将所述图像在所述目标位置形成虚像。
- 如权利要求27所述的方法,其特征在于,当所述图像属于不同的预设场景类型时,所述头戴式显示设备呈现虚像的目标位置不同。
- 如权利要求28所述的方法,其特征在于,所述预设场景类型包括以下至少一项:办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
- 如权利要求28或29所述的方法,其特征在于,当所述图像所属的预设场景类型为所述办公场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~10]屈光度D;或者,当所述图像所属的预设场景类型为所述阅读场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~10]屈光度D;或者,当所述图像所属的预设场景类型为所述会议场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~7.1]屈光度D;或者,当所述图像所属的预设场景类型为所述交互式游戏场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~7.5]屈光度D;或者,当所述图像所属的预设场景类型为所述视频场景类型时,所述头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
- 如权利要求27至30任一项所述的方法,其特征在于,所述图像所属的预设场景类型包括以下任一项:所述图像的内容所属的预设场景类型;或者,所述图像对应的对象所属的预设场景类型。
- 如权利要求27至31任一项所述的方法,其特征在于,所述获取所述图像对应的虚像的目标位置,包括:获取所述头戴式显示设备显示的图像所属的第一预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据所述预设场景类型与虚像的位置的对应关系,确定所述第一预设场景类型对应的所述目标位置。
- 如权利要求27所述的方法,其特征在于,所述获取所述头戴式显示设备显示的图像所属的第一预设场景类型,包括:接收终端设备发送的所述图像所属的所述第一预设场景类型;或者,确定所述图像所属的所述第一预设场景类型。
- 如权利要求27至31任一项所述的方法,其特征在于,所述获取所述图像对应的虚像的目标位置,包括:接收终端设备发送的所述图像对应的虚像的所述目标位置。
- 如权利要求27至34任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件;所述将所述图像在所述目标位置形成虚像,包括:获取所述显示组件与所述光学成像组件之间的第一距离;根据所述第一距离和所述目标位置,确定所述显示组件和/或所述光学成像组件的待移动距离;根据所述待移动距离,驱动所述显示组件和/或所述光学成像组件移动,将所述虚像调节至所述目标位置。
- 如权利要求27至34任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件;所述将所述图像在所述目标位置形成虚像,包括:接收终端设备发送的所述显示组件和/或所述光学成像组件的待移动的距离;根据所述待移动的距离,驱动所述显示组件和/或所述光学成像组件移动,将所述虚像调节至所述目标位置。
- 如权利要求27至34任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件,所述光学成像组件包括变焦透镜;所述将所述图像在所述目标位置形成虚像,包括:确定所述变焦透镜的第一焦距;根据所述第一焦距和所述目标位置,确定所述变焦透镜的待调节焦距;根据所述待调节焦距,改变施加于所述变焦透镜的电压信号或电流信号,将所述虚像调节至所述目标位置。
- 如权利要求27至34任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件,所述光学成像组件包括变焦透镜;所述将所述图像在所述目标位置形成虚像,包括:接收终端设备发送的所述变焦透镜的待调节焦距;根据所述待调节焦距,改变施加于所述变焦透镜的电压信号或电流信号,将所述虚像调节至所述目标位置。
- 如权利要求27至38任一项所述的方法,其特征在于,所述获取所述图像对应的虚像的目标位置,包括:获取视力参数;获取所述显示组件显示的所述图像所属的第一预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据所述视力参数、以及所述预设场景类型与虚像的位置的对应关系,确定所述第一 预设场景类型对应的所述目标位置。
- 如权利要求27至39任一项所述的方法,其特征在于,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。
- 如权利要求40所述的方法,其特征在于,所述阈值范围为[0屈光度D,1屈光度D]。
- 如权利要求27至41任一项所述的方法,其特征在于,所述方法还包括:确定所述虚像位置调节组件的工作模式,所述工作模式包括自动模式和手动模式,所述自动模式为驱动组件根据待移动距离或电压信号或电流信号,将所述虚像调节至所述目标位置;所述手动模式为用户通过旋转凸轮调焦机构将所述虚像调节至所述目标位置。
- 如权利要求32或39所述的方法,其特征在于,所述获取预设场景类型与虚像的位置的对应关系,包括:获取M个预设场景及所述M个预设场景分别对应的虚像的位置,所述M为大于1的整数;统计所述M个预设场景与所述M个预设场景分别对应的虚像的位置的分布关系;根据所述分布关系,确定所述预设场景与虚像的位置的对应关系。
- 如权利要求32或39所述的方法,其特征在于,所述获取预设场景与虚像的位置的对应关系,包括:获取M个预设场景及所述M个预设场景分别对应的虚像的位置,所述M为大于1的整数;将所述M个预设场景与所述M个预设场景分别对应的虚像的位置输入人工智能算法,得到所述预设场景与虚像的位置的对应关系。
- 如权利要求43或44所述的方法,其特征在于,所述获取M个预设场景及所述M个预设场景分别对应的虚像的位置,包括:接收用户输入的所述M个预设场景对应的虚像的位置;或者,获取M个预设场景中的图像的双目视差,分别根据所述M个预设场景中的图像的双目视差,确定所述M个预设场景对应的虚像的位置。
- 一种虚像的位置调节方法,其特征在于,应用于终端设备,所述方法包括:确定图像所属的第一预设场景类型,所述图像用于头戴式显示设备显示;获取预设场景类型与虚像的位置的对应关系;根据所述预设场景类型与虚像的位置的对应关系,确定所述第一预设场景类型对应的所述头戴式显示设备呈现虚像的目标位置,所述虚像的目标位置与所述图像所属的预设场景类型有关;根据所述目标位置,控制所述头戴式显示设备将所述图像在所述目标位置形成虚像。
- 如权利要求46所述的方法,其特征在于,所述控制所述头戴式显示设备将所述图像在所述目标位置形成虚像,包括:获取所述头戴式显示设备中的显示组件与光学成像组件之间的第一距离;根据所述第一距离和所述目标位置,确定所述显示组件和/或所述光学成像组件的待移动距离;根据所述待移动距离生成第一控制指令,并向所述头戴式显示设备发送所述第一控制指令,所述第一控制指令用于控制所述显示组件和/或所述光学成像组件移动,将所述虚像调节至所述目标位置。
- 如权利要求47所述的方法,其特征在于,所述获取所述头戴式显示设备中的显示组件与光学成像组件之间的第一距离,包括:接收所述头戴式显示设备中的虚像位置调节组件发送的所述光学成像组件和/或所述显示组件的位置;根据所述光学成像组件和/或所述显示组件的位置,确定所述第一距离。
- 如权利要求46所述的方法,其特征在于,所述控制所述头戴式显示设备将所述图像在所述目标位置形成虚像,包括:获取所述头戴式显示设备中的光学成像组件的第一焦距;根据所述第一焦距和所述目标位置,确定所述光学成像组件的待调节焦距;根据所述待调节焦距生成第二控制指令,并向所述头戴式显示设备发送所述第二控制指令,所述第二控制指令用于控制施加于所述光学成像组件的电压信号或电流信号,调节所述光学成像组件的焦距,将所述虚像调节至所述目标位置。
- 一种虚像的位置调节方法,其特征在于,所述方法应用于头戴式显示设备;所述方法包括:显示第一界面;当用户在所述第一界面中选择第一对象时,获取所述第一对象对应的虚像的目标位置,所述虚像的目标位置与所述第一对象所属的预设场景类型有关;针对选择所述第一对象后所触发显示的图像,将所述图像在所述目标位置形成虚像。
- 如权利要求50所述的方法,其特征在于,当所述第一对象属于不同的预设场景类型时,所述头戴式显示设备呈现虚像的目标位置不同。
- 如权利要求51所述的方法,其特征在于,所述预设场景类型包括以下至少一项:办公场景类型、阅读场景类型、会议场景类型、交互式游戏场景类型或视频场景类型。
- 如权利要求51或52所述的方法,其特征在于,当所述图像所属的预设场景类型为所述办公场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~10]屈光度D;或者,当所述图像所属的预设场景类型为所述阅读场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~10]屈光度D;或者,当所述图像所属的预设场景类型为所述会议场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.1~7.1]屈光度D;或者,当所述图像所属的预设场景类型为所述交互式游戏场景类型时,所述头戴式显示设备呈现虚像的目标位置与所述光学成像组件之间的距离范围为[0.5~7.5]屈光度D;或者,当所述图像所属的预设场景类型为所述视频场景类型时,所述头戴式显示设备呈现虚像的目标位置与光学成像组件之间的距离范围为[0.1~7]屈光度D。
- 如权利要求50至53任一项所述的方法,其特征在于,所述第一对象为应用。
- 如权利要求50至54任一项所述的方法,其特征在于,所述获取所述第一对象对应的虚像的目标位置,包括:获取所述第一对象所属的第二预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据所述预设场景类型与虚像的位置的对应关系,确定所述第二预设场景类型对应的所述目标位置。
- 如权利要求50至54任一项所述的方法,其特征在于,所述获取所述图像对应的虚像的目标位置,包括:接收终端设备发送的所述第一对象对应的虚像的所述目标位置。
- 如权利要求50至56任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件;所述针对选择所述第一对象后所触发显示的图像,将所述图像在所述目标位置形成虚像,包括:获取所述显示组件与所述光学成像组件之间的第一距离;根据所述第一距离和所述目标位置,确定所述显示组件和/或所述光学成像组件的待移动距离;根据所述待移动距离,驱动所述显示组件和/或所述光学成像组件移动,将所述虚像调节至所述目标位置。
- 如权利要求50至56任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件;所述针对选择所述第一对象后所触发显示的图像,将所述图像在所述目标位置形成虚像,包括:接收终端设备发送的所述显示组件和/或所述光学成像组件的待移动的距离;根据所述待移动的距离,驱动所述显示组件和/或所述光学成像组件移动,将所述虚像调节至所述目标位置。
- 如权利要求50至56任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件,所述光学成像组件包括变焦透镜;所述针对选择所述第一对象后所触发显示的图像,将所述图像在所述目标位置形成虚像,包括:确定所述变焦透镜的第一焦距;根据所述第一焦距和所述目标位置,确定所述变焦透镜的待调节焦距;根据所述待调节焦距,改变施加于所述变焦透镜的电压信号或电流信号,将所述虚像调节至所述目标位置。
- 如权利要求50至56任一项所述的方法,其特征在于,所述头戴式显示设备包括显示组件和光学成像组件,所述光学成像组件包括变焦透镜;所述针对选择所述第一对象后所触发显示的图像,将所述图像在所述目标位置形成虚像,包括:接收终端设备发送的所述变焦透镜的待调节焦距;根据所述待调节焦距,改变施加于所述变焦透镜的电压信号或电流信号,将所述虚像调节至所述目标位置。
- 如权利要求50至54任一项所述的方法,其特征在于,所述获取所述第一对象对应的虚像的目标位置,包括:获取视力参数;获取所述第一对象所属的第二预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据所述视力参数、以及所述预设场景类型与虚像的位置的对应关系,确定所述第二预设场景类型对应的所述目标位置。
- 如权利要求50至61任一项所述的方法,其特征在于,所述虚像的目标位置与人眼之间的距离与人眼的双目会聚深度的差值的绝对值小于阈值。
- 如权利要求62所述的方法,其特征在于,所述阈值范围[0屈光度D,1屈光度D]。
- 如权利要求50至63任一项所述的方法,其特征在于,所述方法还包括:确定所述虚像位置调节组件的工作模式,所述工作模式包括自动模式和手动模式,所述自动模式为驱动组件根据待移动距离或电压信号或电流信号,将所述虚像调节至所述目标位置;所述手动模式为用户通过旋转凸轮调焦机构将所述虚像调节至所述目标位置。
- 如权利要求55或61所述的方法,其特征在于,所述获取预设场景类型与虚像的位置的对应关系,包括:获取M个预设场景及所述M个预设场景分别对应的虚像的位置,所述M为大于1的整数;统计所述M个预设场景与所述M个预设场景分别对应的虚像的位置的分布关系;根据所述分布关系,确定所述预设场景与虚像的位置的对应关系。
- 如权利要求55或61所述的方法,其特征在于,所述获取预设场景与虚像的位置的对应关系,包括:获取M个预设场景及所述M个预设场景分别对应的虚像的位置,所述M为大于1的整数;将所述M个预设场景与所述M个预设场景分别对应的虚像的位置输入人工智能算法,得到所述预设场景与虚像的位置的对应关系;获取用户输入的M个预设场景对应的虚像的位置,所述M为大于1的整数;获取M个预设场景中图像的双目视差,计算得到M个预设场景中虚像的位置。
- 一种虚像的位置调节方法,其特征在于,应用于终端设备,所述方法包括:获取用户在头戴式显示设备显示的第一界面中选择的第一对象;获取所述第一对象所属的第二预设场景类型;获取预设场景类型与虚像的位置的对应关系;根据所述预设场景类型与虚像的位置的对应关系,确定所述第二预设场景类型对应的所述头戴式显示设备呈现虚像的目标位置,所述虚像的目标位置与所述第一对象所属的预设场景类型有关;根据所述目标位置,控制所述头戴式显示设备将所述图像在所述目标位置形成虚像。
- 如权利要求67所述的方法,其特征在于,所述控制所述头戴式显示设备将所述图像在所述目标位置形成虚像,包括:获取所述头戴式显示设备中的显示组件与光学成像组件之间的第一距离;根据所述第一距离和所述目标位置,确定所述显示组件和/或所述光学成像组件的待移动距离;根据所述待移动距离生成第一控制指令,并向所述头戴式显示设备发送所述第一控制指令,所述第一控制指令用于控制所述显示组件和/或所述光学成像组件移动,将所述虚像调节至所述目标位置。
- 如权利要求68所述的方法,其特征在于,所述获取所述头戴式显示设备中的显示组件与光学成像组件之间的第一距离,包括:接收所述头戴式显示设备中的虚像位置调节组件发送的所述光学成像组件和/或所述显示组件的位置;根据所述光学成像组件和/或所述显示组件的位置,确定所述第一距离。
- 如权利要求67所述的方法,其特征在于,所述控制所述头戴式显示设备将所述图像在所述目标位置形成虚像,包括:获取所述头戴式显示设备中的光学成像组件的第一焦距;根据所述第一焦距和所述目标位置,确定所述光学成像组件的待调节焦距;根据所述待调节焦距生成第二控制指令,并向所述头戴式显示设备发送所述第二控制指令,所述第二控制指令用于控制施加于所述光学成像组件的电压信号或电流信号,调节所述光学成像组件的焦距,将所述虚像调节至所述目标位置。
- 一种虚像的位置调节方法,其特征在于,应用于显示模组,所述显示模组包括显示组件、光学成像组件和虚像位置调节组件,所述显示组件用于显示图像,所述光学成像组件用于将所述图像形成虚像,所述虚像位置调节组件用于调节所述光学成像组件和/或所述显示组件;所述方法包括:获取所述显示组件显示的图像;获取所述图像对应的虚像的目标位置,所述虚像的目标位置与所述图像所属的预设场景类型有关;控制所述虚像位置调节组件调节所述光学成像组件和/或所述显示组件,将所述图像在所述目标位置形成虚像。
- 如权利要求71所述的方法,其特征在于,当所述图像属于不同的预设场景类型时,所述头戴式显示设备呈现虚像的目标位置不同。
- 一种虚像的位置调节方法,其特征在于,应用于显示模组,所述显示模组包括显示组件、光学成像组件和虚像位置调节组件,所述显示组件用于显示图像,所述光学成像组件用于将所述图像形成虚像,所述虚像位置调节组件用于调节所述光学成像组件和/或所述显示组件;所述方法包括:显示第一界面;当用户在所述第一界面中选择第一对象时,获取所述第一对象对应的虚像的目标位置,所述虚像的目标位置与所述第一对象所属的预设场景类型有关;针对选择所述第一对象后所触发所述显示组件显示的图像,控制所述虚像位置调节组件调节所述光学成像组件和/或所述显示组件,将所述图像在所述目标位置形成虚像。
- 如权利要求73所述的方法,其特征在于,当所述图像属于不同的预设场景类型时,所述头戴式显示设备呈现虚像的目标位置不同。
- A virtual image position adjustment apparatus, applied to a head-mounted display device, wherein the virtual image position adjustment apparatus comprises an obtaining module and a virtual image formation module; the obtaining module is configured to obtain an image displayed by the head-mounted display device and a target position of a virtual image corresponding to the image, wherein the target position of the virtual image is related to a preset scene type to which the image belongs; and the virtual image formation module is configured to form the image into a virtual image at the target position.
- The apparatus according to claim 75, wherein when the image belongs to different preset scene types, the virtual image position adjustment apparatus presents the virtual image at different target positions.
- The apparatus according to claim 76, wherein the preset scene types comprise at least one of the following: an office scene type, a reading scene type, a conference scene type, an interactive game scene type, or a video scene type.
- The apparatus according to claim 76 or 77, wherein when the preset scene type to which the image belongs is the office scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.1, 10] diopters (D); or when the preset scene type to which the image belongs is the reading scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.5, 10] diopters (D); or when the preset scene type to which the image belongs is the conference scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.1, 7.1] diopters (D); or when the preset scene type to which the image belongs is the interactive game scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.5, 7.5] diopters (D); or when the preset scene type to which the image belongs is the video scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.1, 7] diopters (D).
- The apparatus according to any one of claims 75 to 78, wherein the preset scene type to which the image belongs comprises either of the following: a preset scene type to which the content of the image belongs; or a preset scene type to which an object corresponding to the image belongs.
- The apparatus according to any one of claims 75 to 79, wherein the obtaining module is configured to: obtain a first preset scene type to which the image displayed by the head-mounted display device belongs; obtain a correspondence between preset scene types and virtual image positions; and determine, according to the correspondence between the preset scene types and the virtual image positions, the target position corresponding to the first preset scene type.
- The apparatus according to claim 80, wherein the obtaining module is configured to: receive the first preset scene type to which the image belongs, sent by a terminal device; or determine the first preset scene type to which the image belongs.
- The apparatus according to any one of claims 75 to 79, wherein the obtaining module is configured to receive the target position of the virtual image corresponding to the image, sent by a terminal device.
- The apparatus according to any one of claims 75 to 82, wherein the obtaining module is configured to: obtain a first distance between a display assembly and an optical imaging assembly in the head-mounted display device; and determine, according to the first distance and the target position, a distance by which the display assembly and/or the optical imaging assembly is to be moved; and the virtual image formation module is configured to drive, according to the to-be-moved distance, the display assembly and/or the optical imaging assembly to move, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 75 to 82, wherein the obtaining module is configured to receive, from a terminal device, a distance by which a display assembly and/or an optical imaging assembly in the head-mounted display device is to be moved; and the virtual image formation module is configured to drive, according to the to-be-moved distance, the display assembly and/or the optical imaging assembly to move, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 75 to 82, wherein the obtaining module is configured to: determine a first focal length of a varifocal lens in the head-mounted display device; and determine, according to the first focal length and the target position, a focal length to which the varifocal lens is to be adjusted; and the virtual image formation module is configured to change, according to the to-be-adjusted focal length, a voltage signal or a current signal applied to the varifocal lens, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 75 to 82, wherein the obtaining module is configured to receive, from a terminal device, a focal length to which a varifocal lens in the head-mounted display device is to be adjusted; and the virtual image formation module is configured to change, according to the to-be-adjusted focal length, a voltage signal or a current signal applied to the varifocal lens, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 75 to 79, wherein the obtaining module is configured to: obtain a vision parameter; obtain a first preset scene type to which the image displayed by the head-mounted display device belongs; obtain a correspondence between preset scene types and virtual image positions; and determine, according to the vision parameter and the correspondence between the preset scene types and the virtual image positions, the target position corresponding to the first preset scene type.
- The apparatus according to any one of claims 75 to 87, wherein an absolute value of the difference between the distance from the target position of the virtual image to the human eye and the binocular convergence depth of the human eye is less than a threshold.
- The apparatus according to claim 88, wherein the threshold is in the range of [0 diopters (D), 1 diopter (D)].
- A virtual image position adjustment apparatus, comprising a determining module, an obtaining module, and a control module, wherein the determining module is configured to determine a first preset scene type to which an image belongs, the image being for display by a head-mounted display device; the obtaining module is configured to obtain a correspondence between preset scene types and virtual image positions; the determining module is further configured to determine, according to the correspondence between the preset scene types and the virtual image positions, a target position, corresponding to the first preset scene type, at which the head-mounted display device presents a virtual image, wherein the target position of the virtual image is related to the preset scene type to which the image belongs; and the control module is configured to control, according to the target position, the head-mounted display device to form the image into a virtual image at the target position.
- The apparatus according to claim 90, wherein the obtaining module is configured to obtain a first distance between a display assembly and an optical imaging assembly in the head-mounted display device; the determining module is configured to determine, according to the first distance and the target position, a distance by which the display assembly and/or the optical imaging assembly is to be moved; and the control module is configured to generate a first control instruction according to the to-be-moved distance and send the first control instruction to the head-mounted display device, wherein the first control instruction is used to control the display assembly and/or the optical imaging assembly to move, so as to adjust the virtual image to the target position.
- The apparatus according to claim 91, wherein the obtaining module is configured to receive a position of the optical imaging assembly and/or the display assembly sent by the head-mounted display device; and the determining module is configured to determine the first distance according to the position of the optical imaging assembly and/or the display assembly.
- The apparatus according to claim 90, wherein the obtaining module is configured to obtain a first focal length of a varifocal lens in the head-mounted display device; the determining module is configured to determine, according to the first focal length and the target position, a focal length to which the varifocal lens is to be adjusted; and the control module is configured to generate a second control instruction according to the to-be-adjusted focal length and send the second control instruction to the head-mounted display device, wherein the second control instruction is used to control a voltage signal or a current signal applied to the varifocal lens, so as to adjust the focal length of the varifocal lens and adjust the virtual image to the target position.
- A virtual image position adjustment apparatus, comprising a display module, an obtaining module, and a virtual image formation module, wherein the display module is configured to display a first interface; when a user selects a first object in the first interface, the obtaining module is configured to obtain a target position of a virtual image corresponding to the first object, wherein the target position of the virtual image is related to a preset scene type to which the first object belongs; and for an image that is triggered to be displayed after the first object is selected, the virtual image formation module is configured to form the image into a virtual image at the target position.
- The apparatus according to claim 94, wherein when the first object belongs to different preset scene types, the head-mounted display device presents the virtual image at different target positions.
- The apparatus according to claim 95, wherein the preset scene types comprise at least one of the following: an office scene type, a reading scene type, a conference scene type, an interactive game scene type, or a video scene type.
- The apparatus according to claim 95 or 96, wherein when the preset scene type to which the image belongs is the office scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.1, 10] diopters (D); or when the preset scene type to which the image belongs is the reading scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.5, 10] diopters (D); or when the preset scene type to which the image belongs is the conference scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.1, 7.1] diopters (D); or when the preset scene type to which the image belongs is the interactive game scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.5, 7.5] diopters (D); or when the preset scene type to which the image belongs is the video scene type, the distance between the target position at which the head-mounted display device presents the virtual image and the optical imaging assembly is in the range of [0.1, 7] diopters (D).
- The apparatus according to any one of claims 94 to 97, wherein the first object is an application.
- The apparatus according to any one of claims 94 to 98, wherein the obtaining module is configured to: obtain a second preset scene type to which the first object belongs; obtain a correspondence between preset scene types and virtual image positions; and determine, according to the correspondence between the preset scene types and the virtual image positions, the target position corresponding to the second preset scene type.
- The apparatus according to any one of claims 94 to 98, wherein the obtaining module is configured to receive the target position of the virtual image corresponding to the first object, sent by a terminal device.
- The apparatus according to any one of claims 94 to 100, wherein the obtaining module is configured to: obtain a first distance between a display assembly and an optical imaging assembly in the head-mounted display device; and determine, according to the first distance and the target position, a distance by which the display assembly and/or the optical imaging assembly is to be moved; and the virtual image formation module is configured to drive, according to the to-be-moved distance, the display assembly and/or the optical imaging assembly to move, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 94 to 100, wherein the obtaining module is configured to receive, from a terminal device, a distance by which a display assembly and/or an optical imaging assembly in the head-mounted display device is to be moved; and the virtual image formation module is configured to drive, according to the to-be-moved distance, the display assembly and/or the optical imaging assembly to move, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 94 to 100, wherein the obtaining module is configured to: determine a first focal length of a varifocal lens in the head-mounted display device; and determine, according to the first focal length and the target position, a focal length to which the varifocal lens is to be adjusted; and the virtual image formation module is configured to change, according to the to-be-adjusted focal length, a voltage signal or a current signal applied to the varifocal lens, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 94 to 100, wherein the obtaining module is configured to receive, from a terminal device, a focal length to which a varifocal lens in the head-mounted display device is to be adjusted; and the virtual image formation module is configured to change, according to the to-be-adjusted focal length, a voltage signal or a current signal applied to the varifocal lens, so as to adjust the virtual image to the target position.
- The apparatus according to any one of claims 94 to 98, wherein the obtaining module is configured to: obtain a vision parameter; obtain a second preset scene type to which the first object belongs; obtain a correspondence between preset scene types and virtual image positions; and determine, according to the vision parameter and the correspondence between the preset scene types and the virtual image positions, the target position corresponding to the second preset scene type.
- A virtual image position adjustment apparatus, applied to a terminal device, comprising an obtaining module, a determining module, and a control module, wherein the obtaining module is configured to obtain a first object selected by a user in a first interface displayed by a head-mounted display device, a second preset scene type to which the first object belongs, and a correspondence between preset scene types and virtual image positions; the determining module is configured to determine, according to the correspondence between the preset scene types and the virtual image positions, a target position, corresponding to the second preset scene type, at which the head-mounted display device presents a virtual image, wherein the target position of the virtual image is related to the preset scene type to which the first object belongs; and the control module is configured to control, according to the target position, the head-mounted display device to form the image into a virtual image at the target position.
- The apparatus according to claim 106, wherein the obtaining module is configured to obtain a first distance between a display assembly and an optical imaging assembly in the head-mounted display device; the determining module is configured to determine, according to the first distance and the target position, a distance by which the display assembly and/or the optical imaging assembly is to be moved; and the control module is configured to generate a first control instruction according to the to-be-moved distance and send the first control instruction to the head-mounted display device, wherein the first control instruction is used to control the display assembly and/or the optical imaging assembly to move, so as to adjust the virtual image to the target position.
- The apparatus according to claim 107, wherein the obtaining module is configured to receive a position of the optical imaging assembly and/or the display assembly sent by the head-mounted display device; and the determining module is configured to determine the first distance according to the position of the optical imaging assembly and/or the display assembly.
- The apparatus according to claim 106, wherein the obtaining module is configured to obtain a first focal length of a varifocal lens in the head-mounted display device; the determining module is configured to determine, according to the first focal length and the target position, a focal length to which the varifocal lens is to be adjusted; and the control module is configured to generate a second control instruction according to the to-be-adjusted focal length and send the second control instruction to the head-mounted display device, wherein the second control instruction is used to control a voltage signal or a current signal applied to the varifocal lens, so as to adjust the focal length of the varifocal lens and adjust the virtual image to the target position.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21909280.6A EP4258039A4 (en) | 2020-12-24 | 2021-12-17 | DISPLAY MODULE AND METHOD AND DEVICE FOR ADJUSTING THE POSITION OF A VIRTUAL IMAGE |
US18/340,195 US20230333596A1 (en) | 2020-12-24 | 2023-06-23 | Display Module, and Virtual Image Location Adjustment Method and Apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011554651.7 | 2020-12-24 | ||
CN202011554651.7A CN114675417A (zh) | 2020-12-24 | 2020-12-24 | Display module, and virtual image position adjustment method and apparatus
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/340,195 Continuation US20230333596A1 (en) | 2020-12-24 | 2023-06-23 | Display Module, and Virtual Image Location Adjustment Method and Apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022135284A1 true WO2022135284A1 (zh) | 2022-06-30 |
Family
ID=82070519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/139033 WO2022135284A1 (zh) | 2020-12-24 | 2021-12-17 | Display module, and virtual image position adjustment method and apparatus |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230333596A1 (zh) |
EP (1) | EP4258039A4 (zh) |
CN (1) | CN114675417A (zh) |
WO (1) | WO2022135284A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024058507A1 (ko) * | 2022-09-15 | 2024-03-21 | 삼성전자주식회사 | Electronic device for minimizing the difference between real space and virtual space, and manufacturing method therefor |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114994926B (zh) * | 2022-07-18 | 2022-11-22 | 京东方艺云(杭州)科技有限公司 | Display adjustment apparatus and display system |
CN114942523A (zh) * | 2022-07-26 | 2022-08-26 | 歌尔光学科技有限公司 | Optical module and head-mounted display device |
CN115494649B (zh) * | 2022-11-16 | 2023-01-31 | 深圳惠牛科技有限公司 | Diopter adjustment method for augmented reality display apparatus, and augmented reality display apparatus |
TWI830516B (zh) * | 2022-11-30 | 2024-01-21 | 新鉅科技股份有限公司 | Optical lens assembly and head-mounted electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105068648A (zh) * | 2015-08-03 | 2015-11-18 | 众景视界(北京)科技有限公司 | Head-mounted intelligent interaction system |
CN108124509A (zh) * | 2017-12-08 | 2018-06-05 | 深圳前海达闼云端智能科技有限公司 | Image display method, wearable smart device, and storage medium |
CN108663803A (zh) * | 2017-03-30 | 2018-10-16 | 腾讯科技(深圳)有限公司 | Virtual reality glasses, and lens tube adjustment method and apparatus |
CN108700745A (zh) * | 2016-12-26 | 2018-10-23 | 华为技术有限公司 | Position adjustment method and terminal |
CN109188688A (zh) * | 2018-11-14 | 2019-01-11 | 上海交通大学 | Near-eye display device based on diffractive optical element |
CN110543021A (zh) * | 2019-07-31 | 2019-12-06 | 华为技术有限公司 | Display system, VR module, and wearable device |
US20200166757A1 (en) * | 2018-11-26 | 2020-05-28 | Jvckenwood Corporation | Head mounted display, head mounted display system, and setting method for head mounted display |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130241805A1 (en) * | 2012-03-15 | 2013-09-19 | Google Inc. | Using Convergence Angle to Select Among Different UI Elements |
WO2018156523A1 (en) * | 2017-02-21 | 2018-08-30 | Oculus Vr, Llc | Focus adjusting multiplanar head mounted display |
CN111830714B (zh) * | 2020-07-24 | 2024-03-29 | 闪耀现实(无锡)科技有限公司 | 图像显示控制方法、图像显示控制装置及头戴式显示设备 |
2020
- 2020-12-24 CN CN202011554651.7A patent/CN114675417A/zh active Pending
2021
- 2021-12-17 EP EP21909280.6A patent/EP4258039A4/en active Pending
- 2021-12-17 WO PCT/CN2021/139033 patent/WO2022135284A1/zh unknown
2023
- 2023-06-23 US US18/340,195 patent/US20230333596A1/en active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP4258039A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN114675417A (zh) | 2022-06-28 |
US20230333596A1 (en) | 2023-10-19 |
EP4258039A4 (en) | 2024-04-10 |
EP4258039A1 (en) | 2023-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022135284A1 (zh) | Display module, and virtual image position adjustment method and apparatus | |
US9298012B2 (en) | Eyebox adjustment for interpupillary distance | |
US9927614B2 (en) | Augmented reality display system with variable focus | |
US20210041705A1 (en) | Electronic Device With A Tunable Lens | |
US20160349509A1 (en) | Mixed-reality headset | |
US11221479B2 (en) | Varifocal optical assembly providing astigmatism compensation | |
WO2017027139A1 (en) | Placement of a computer generated display with focal plane at finite distance using optical devices and a seethrough head-mounted display incorporating the same | |
US20210311314A1 (en) | Wearable apparatus and unmanned aerial vehicle system | |
US11726328B2 (en) | Accommodation adjustable and magnification corrective optical system | |
US11675192B2 (en) | Hybrid coupling diffractive optical element | |
US11598964B2 (en) | Freeform varifocal optical assembly | |
JP2003241100A (ja) | Decentered optical system |
EP4312064A1 (en) | Reflective fresnel folded optic display | |
GB2557942A (en) | Apparatus to achieve compact head mounted display with reflectors and eyepiece element | |
US20230084541A1 (en) | Compact imaging optics using spatially located, free form optical components for distortion compensation and image clarity enhancement | |
WO2023158742A1 (en) | Display systems with waveguide configuration to mitigate rainbow effect | |
US20240369836A1 (en) | Achromatic lens including fresnel optical element for near eye display | |
US20230360567A1 (en) | Virtual reality display system | |
US20240353687A1 (en) | Catadioptric lens for near eye display | |
US20230041406A1 (en) | Tunable lens with deformable reflector | |
US20240118535A1 (en) | Tunable lens with translatable reflector | |
CN115590733A (zh) | Vision training method and apparatus |
WO2023219925A1 (en) | Virtual reality display system | |
WO2023015021A1 (en) | Tunable lens with deformable reflector | |
CN114245092A (zh) | Multi-depth near-eye display method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21909280 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2021909280 Country of ref document: EP Effective date: 20230707 |