WO2024051476A1 - Head-mounted virtual reality device - Google Patents

Head-mounted virtual reality device

Info

Publication number
WO2024051476A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
user
camera
virtual reality
head
Prior art date
Application number
PCT/CN2023/113818
Other languages
English (en)
French (fr)
Inventor
王强
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024051476A1

Links

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays

Definitions

  • the embodiments of the present disclosure relate to the field of virtual reality technology, and in particular, to a head-mounted virtual reality device.
  • Head-mounted virtual reality equipment is a product that combines simulation technology with computer graphics, human-computer interface technology, multimedia technology, sensing technology, network technology and other technologies. It is a brand-new means of human-computer interaction created with the help of computers and the latest sensor technology.
  • the head-mounted virtual reality device includes an eye tracking device, thereby enabling the head-mounted virtual reality device to determine the user's line of sight direction and inter-pupillary distance based on the user's eyeball position.
  • the eye tracking device in the related art has a problem of low accuracy.
  • Embodiments of the present disclosure provide a head-mounted virtual reality device to solve the technical problem of low accuracy of eye tracking devices in the related art.
  • Embodiments of the present disclosure provide a head-mounted virtual reality device, including an eye movement recognition component and a processor.
  • the eye movement recognition component includes a light source component, a first camera and a second camera;
  • the light source component is configured to project a plurality of strip-shaped light rays to the user's eyes when the head-mounted virtual reality device is used, and at least part of the strip-shaped light is projected to the user's iris;
  • the first camera is configured to take a first eye picture of the user when the head-mounted virtual reality device is used, and the first eye picture includes the reflection pattern formed by the strip-shaped light on the user's eyes;
  • the second camera is configured to take a second eye picture of the user when the head-mounted virtual reality device is used, and the second eye picture includes the reflection pattern formed by the strip-shaped light on the user's eyes;
  • the processor is connected to the first camera and the second camera respectively, and the processor is configured to determine an eye depth map with depth information based on the first eye picture and the second eye picture, and to determine the user's iris center coordinates based on the eye depth map.
  • the second eye picture and the first eye picture have an overlapping area, and the overlapping area includes a picture formed by at least part of the iris;
  • Determining an eye depth map with depth information according to the first eye picture and the second eye picture includes:
  • the third eye depth map is fused with at least one of the first eye depth map and the second eye depth map to obtain an eye depth map with depth information.
  • it also includes two lens barrel assemblies, the two lens barrel assemblies respectively correspond to the two eyes of the user, and when the head-mounted virtual reality device is used, the two lens barrel assemblies display a virtual scene to the user;
  • One of the lens barrel components corresponds to one of the eye movement recognition components.
  • the first camera and the light source component are arranged at the lower part of the lens barrel assembly.
  • the second camera is disposed on the upper part of the lens barrel assembly.
  • it also includes a housing, the two lens barrel assemblies are disposed in the housing, and a first space for accommodating the glasses worn by the user is provided between the first side of the housing and the lens barrel assemblies; the first side of the housing is the side of the housing facing the user's eyes when the head-mounted virtual reality device is worn on the user's eyes;
  • the first camera is located at the lower part of one end of the first space along the first direction and is disposed on the housing, and the first direction is the direction of the line connecting the centers of the two lens barrel assemblies.
  • the second camera is located on a side close to the other lens barrel assembly and is disposed on the housing.
  • the light source assembly is disposed directly below the central axis of the lens barrel assembly, and is disposed on the housing.
  • the two eye movement recognition components respectively correspond to the two eyes of the user
  • the processor obtains the center coordinates of the iris of the user's two eyes based on the information fed back by the two eye movement recognition components;
  • the processor obtains the interpupillary distance according to the center coordinates of the iris of the user's two eyes.
  • the light source component includes a vertical cavity surface emitting laser and a diffractive optical element.
  • the diffractive optical element has a plurality of vertically and horizontally interlaced lines.
  • the light emitted by the vertical cavity surface emitting laser passes through the multiple vertically and horizontally interlaced lines of the diffractive optical element and is emitted to the user's eyes, forming on the user's eyes a projection area with a plurality of vertically and horizontally interlaced lines.
  • the projection area at least covers a square area with a length and width of 28 mm.
  • the distance over which the light emitted from the light source assembly travels to the user's eyeball is 25 mm-27 mm;
  • the diffractive optical element has more than 21 longitudinal lines and more than 21 transverse lines, and the spacing between two adjacent longitudinal lines and between two adjacent transverse lines is 3.5-5 mm.
  • the vertical cavity surface emitting laser is pulse driven, with a pulse width of 2 microseconds, a pulse duty cycle of 0.06%, and a frequency of 30 Hz.
  • the fields of view of the first camera and the second camera are both 90°, the depth of field is 20 mm-40 mm, the resolution is greater than 400×400, and at a spatial frequency of 56 lp/mm the full-field modulation transfer function value is greater than 0.5.
  • Figure 1 is a schematic diagram of a head-mounted virtual reality device according to an embodiment of the present disclosure
  • Figure 2 is a ray diagram when the first camera is located at the first position
  • Figure 3 is a front view of a user wearing a head-mounted virtual reality device provided by an embodiment of the present disclosure.
  • 100. Eye movement recognition component; 110. Light source component; 120. First camera; 130. Second camera; 200. Lens barrel assembly; 210. Display screen; 220. Lens barrel; 230. Convex lens; 300. Glasses; 310. Spectacle frame; 400. Housing.
  • the eye tracking device of the head-mounted virtual reality device can determine the user's line of sight direction and inter-pupillary distance according to the position of the user's eyeballs.
  • the eye tracking device in the related art has the problems of high calibration complexity and low calibration accuracy.
  • the inventor found through research that if the eye tracking device of the head-mounted virtual reality device includes an infrared camera and multiple infrared emitting diodes, the multiple infrared emitting diodes and the infrared camera cooperate to realize the eye movement recognition function of the head-mounted virtual reality device.
  • the method is: when the head-mounted virtual reality device is worn on the user's eyes, multiple infrared emitting diodes emit light to the user's eyeballs to form reflected light spots on the user's eyeballs, and the infrared camera captures the reflected light spots formed on the user's eyeballs. Then the pupil center position is calculated according to the optical solution method, and then the user's interpupillary distance is obtained. The head-mounted virtual reality device adjusts its own IPD value according to the user's interpupillary distance.
  • the infrared camera is usually installed on the casing of the head-mounted virtual reality device, it is limited by its shooting angle.
  • embodiments of the present disclosure identify the iris position through a scheme of binocular cameras and 3D structured light to obtain the user's interpupillary distance.
  • compared with that scheme, the measurement accuracy of this solution is significantly improved, and the interpupillary distance can be obtained at a set frequency, improving the user experience.
  • Figure 1 is a schematic diagram of a head-mounted virtual reality device according to an embodiment of the present disclosure
  • Figure 2 is a ray diagram when the first camera is located at the first position
  • Figure 3 is a front view of a user wearing the head-mounted virtual reality device provided by an embodiment of the present disclosure.
  • the head-mounted virtual reality device includes an eye movement recognition component 100 and a processor.
  • the eye movement recognition component 100 includes a light source component 110, a first camera 120 and a second camera 130.
  • the light source assembly 110 is configured to project multiple strip-shaped light rays to the user's eyes when the head-mounted virtual reality device is used, and at least part of the strip-shaped light is projected to the user's iris;
  • the first camera 120 is configured to take a first eye picture of the user when the head-mounted virtual reality device is used, and the first eye picture includes a reflection pattern formed by the strip-shaped light on the user's eyes;
  • the second camera 130 is configured to take a second eye picture of the user when the head-mounted virtual reality device is used, and the second eye picture includes a reflection pattern formed by the strip-shaped light on the user's eyes;
  • the processor is signal-connected to the first camera 120 and the second camera 130 respectively, and the processor is configured to determine an eye depth map with depth information based on the first eye picture and the second eye picture, and to determine the user's iris center coordinates based on the eye depth map.
  • based on the difference between the depth of the iris edge and the depth of the sclera, the head-mounted virtual reality device provided by the embodiment of the present disclosure obtains an eye depth map with depth information, obtains the position coordinates of part of the iris edge, and determines the user's iris center coordinates based on the position coordinates of the iris edge.
  • the head-mounted virtual reality device determines the eye depth map with depth information based on the first eye picture and the second eye picture in the following manner:
  • a first eye depth map having depth information is determined based on the first eye picture, and/or a second eye depth map having depth information is determined based on the second eye picture. That is to say, the first camera 120 cooperates with the light source component 110, and the first camera 120 takes a first eye picture with strip-shaped light emitted by the light source component 110.
  • the first eye picture is processed by the processor to obtain a first eye depth map with depth information; and/or the second camera 130 cooperates with the light source component 110, the second camera 130 captures a second eye picture with the strip-shaped light emitted by the light source component 110, and that picture is processed by the processor to obtain a second eye depth map with depth information.
  • a third eye depth map with depth information is determined. That is to say, the first eye picture taken by the first camera 120 and the second eye picture taken by the second camera 130 are processed by the processor to obtain another depth map with depth information, which is the third eye depth map.
  • the third eye depth map is fused with at least one of the first eye depth map and the second eye depth map to obtain an eye depth map with depth information.
  • the resolution of the eye depth map obtained after fusing two or three depth maps is greater than the resolution of a single depth map. After fusing two or three depth maps, the user's iris center coordinates are obtained based on the fused depth map.
  • in the head-mounted virtual reality device provided by the embodiments of the present disclosure, the first eye picture and the second eye picture containing the strip-shaped light emitted by the light source assembly are obtained through the cooperation of the light source assembly, the first camera and the second camera;
  • the processor determines an eye depth map with depth information based on the first eye picture and the second eye picture, and then determines the user's iris center coordinates based on the eye depth map. This way of obtaining the user's iris center coordinates greatly improves the accuracy with which the head-mounted virtual reality device determines the user's iris center coordinates and improves the user's experience.
  • the second eye picture and the first eye picture have an overlapping area, and the overlapping area includes a picture formed by at least part of the iris. That is to say, the eye picture taken by the first camera 120 contains a picture formed by at least part of the iris, the eye picture taken by the second camera 130 contains a picture formed by at least part of the iris, and the iris picture in the eye picture taken by the first camera 120 and the iris picture in the eye picture taken by the second camera 130 have an overlapping portion.
  • the above-mentioned method of determining the eye depth map with depth information based on the first eye picture and the second eye picture includes the following three situations:
  • a first eye depth map with depth information is determined.
  • the first eye depth map with depth information is obtained, based on the principle of 3D structured light, from the first eye picture, captured by the first camera 120, that contains the strip-shaped light emitted by the light source assembly 110. That is to say, the head-mounted virtual reality device obtains the first eye depth map with depth information through the cooperation of the first camera 120 and the light source component 110, based on the principle of 3D structured light, and through analysis by the processor.
  • a third eye depth map with depth information is determined.
  • the third eye depth map with depth information is obtained, based on the principle of a binocular camera, from the first eye picture taken by the first camera 120 and the second eye picture taken by the second camera 130. That is to say, the head-mounted virtual reality device obtains the third eye depth map with depth information through the cooperation of the first camera 120 and the second camera 130, based on the principle of the binocular camera and through analysis by the processor.
  • the third eye depth map and the first eye depth map are fused to obtain an eye depth map with depth information.
  • the third eye depth map and the first eye depth map may be fused in an image feature fusion manner to obtain a depth map whose resolution is greater than the resolution of the third eye depth map and the first eye depth map. After the third eye depth map and the first eye depth map are fused, the user's iris center coordinates are obtained based on the fused depth map.
  • the first case in the above method of determining the eye depth map with depth information based on the first eye picture and the second eye picture is: based on the principle of 3D structured light, the first eye depth map with depth information is obtained; based on the principle of binocular cameras, the third eye depth map with depth information is obtained; and the third eye depth map and the first eye depth map form a depth map with a higher resolution, which can increase the accuracy of obtaining the iris center coordinates.
  • compared with the related-art scheme in which multiple infrared emitting diodes cooperate with an infrared camera, the head-mounted virtual reality device's solution for obtaining the interpupillary distance can increase the accuracy by more than 10 times, and the head-mounted virtual reality device can obtain the interpupillary distance at a set frequency to improve the user experience.
  • a second eye depth map with depth information is determined.
  • the method of obtaining the second eye depth map with depth information is the same as the method of obtaining the first eye depth map with depth information, and will not be described again here.
  • a third eye depth map with depth information is determined. This process is the same as the above-mentioned process of obtaining the third eye depth map with depth information, and will not be described again here.
  • the third eye depth map and the second eye depth map are fused to obtain an eye depth map with depth information.
  • the method of fusing the third eye depth map and the second eye depth map is the same as the above-mentioned method of fusing the third eye depth map and the first eye depth map. The resolution of the depth map obtained after fusing the third eye depth map and the second eye depth map is greater than the resolution of the third eye depth map and the second eye depth map. After the third eye depth map and the second eye depth map are fused, the user's iris center coordinates are obtained based on the fused depth map.
  • the second case in the above method of determining the eye depth map with depth information based on the first eye picture and the second eye picture is: based on the principle of 3D structured light, the second eye depth map with depth information is obtained; based on the principle of a binocular camera, the third eye depth map with depth information is obtained; and the third eye depth map and the second eye depth map are fused to form a depth map with a higher resolution. This method can also increase the accuracy of obtaining the iris center coordinates.
  • a first eye depth map with depth information is determined
  • a second eye depth map with depth information is determined. That is, the head-mounted virtual reality device obtains the first eye depth map with depth information through the cooperation of the first camera 120 and the light source component 110 based on the principle of 3D structured light and through analysis by the processor; through the second camera The cooperation between 130 and the light source component 110 is based on the principle of 3D structured light and through analysis by the processor, a second eye depth map with depth information is obtained.
  • a third eye depth map with depth information is determined. This process is the same as the above-mentioned process of obtaining the third eye depth map with depth information, and will not be described again here.
  • the third eye depth map is fused with the first eye depth map and the second eye depth map to obtain an eye depth map with depth information.
  • the first eye depth map, the second eye depth map and the third eye depth map may be fused in an image feature fusion manner to obtain a depth map whose resolution is greater than the resolution of the first eye depth map, the second eye depth map and the third eye depth map. After the first eye depth map, the second eye depth map and the third eye depth map are fused, the user's iris center coordinates are obtained based on the fused depth map. Compared with the method of fusing two depth maps, this method obtains the iris center coordinates with higher accuracy.
  • the method of obtaining the user's iris center coordinates can also be to compare the first eye depth map and the second eye depth map, select the sharper of the two, and fuse the third eye depth map with the sharper picture in an image feature fusion manner to obtain a depth map.
  • the third case in the above method of determining the eye depth map with depth information based on the first eye picture and the second eye picture is: based on the principle of 3D structured light, the first eye depth map and the second eye depth map with depth information are obtained; based on the principle of binocular cameras, the third eye depth map with depth information is obtained; and either the three depth maps are fused, or the third eye depth map is fused with whichever of the first eye depth map and the second eye depth map has the higher resolution, to form a depth map with a higher resolution.
  • This method can also increase the accuracy of obtaining the iris center coordinates.
  • the head-mounted virtual reality device also includes two lens barrel assemblies 200. The two lens barrel assemblies 200 respectively correspond to the two eyes of the user, and when the head-mounted virtual reality device is used, the two lens barrel assemblies 200 display a virtual scene to the user. That is to say, one of the lens barrel assemblies 200 corresponds to the user's left eye to display a virtual image to the user's left eye, and the other lens barrel assembly 200 corresponds to the user's right eye to display a virtual image to the user's right eye.
  • one lens barrel assembly 200 corresponds to one eye movement recognition component 100.
  • the light source assembly 110 and the first camera 120 are both disposed in the lens barrel assembly 200.
  • the second camera 130 is disposed on the upper part of the lens barrel assembly 200 .
  • the two eye movement recognition components 100 respectively correspond to the two eyes of the user, and the processor obtains the center coordinates of the irises of the user's two eyes respectively based on the information fed back by the two eye movement recognition components 100; the processor then obtains the interpupillary distance based on the center coordinates of the irises of the user's two eyes.
  • the information fed back by the two eye movement recognition components 100 consists of the first eye pictures of the corresponding eyes taken by the first cameras 120 of the two eye movement recognition components 100, respectively, and the second eye pictures of the corresponding eyes taken by the second cameras 130 of the two eye movement recognition components 100, respectively.
  • the head-mounted virtual reality device provided by the embodiment of the present disclosure has an eye movement recognition function.
  • the head-mounted virtual reality device provided by the embodiment of the present disclosure obtains the user's interpupillary distance by analyzing the user's eye movement, and then adjusts its own IPD value based on the user's interpupillary distance.
  • the two lens barrel assemblies 200 are respectively a first lens barrel assembly and a second lens barrel assembly.
  • the eye movement recognition component 100 corresponding to the first lens barrel assembly is the first eye movement recognition component, and the eye movement recognition component 100 corresponding to the second lens barrel assembly is the second eye movement recognition component.
  • the first camera 120 and the light source component 110 of the first eye movement recognition component are arranged at the lower part of the first lens barrel component.
  • the second camera 130 of the first eye movement recognition component is provided on the upper part of the first lens barrel assembly.
  • the first camera 120 and the light source component 110 of the second eye movement recognition component are disposed at the lower part of the second lens barrel component, and the second camera 130 of the second eye movement recognition component is disposed at the upper part of the second lens barrel component.
  • the first eye movement recognition component and the second eye movement recognition component are arranged in mirror images relative to the first plane.
  • the first camera 120 is arranged at the lower part and the second camera 130 is arranged at the upper part.
  • the viewing angles captured by the two cameras are different, which benefits the accuracy of eye movement recognition. Because the first eye movement recognition component and the second eye movement recognition component are mirrored with respect to the first plane, the angle at which the first camera 120 of the first eye movement recognition component captures the first eye picture is the same as the angle at which the first camera 120 of the second eye movement recognition component captures the first eye picture, and the angle at which the second camera 130 of the first eye movement recognition component captures the second eye picture is the same as the angle at which the second camera 130 of the second eye movement recognition component captures the second eye picture. This benefits the accuracy of the processor's analysis results, that is, it makes eye movement recognition more accurate and the calculated IPD value more precise.
  • the lens barrel assembly 200 includes a lens barrel 220 and a display screen 210 and a convex lens 230 disposed on both sides of the lens barrel 220 along the axial direction of the lens barrel 220 .
  • the display screen 210 is disposed on the side of the lens barrel 220 facing away from the user's eyes.
  • the convex lens 230 is disposed on the side of the lens barrel 220 facing the user's eyes.
  • the head-mounted virtual reality device also includes a housing 400.
  • Two lens barrel assemblies 200 are disposed in the housing 400.
  • a first space for accommodating the glasses 300 worn by the user is provided between the first side of the housing 400 and the lens barrel assemblies 200; the first side of the housing 400 is the side of the housing 400 facing the user's eyes when the head-mounted virtual reality device is worn on the user's eyes. That is to say, the head-mounted virtual reality device provided by the embodiment of the present disclosure can be adapted to users wearing glasses 300 and improve the experience of users wearing glasses 300.
  • the glasses 300 worn by the user can be myopia glasses 300 , hyperopia glasses 300 , or reading glasses.
  • the first camera 120 is an eye-tracking (Eye-Tracking, ET for short) camera
  • the second camera 130 is a face-tracking (Face-Tracking, FT for short) camera.
  • the first camera 120 is located at the lower part of one end of the first space along a first direction and is disposed on the housing 400, and the first direction is the direction of the line connecting the centers of the two lens barrel assemblies 200. That is to say, the first camera 120 of the first eye movement recognition component is located at the lower part of one end of the first space along the first direction, and is disposed on the housing 400.
  • the first camera 120 of the second eye movement recognition component is located at the lower part of the other end of the first space along the first direction, and is arranged on the housing 400. That is, as shown in Figure 3, when the head-mounted virtual reality device is worn on the user's eyes and the first eye movement recognition component corresponds to the user's right eye, the first camera 120 of the first eye movement recognition component is located at the lower part of the right edge of the spectacle frame 310 of the user's glasses 300 and is arranged on the housing 400.
  • This arrangement enables the first camera 120 to take a picture of the user's right eye without the edge of the glasses 300 worn by the user blocking the first camera 120's view of the user's eye, which improves the accuracy of eye movement recognition in the head-mounted virtual reality device.
  • FIG. 2 is a ray diagram when the first camera 120 is located at the first position, where the first position is the lower left part of the left lens of the user's glasses 300 when the head-mounted virtual reality device is worn on the user's eyes, and is located at the edge of the housing 400.
  • the second camera 130 is located on a side close to the other lens barrel assembly 200 and is disposed on the housing.
  • the light source assembly 110 is disposed directly below the central axis of the lens barrel assembly 200 and is disposed on the housing 400. That is to say, the light source component 110 of the first eye movement recognition component is disposed at the lower part of the first lens barrel component and is located directly below the central axis of the first lens barrel component. Correspondingly, based on the mirror-image arrangement of the first eye movement recognition component and the second eye movement recognition component relative to the first plane, the light source component 110 of the second eye movement recognition component is disposed at the lower part of the second lens barrel component and is located directly below the central axis of the second lens barrel component. This arrangement enables the light source assembly 110 to project structured-light stripes with little distortion onto the user's eyeballs, which is beneficial to improving the accuracy of interpupillary distance measurement.
  • the light source assembly 110 includes a vertical cavity surface emitting laser and a diffractive optical element.
  • the diffractive optical element has a plurality of vertically and horizontally interlaced lines.
  • when the head-mounted virtual reality device is worn on the user's eyes, the light emitted by the vertical cavity surface emitting laser passes through the multiple vertically and horizontally interlaced lines of the diffractive optical element, is emitted to the user's eyes, and forms a projection area with multiple vertically and horizontally interlaced lines on the user's eyes. That is to say, a grid-like pattern is provided on the light exit surface of the diffractive optical element; the light emitted by the vertical cavity surface emitting laser irradiates the diffractive optical element, passes through its multiple vertically and horizontally interlaced lines, is emitted to the user's eyes, and forms grid-shaped reflected light spots on the user's eyes.
  • the projection area at least covers a square area 28 mm in both length and width; a square area 28 mm in both length and width can cover the user's eye, so that the second camera 130 disposed on the upper part of the lens barrel assembly 200 can capture eye pictures with the grid-shaped reflected light spots.
  • In order to enable the light source assembly 110 and the first camera 120 to cooperate to obtain a more accurate first eye depth map, and in order to enable the light source assembly 110 and the second camera 130 to cooperate to obtain a more accurate second eye depth map, the distance over which the light emitted by the light source assembly 110 travels to the user's eyeball is 25 mm-27 mm, the diffractive optical element is provided with more than 21 vertical lines and more than 21 horizontal lines, and the spacing between two adjacent vertical lines and between two adjacent horizontal lines is 3.5-5 mm.
  • the vertical cavity surface emitting laser is pulse driven, with a pulse width of 2 microseconds, a pulse duty cycle of 0.06%, and a frequency of 30 Hz. This setup can reduce the power consumption of vertical cavity surface emitting lasers.
  • the fields of view of the first camera 120 and the second camera 130 are both 90°.
  • the depth of field of the first camera 120 and the second camera 130 is 20 mm-40 mm, the resolution is greater than 400×400, and at a spatial frequency of 56 lp/mm the full-field modulation transfer function value is greater than 0.5. This setting enables the first camera 120 and the second camera 130 to capture eye pictures of sufficient quality and to cooperate with the light source assembly 110 to obtain depth images with depth information.
  • the relationship between the minimum line width and depth of field that can be recognized by the first camera 120 and the second camera 130 is as shown in Table 1:
  • Table 1 The relationship between the minimum line width and depth of field that can be recognized by the first camera and the second camera

Abstract

The present disclosure provides a head-mounted virtual reality device. The head-mounted virtual reality device includes an eye movement recognition component and a processor, and the eye movement recognition component includes a light source assembly, a first camera and a second camera. Through the cooperation of the light source assembly, the first camera and the second camera, the head-mounted virtual reality device obtains a first eye picture and a second eye picture containing the strip-shaped light emitted by the light source assembly; the processor determines an eye depth map with depth information based on the first eye picture and the second eye picture, and then determines the user's iris center coordinates based on the eye depth map, obtaining the user's iris center coordinates in this way.

Description

Head-mounted virtual reality device
Cross-Reference to Related Applications
This disclosure claims priority to the Chinese patent application filed with the Chinese Patent Office on September 7, 2022, with application number 202211091847.6 and the invention title "Head-mounted virtual reality device", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present disclosure relate to the field of virtual reality technology, and in particular to a head-mounted virtual reality device.
Background
With the development of virtual reality (VR) technology and the increasing availability of head-mounted virtual reality devices for consumer use, people can wear a head-mounted virtual reality device to experience virtual scenes and enjoy an immersive experience, which greatly enriches people's lives and quality of life.
A head-mounted virtual reality device is a product that combines simulation technology with computer graphics, human-computer interface technology, multimedia technology, sensing technology, network technology and other technologies; it is a brand-new means of human-computer interaction created with the help of computers and the latest sensor technology. In the related art, a head-mounted virtual reality device includes an eye tracking device, so that the head-mounted virtual reality device can determine the user's line-of-sight direction and interpupillary distance based on the position of the user's eyeballs.
However, the eye tracking device in the related art has the problem of low accuracy.
Summary
Embodiments of the present disclosure provide a head-mounted virtual reality device to solve the technical problem of low accuracy of the eye tracking device in the related art.
To solve the above technical problem, the embodiments of the present disclosure provide the following technical solution:
Embodiments of the present disclosure provide a head-mounted virtual reality device, including an eye movement recognition component and a processor, the eye movement recognition component including a light source assembly, a first camera and a second camera;
the light source assembly is configured to project a plurality of strip-shaped light rays onto the user's eyes when the head-mounted virtual reality device is in use, at least part of the strip-shaped light rays being projected onto the user's iris;
the first camera is configured to take a first eye picture of the user when the head-mounted virtual reality device is in use, the first eye picture including the reflection pattern formed by the strip-shaped light on the user's eye;
the second camera is configured to take a second eye picture of the user when the head-mounted virtual reality device is in use, the second eye picture including the reflection pattern formed by the strip-shaped light on the user's eye;
the processor is signal-connected to the first camera and the second camera respectively, and the processor is configured to determine an eye depth map with depth information based on the first eye picture and the second eye picture, and to determine the user's iris center coordinates based on the eye depth map.
In a possible implementation, the second eye picture and the first eye picture have an overlapping area, and the overlapping area includes a picture formed by at least part of the iris;
determining an eye depth map with depth information based on the first eye picture and the second eye picture includes:
determining a first eye depth map with depth information based on the first eye picture, and/or determining a second eye depth map with depth information based on the second eye picture;
determining a third eye depth map with depth information based on the first eye picture and the second eye picture;
fusing the third eye depth map with at least one of the first eye depth map and the second eye depth map to obtain the eye depth map with depth information.
In a possible implementation, the device further includes two lens barrel assemblies, the two lens barrel assemblies corresponding to the user's two eyes respectively, and when the head-mounted virtual reality device is in use, the two lens barrel assemblies present a virtual scene to the user;
one lens barrel assembly corresponds to one eye movement recognition component, and in a corresponding lens barrel assembly and eye movement recognition component, the first camera and the light source assembly are arranged at the lower part of the lens barrel assembly, and the second camera is arranged at the upper part of the lens barrel assembly.
In a possible implementation, the device further includes a housing, the two lens barrel assemblies being arranged in the housing, a first space for accommodating glasses worn by the user being provided between a first side of the housing and the lens barrel assemblies, the first side of the housing being the side of the housing facing the user's eyes when the head-mounted virtual reality device is worn on the user's eyes;
in a corresponding lens barrel assembly and eye movement recognition component, the first camera is located at the lower part of one end of the first space along a first direction and is arranged on the housing, the first direction being the direction of the line connecting the centers of the two lens barrel assemblies.
In a possible implementation, in a corresponding lens barrel assembly and eye movement recognition component, the second camera is located on the side close to the other lens barrel assembly and is arranged on the housing.
In a possible implementation, in a corresponding lens barrel assembly and eye movement recognition component, the light source assembly is arranged directly below the central axis of the lens barrel assembly and is arranged on the housing.
In a possible implementation, the two eye movement recognition components correspond to the user's two eyes respectively;
the processor obtains the center coordinates of the irises of the user's two eyes respectively based on the information fed back by the two eye movement recognition components;
the processor obtains the interpupillary distance based on the center coordinates of the irises of the user's two eyes.
In a possible implementation, the light source assembly includes a vertical cavity surface emitting laser and a diffractive optical element, the diffractive optical element having a plurality of vertically and horizontally interlaced lines; when the head-mounted virtual reality device is worn on the user's eyes, the light emitted by the vertical cavity surface emitting laser passes through the plurality of vertically and horizontally interlaced lines of the diffractive optical element, exits to the user's eye, and forms on the user's eye a projection area having a plurality of vertically and horizontally interlaced lines.
In a possible implementation, the projection area at least covers a square area 28 mm in both length and width.
In a possible implementation, the distance over which the light emitted by the light source assembly travels to the user's eyeball is 25 mm-27 mm;
the diffractive optical element is provided with more than 21 longitudinal lines and more than 21 transverse lines, and the spacing between two adjacent longitudinal lines and between two adjacent transverse lines is 3.5-5 mm.
In a possible implementation, the vertical cavity surface emitting laser is pulse-driven, with a pulse width of 2 microseconds, a pulse duty cycle of 0.06%, and a frequency of 30 Hz.
In a possible implementation, the fields of view of the first camera and the second camera are both 90°, the depth of field is 20 mm-40 mm, the resolution is greater than 400×400, and the resolving power is such that at a spatial frequency of 56 lp/mm the full-field modulation transfer function value is greater than 0.5.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Figure 1 is a schematic diagram of a head-mounted virtual reality device according to an embodiment of the present disclosure;
Figure 2 is a ray diagram when the first camera is located at the first position;
Figure 3 is a front view of a user wearing the head-mounted virtual reality device provided by an embodiment of the present disclosure.
Description of reference numerals:
100. Eye movement recognition component;
110. Light source assembly; 120. First camera; 130. Second camera;
200. Lens barrel assembly;
210. Display screen; 220. Lens barrel; 230. Convex lens;
300. Glasses;
310. Spectacle frame;
400. Housing.
Specific embodiments of the present disclosure have been shown in the above drawings and will be described in more detail below. These drawings and written descriptions are not intended to limit the scope of the concepts of the present disclosure in any way, but rather to illustrate the concepts of the present disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
As stated in the background section, in the related art the eye tracking device of a head-mounted virtual reality device can determine the user's line-of-sight direction and interpupillary distance based on the position of the user's eyeballs. However, the eye tracking device in the related art has the problems of high calibration complexity and low calibration accuracy. The inventor found through research that if the eye tracking device of a head-mounted virtual reality device includes an infrared camera and multiple infrared emitting diodes, the multiple infrared emitting diodes cooperate with the infrared camera to implement the eye movement recognition function of the device in the following way: when the head-mounted virtual reality device is worn on the user's eyes, the multiple infrared emitting diodes emit light toward the user's eyeballs to form reflected light spots on the eyeballs, the infrared camera photographs the reflected light spots formed on the user's eyeballs, the pupil center position is then calculated by an optical solving method, and the user's interpupillary distance is obtained; the head-mounted virtual reality device adjusts its own IPD value according to the user's interpupillary distance. However, because the infrared camera is usually mounted on the casing of the head-mounted virtual reality device, it is limited by its shooting angle, so that when it photographs the reflected light spots on the user's eyeballs it often captures only part of them. The pictures taken by the infrared camera then fail to meet the requirements of the optical solving method, making it difficult to solve for the interpupillary distance; as a result, the scheme in which multiple infrared emitting diodes cooperate with an infrared camera to obtain the user's interpupillary distance cannot obtain the interpupillary distance at the set frequency, which affects the user experience. In addition, when photographing the reflected light spots on the user's eyeballs, the infrared camera often also captures interfering light spots, which affects the accuracy of the interpupillary distance measurement.
In view of this, the embodiments of the present disclosure identify the iris position through a scheme of binocular cameras and 3D structured light to obtain the user's interpupillary distance. Compared with the scheme in which multiple infrared emitting diodes cooperate with an infrared camera to obtain the user's interpupillary distance, the measurement accuracy of this scheme is significantly improved, the interpupillary distance can be obtained at a set frequency, and the user experience is improved.
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.
Figure 1 is a schematic diagram of a head-mounted virtual reality device according to an embodiment of the present disclosure; Figure 2 is a ray diagram when the first camera is located at the first position; Figure 3 is a front view of a user wearing the head-mounted virtual reality device provided by an embodiment of the present disclosure.
As shown in Figure 1, the head-mounted virtual reality device provided by this embodiment includes an eye movement recognition component 100 and a processor, and the eye movement recognition component 100 includes a light source assembly 110, a first camera 120 and a second camera 130. The light source assembly 110 is configured to project a plurality of strip-shaped light rays onto the user's eyes when the head-mounted virtual reality device is in use, at least part of the strip-shaped light rays being projected onto the user's iris; the first camera 120 is configured to take a first eye picture of the user when the head-mounted virtual reality device is in use, the first eye picture including the reflection pattern formed by the strip-shaped light on the user's eye; the second camera 130 is configured to take a second eye picture of the user when the head-mounted virtual reality device is in use, the second eye picture including the reflection pattern formed by the strip-shaped light on the user's eye; the processor is signal-connected to the first camera 120 and the second camera 130 respectively, and the processor is configured to determine an eye depth map with depth information based on the first eye picture and the second eye picture, and to determine the user's iris center coordinates based on the eye depth map.
Based on the fact that the depth of the iris edge differs from the depth of the sclera, the head-mounted virtual reality device provided by the embodiments of the present disclosure obtains an eye depth map with depth information, obtains the position coordinates of part of the iris edge, and determines the user's iris center coordinates based on the position coordinates of the iris edge.
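As an illustration of this step (the embodiments do not prescribe a specific algorithm), one minimal way to recover an iris center from a registered eye depth map is to treat pixels with a strong depth discontinuity as iris-edge candidates and fit a circle to them by least squares. The Python sketch below follows this assumed approach; the function name, threshold and circle-fit choice are illustrative, not taken from the disclosure.

```python
import numpy as np

def iris_center_from_depth(depth, grad_thresh=0.3):
    """Estimate the iris center from an eye depth map (illustrative sketch, not the patented method).

    depth: 2-D numpy array of per-pixel depth in mm, registered to the eye image.
    grad_thresh: depth-gradient magnitude (mm/pixel) treated as an iris/sclera edge.
    """
    gy, gx = np.gradient(depth.astype(float))
    edge = np.hypot(gx, gy) > grad_thresh                 # candidate iris-edge pixels
    ys, xs = np.nonzero(edge)
    if xs.size < 10:
        raise ValueError("not enough edge points to fit a circle")

    # Least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs.astype(float) ** 2 + ys.astype(float) ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0                           # circle center in pixel coordinates
    cz = float(np.median(depth[edge]))                    # depth of the iris-edge ring
    return cx, cy, cz
```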
In the embodiments of the present disclosure, by way of example, the head-mounted virtual reality device provided by the embodiments of the present disclosure may determine the eye depth map with depth information based on the first eye picture and the second eye picture in the following way:
A first eye depth map with depth information is determined based on the first eye picture, and/or a second eye depth map with depth information is determined based on the second eye picture. That is to say, the first camera 120 cooperates with the light source assembly 110, the first camera 120 takes a first eye picture containing the strip-shaped light emitted by the light source assembly 110, and the first eye picture is processed by the processor to obtain a first eye depth map with depth information; and/or the second camera 130 cooperates with the light source assembly 110, the second camera 130 takes a second eye picture containing the strip-shaped light emitted by the light source assembly 110, and the second eye picture is processed by the processor to obtain a second eye depth map with depth information.
A third eye depth map with depth information is determined based on the first eye picture and the second eye picture. That is to say, the first eye picture taken by the first camera 120 and the second eye picture are processed by the processor to obtain another depth map with depth information, which is the third eye depth map.
The third eye depth map is fused with at least one of the first eye depth map and the second eye depth map to obtain the eye depth map with depth information. The resolution of the eye depth map obtained after fusing two or three depth maps is greater than the resolution of a single depth map; after the two or three depth maps are fused, the user's iris center coordinates are obtained based on the fused depth map.
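The fusion operation itself is not spelled out in the text. Below is a minimal sketch of one plausible approach, confidence-weighted averaging of depth maps that have already been registered to a common pixel grid, with 0 marking missing depth; the weighting scheme and names are assumptions for illustration only.

```python
import numpy as np

def fuse_depth_maps(maps, weights=None):
    """Fuse registered eye depth maps by confidence-weighted averaging (illustrative only).

    maps: list of 2-D arrays of equal shape; 0 means no measurement at that pixel.
    weights: optional per-map confidence weights.
    """
    stack = np.stack([m.astype(float) for m in maps])            # shape (k, H, W)
    valid = stack > 0
    if weights is None:
        weights = np.ones(len(maps))
    w = np.asarray(weights, dtype=float)[:, None, None] * valid
    num = (stack * w).sum(axis=0)
    den = w.sum(axis=0)
    return np.where(den > 0, num / np.maximum(den, 1e-9), 0.0)   # keep 0 where nothing was seen

# Example (hypothetical inputs): fused = fuse_depth_maps([first_map, third_map], weights=[1.0, 2.0])
```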
Beneficial effects of the embodiments of the present disclosure: in the head-mounted virtual reality device provided by the embodiments of the present disclosure, a first eye picture and a second eye picture containing the strip-shaped light emitted by the light source assembly are obtained through the cooperation of the light source assembly, the first camera and the second camera; the processor determines an eye depth map with depth information based on the first eye picture and the second eye picture, and then determines the user's iris center coordinates based on the eye depth map. This way of obtaining the user's iris center coordinates greatly improves the accuracy with which the head-mounted virtual reality device determines the user's iris center coordinates and improves the user's experience.
In the above way of determining the eye depth map with depth information based on the first eye picture and the second eye picture, the second eye picture and the first eye picture have an overlapping area, and the overlapping area includes a picture formed by at least part of the iris. That is to say, the eye picture taken by the first camera 120 contains a picture formed by at least part of the iris, the eye picture taken by the second camera 130 contains a picture formed by at least part of the iris, and the iris picture in the eye picture taken by the first camera 120 and the iris picture in the eye picture taken by the second camera 130 have an overlapping portion.
The above way of determining the eye depth map with depth information based on the first eye picture and the second eye picture includes the following three cases:
Case 1:
A first eye depth map with depth information is determined based on the first eye picture.
By way of example, in the embodiments of the present disclosure, the first eye depth map with depth information is obtained as follows: based on the principle of 3D structured light, the first eye depth map with depth information is obtained from the first eye picture, taken by the first camera 120, that contains the strip-shaped light emitted by the light source assembly 110. That is to say, the head-mounted virtual reality device obtains the first eye depth map with depth information through the cooperation of the first camera 120 and the light source assembly 110, based on the principle of 3D structured light and through analysis by the processor.
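For illustration of the 3D structured-light principle invoked here (not the device's actual decoding or calibration), the depth of a surface point can be triangulated from the shift of a projected stripe relative to its position on a reference plane. In the sketch below the focal length, baseline and reference distance are made-up example values, and the sign convention depends on which way a stripe shifts for closer surfaces.

```python
import numpy as np

def depth_from_stripe_shift(shift_px, f_px=600.0, baseline_mm=20.0, z_ref_mm=26.0):
    """Structured-light depth from the pixel shift of a stripe against a reference plane.

    Classic projector/camera relation: 1/z = 1/z_ref - shift / (f * b).
    All parameter values here are illustrative assumptions, not figures from the patent.
    """
    shift_px = np.asarray(shift_px, dtype=float)
    inv_z = 1.0 / z_ref_mm - shift_px / (f_px * baseline_mm)
    return 1.0 / inv_z
```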
A third eye depth map with depth information is determined based on the first eye picture and the second eye picture.
By way of example, in the embodiments of the present disclosure, the third eye depth map with depth information is obtained as follows: based on the principle of a binocular camera, the third eye depth map with depth information is obtained from the first eye picture taken by the first camera 120 and the second eye picture taken by the second camera 130. That is to say, the head-mounted virtual reality device obtains the third eye depth map with depth information through the cooperation of the first camera 120 and the second camera 130, based on the principle of a binocular camera and through analysis by the processor.
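The binocular principle referred to here is the standard disparity-to-depth relation. The sketch below is a generic stereo pipeline on rectified grayscale images from the first and second cameras using OpenCV block matching; the focal length and baseline are illustrative values, not figures from this disclosure.

```python
import cv2
import numpy as np

def stereo_eye_depth(img_first, img_second, f_px=600.0, baseline_mm=30.0):
    """Depth map from rectified grayscale pictures of the two eye cameras (generic sketch)."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = matcher.compute(img_first, img_second).astype(np.float32) / 16.0  # SGBM output is fixed-point
    depth = np.zeros_like(disp)
    good = disp > 0
    depth[good] = f_px * baseline_mm / disp[good]                            # z = f * b / d
    return depth
```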
The third eye depth map and the first eye depth map are fused to obtain the eye depth map with depth information.
By way of example, in the embodiments of the present disclosure, the third eye depth map and the first eye depth map may be fused in an image feature fusion manner to obtain a depth map whose resolution is greater than the resolution of the third eye depth map and the first eye depth map; after the third eye depth map and the first eye depth map are fused, the user's iris center coordinates are obtained based on the fused depth map.
That is to say, the first case in the above way of determining the eye depth map with depth information based on the first eye picture and the second eye picture is: the first eye depth map with depth information is obtained based on the principle of 3D structured light, the third eye depth map with depth information is obtained based on the principle of a binocular camera, and the third eye depth map and the first eye depth map form a depth map with a higher resolution; this way can increase the accuracy of obtaining the iris center coordinates. Compared with the scheme in the related art in which multiple infrared emitting diodes cooperate with an infrared camera to obtain the user's interpupillary distance, the scheme by which this head-mounted virtual reality device obtains the interpupillary distance can be more than 10 times more accurate, and the head-mounted virtual reality device can obtain the interpupillary distance at a set frequency, improving the user experience.
Case 2:
A second eye depth map with depth information is determined based on the second eye picture.
In the embodiments of the present disclosure, the way of obtaining the second eye depth map with depth information is the same as the way of obtaining the first eye depth map with depth information, and will not be described again here.
A third eye depth map with depth information is determined based on the first eye picture and the second eye picture. This process is the same as the above process of obtaining the third eye depth map with depth information, and will not be described again here.
The third eye depth map and the second eye depth map are fused to obtain the eye depth map with depth information. Likewise, in the embodiments of the present disclosure, the way of fusing the third eye depth map and the second eye depth map is the same as the above way of fusing the third eye depth map and the first eye depth map; the resolution of the depth map obtained after fusing the third eye depth map and the second eye depth map is greater than the resolution of the third eye depth map and the second eye depth map, and after the third eye depth map and the second eye depth map are fused, the user's iris center coordinates are obtained based on the fused depth map.
That is to say, the second case in the above way of determining the eye depth map with depth information based on the first eye picture and the second eye picture is: the second eye depth map with depth information is obtained based on the principle of 3D structured light, the third eye depth map with depth information is obtained based on the principle of a binocular camera, and the third eye depth map and the second eye depth map are fused to form a depth map with a higher resolution; this way can likewise increase the accuracy of obtaining the iris center coordinates.
Case 3:
A first eye depth map with depth information is determined based on the first eye picture, and a second eye depth map with depth information is determined based on the second eye picture. That is, the head-mounted virtual reality device obtains the first eye depth map with depth information through the cooperation of the first camera 120 and the light source assembly 110, based on the principle of 3D structured light and through analysis by the processor; and obtains the second eye depth map with depth information through the cooperation of the second camera 130 and the light source assembly 110, based on the principle of 3D structured light and through analysis by the processor.
A third eye depth map with depth information is determined based on the first eye picture and the second eye picture. This process is the same as the above process of obtaining the third eye depth map with depth information, and will not be described again here.
The third eye depth map is fused with the first eye depth map and the second eye depth map to obtain the eye depth map with depth information.
By way of example, in the embodiments of the present disclosure, the first eye depth map, the second eye depth map and the third eye depth map may be fused in an image feature fusion manner to obtain a depth map whose resolution is greater than the resolution of the first eye depth map, the second eye depth map and the third eye depth map; after the first eye depth map, the second eye depth map and the third eye depth map are fused, the user's iris center coordinates are obtained based on the fused depth map. Compared with fusing two depth maps, this way obtains the iris center coordinates with higher accuracy.
In addition, the way of obtaining the user's iris center coordinates from the first eye depth map, the second eye depth map and the third eye depth map may also be to compare the first eye depth map and the second eye depth map, select the sharper of the two, and fuse the third eye depth map with the sharper picture in an image feature fusion manner to obtain a depth map.
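The text does not define how the sharpness comparison is made; a common proxy is the variance of the Laplacian, as in the assumed sketch below, which reuses the illustrative fuse_depth_maps helper from the earlier sketch.

```python
import numpy as np
from scipy import ndimage

def sharper_map(map_a, map_b):
    """Return whichever depth map scores higher on a variance-of-Laplacian sharpness measure."""
    def score(m):
        return float(ndimage.laplace(m.astype(float)).var())
    return map_a if score(map_a) >= score(map_b) else map_b

# Hypothetical usage: fused = fuse_depth_maps([third_map, sharper_map(first_map, second_map)])
```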
That is to say, the third case in the above way of determining the eye depth map with depth information based on the first eye picture and the second eye picture is: the first eye depth map and the second eye depth map with depth information are obtained based on the principle of 3D structured light, the third eye depth map with depth information is obtained based on the principle of a binocular camera, and either the three depth maps are fused, or the third eye depth map is fused with whichever of the first eye depth map and the second eye depth map has the higher resolution, to form a depth map with a higher resolution; this way can likewise increase the accuracy of obtaining the iris center coordinates.
In this embodiment, the head-mounted virtual reality device further includes two lens barrel assemblies 200. The two lens barrel assemblies 200 correspond to the user's two eyes respectively, and when the head-mounted virtual reality device is in use, the two lens barrel assemblies 200 present a virtual scene to the user. That is to say, one lens barrel assembly 200 corresponds to the user's left eye to present a virtual image to the user's left eye, and the other lens barrel assembly 200 corresponds to the user's right eye to present a virtual image to the user's right eye. In this embodiment, one lens barrel assembly 200 corresponds to one eye movement recognition component 100; in a corresponding lens barrel assembly 200 and eye movement recognition component 100, the light source assembly 110 and the first camera 120 are both arranged at the lower part of the lens barrel assembly 200, and the second camera 130 is arranged at the upper part of the lens barrel assembly 200.
In some embodiments of the present disclosure, the two eye movement recognition components 100 correspond to the user's two eyes respectively; the processor obtains the center coordinates of the irises of the user's two eyes respectively based on the information fed back by the two eye movement recognition components 100, and then obtains the interpupillary distance based on the center coordinates of the irises of the user's two eyes. The information fed back by the two eye movement recognition components 100 consists of the first eye pictures of the corresponding eyes taken by the first cameras 120 of the two eye movement recognition components 100 and the second eye pictures of the corresponding eyes taken by the second cameras 130 of the two eye movement recognition components 100. That is to say, the head-mounted virtual reality device provided by the embodiments of the present disclosure has an eye movement recognition function: it obtains the user's interpupillary distance by analyzing the user's eye movement, and then adjusts its own IPD value according to the user's interpupillary distance.
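Once the 3D iris center coordinates of both eyes are expressed in a common coordinate frame, the interpupillary distance is simply their Euclidean separation, as in the trivial sketch below; the coordinate frame and units are whatever the device calibration provides, and the example numbers are made up.

```python
import numpy as np

def interpupillary_distance(center_left, center_right):
    """IPD in the same units as the input iris-center coordinates (e.g. mm)."""
    return float(np.linalg.norm(np.asarray(center_left) - np.asarray(center_right)))

# Hypothetical example: interpupillary_distance((-31.5, 2.0, 26.0), (31.0, 1.8, 26.2))  # ~62.5 mm
```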
Optionally, the two lens barrel assemblies 200 are a first lens barrel assembly and a second lens barrel assembly respectively; the eye movement recognition component 100 corresponding to the first lens barrel assembly is the first eye movement recognition component, and the eye movement recognition component 100 corresponding to the second lens barrel assembly is the second eye movement recognition component. The first camera 120 and the light source assembly 110 of the first eye movement recognition component are arranged at the lower part of the first lens barrel assembly, and the second camera 130 of the first eye movement recognition component is arranged at the upper part of the first lens barrel assembly. The first camera 120 and the light source assembly 110 of the second eye movement recognition component are arranged at the lower part of the second lens barrel assembly, and the second camera 130 of the second eye movement recognition component is arranged at the upper part of the second lens barrel assembly. The first eye movement recognition component and the second eye movement recognition component are arranged as mirror images of each other relative to a first plane. With the first camera 120 at the lower part and the second camera 130 at the upper part, the two cameras capture different viewing angles, which benefits the accuracy of eye movement recognition; and because the two eye movement recognition components are mirror images relative to the first plane, the angle at which the first camera 120 of the first eye movement recognition component takes the first eye picture is the same as the angle at which the first camera 120 of the second eye movement recognition component takes the first eye picture, and the angle at which the second camera 130 of the first eye movement recognition component takes the second eye picture is the same as the angle at which the second camera 130 of the second eye movement recognition component takes the second eye picture. This benefits the accuracy of the processor's analysis results, that is, it makes eye movement recognition more accurate and the calculated IPD value more precise.
As shown in Figure 3, the lens barrel assembly 200 includes a lens barrel 220, and a display screen 210 and a convex lens 230 arranged on the two sides of the lens barrel 220 along its axial direction, wherein the display screen 210 is arranged on the side of the lens barrel 220 facing away from the user's eyes, and the convex lens 230 is arranged on the side of the lens barrel 220 facing the user's eyes.
Referring still to Figure 3, the head-mounted virtual reality device further includes a housing 400; the two lens barrel assemblies 200 are arranged in the housing 400, and a first space for accommodating the glasses 300 worn by the user is provided between the first side of the housing 400 and the lens barrel assemblies 200; the first side of the housing 400 is the side of the housing 400 facing the user's eyes when the head-mounted virtual reality device is worn on the user's eyes. That is to say, the head-mounted virtual reality device provided by the embodiments of the present disclosure can be adapted to users who wear glasses 300 and improve the experience of users wearing glasses 300.
It is worth noting that the glasses 300 worn by the user may be nearsightedness glasses 300, farsightedness glasses 300, or reading glasses.
Optionally, the first camera 120 is an eye-tracking (Eye-Tracking, ET for short) camera, and the second camera 130 is a face-tracking (Face-Tracking, FT for short) camera.
As shown in Figure 2, in order to prevent the edge of the glasses 300 worn by the user from blocking the first camera 120's view when photographing the user's eye, in a corresponding lens barrel assembly 200 and eye movement recognition component 100 the first camera 120 is located at the lower part of one end of the first space along the first direction and is arranged on the housing 400, the first direction being the direction of the line connecting the centers of the two lens barrel assemblies 200. That is to say, the first camera 120 of the first eye movement recognition component is located at the lower part of one end of the first space along the first direction and is arranged on the housing 400. Correspondingly, based on the mirror-image arrangement of the first eye movement recognition component and the second eye movement recognition component relative to the first plane, the first camera 120 of the second eye movement recognition component is located at the lower part of the other end of the first space along the first direction and is arranged on the housing 400. That is, as shown in Figure 3, when the head-mounted virtual reality device is worn on the user's eyes and the first eye movement recognition component corresponds to the user's right eye, the first camera 120 of the first eye movement recognition component is located at the lower part of the right edge of the spectacle frame 310 of the user's glasses 300 and is arranged on the housing 400. This arrangement ensures that when the first camera 120 photographs the user's right eye, the edge of the glasses 300 worn by the user does not block the first camera 120's view of the user's eye, which improves the accuracy of eye movement recognition of the head-mounted virtual reality device.
It is worth noting that Figure 2 is a ray diagram when the first camera 120 is located at the first position, where the first position is, when the head-mounted virtual reality device is worn on the user's eyes, at the lower left of the left lens of the user's glasses 300 and at the edge of the housing 400.
In the embodiments of the present disclosure, as shown in Figure 1, in a corresponding lens barrel assembly 200 and eye movement recognition component 100, the second camera 130 is located on the side close to the other lens barrel assembly 200 and is arranged on the housing 400. That is to say, the second camera 130 of the first eye movement recognition component is arranged at the upper part of the first lens barrel assembly and is located on the side close to the second lens barrel assembly. That is to say, when the head-mounted virtual reality device is worn on the user's eyes, the second camera 130 of the first eye movement recognition component and the second camera 130 of the second eye movement recognition component are located on the two sides of the user's nose bridge respectively.
Referring still to Figure 1, in a corresponding lens barrel assembly 200 and eye movement recognition component 100, the light source assembly 110 is arranged directly below the central axis of the lens barrel assembly 200 and is arranged on the housing 400. That is to say, the light source assembly 110 of the first eye movement recognition component is arranged at the lower part of the first lens barrel assembly and is located directly below the central axis of the first lens barrel assembly. Correspondingly, based on the mirror-image arrangement of the first eye movement recognition component and the second eye movement recognition component relative to the first plane, the light source assembly 110 of the second eye movement recognition component is arranged at the lower part of the second lens barrel assembly and is located directly below the central axis of the second lens barrel assembly. This arrangement enables the light source assembly 110 to project structured-light stripes with little distortion onto the user's eyeball, which is beneficial for improving the accuracy of the interpupillary distance measurement.
In the embodiments of the present disclosure, by way of example, the light source assembly 110 includes a vertical cavity surface emitting laser and a diffractive optical element; the diffractive optical element has a plurality of vertically and horizontally interlaced lines, and when the head-mounted virtual reality device is worn on the user's eyes, the light emitted by the vertical cavity surface emitting laser passes through the plurality of vertically and horizontally interlaced lines of the diffractive optical element, exits to the user's eye, and forms on the user's eye a projection area having a plurality of vertically and horizontally interlaced lines. That is to say, a grid-shaped pattern is provided on the light-exit surface of the diffractive optical element; the light emitted by the vertical cavity surface emitting laser strikes the diffractive optical element, passes through its plurality of vertically and horizontally interlaced lines, exits to the user's eye, and forms grid-shaped reflected light spots on the user's eye.
In order that the second camera 130 arranged at the upper part of the lens barrel assembly 200 can capture an eye picture with the grid-shaped reflected light spots, the projection area at least covers a square area 28 mm in both length and width; a square area 28 mm in both length and width can cover the user's eye, so that the second camera 130 arranged at the upper part of the lens barrel assembly 200 can capture an eye picture with the grid-shaped reflected light spots.
In order that the light source assembly 110 and the first camera 120 can cooperate to obtain a relatively accurate first eye depth map, and that the light source assembly 110 and the second camera 130 can cooperate to obtain a relatively accurate second eye depth map, the distance over which the light emitted by the light source assembly 110 travels to the user's eyeball is 25 mm-27 mm, the diffractive optical element is provided with more than 21 longitudinal lines and more than 21 transverse lines, and the spacing between two adjacent longitudinal lines and between two adjacent transverse lines is 3.5-5 mm.
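As a rough consistency check of the stated geometry (a back-of-the-envelope calculation, not a figure given in the disclosure), covering a 28 mm square from the 25-27 mm working distance requires a projector full angle of roughly 55-59°:

```python
import math

# Full projection angle needed to cover a 28 mm square (half-width 14 mm)
# at the stated 25-27 mm working distance.
for z_mm in (25.0, 27.0):
    full_angle_deg = 2 * math.degrees(math.atan(14.0 / z_mm))
    print(f"z = {z_mm} mm -> required full angle ~ {full_angle_deg:.1f} deg")
# z = 25.0 mm -> required full angle ~ 58.5 deg
# z = 27.0 mm -> required full angle ~ 54.8 deg
```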
In the embodiments of the present disclosure, by way of example, the vertical cavity surface emitting laser is pulse-driven, with a pulse width of 2 microseconds, a pulse duty cycle of 0.06%, and a frequency of 30 Hz. This setting can reduce the power consumption of the vertical cavity surface emitting laser.
By way of example, in order that the eye pictures taken by the first camera 120 and the second camera 130 meet the requirements and are not affected by installation errors or by dark edges in the captured pictures, the fields of view of the first camera 120 and the second camera 130 are both 90°. In addition, based on the positions at which the first camera 120 and the second camera 130 are arranged, the depth of field of the first camera 120 and the second camera 130 is 20 mm-40 mm, the resolution is greater than 400×400, and the resolving power is such that at a spatial frequency of 56 lp/mm the full-field modulation transfer function value is greater than 0.5. This setting enables the first camera 120 and the second camera 130 to capture good eye pictures that meet the requirements, and to cooperate with the light source assembly 110 to obtain depth pictures with depth information.
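A back-of-the-envelope implication of these figures (my own estimate, not stated in the disclosure): a 90° field of view at roughly a 25 mm eye distance spans about 50 mm, so a 400×400 image samples the eye region at roughly 0.125 mm per pixel.

```python
import math

# Rough object-space sampling implied by a 90 degree field of view and a 400 x 400 image
# at an assumed 25 mm eye distance (an estimate; the document does not state this figure).
z_mm = 25.0
fov_deg = 90.0
pixels = 400
span_mm = 2 * z_mm * math.tan(math.radians(fov_deg / 2))   # ~50 mm across the eye region
print(span_mm / pixels)                                     # ~0.125 mm per pixel
```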
By way of example, the relationship between the minimum line width recognizable by the first camera 120 and the second camera 130 and the depth of field is shown in Table 1:
Table 1: Relationship between the minimum line width recognizable by the first camera and the second camera and the depth of field
The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the technical field that are not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

  1. A head-mounted virtual reality device, comprising an eye movement recognition component and a processor, wherein the eye movement recognition component comprises a light source assembly, a first camera and a second camera;
    the light source assembly is configured to project a plurality of strip-shaped light rays onto the user's eyes when the head-mounted virtual reality device is in use, at least part of the strip-shaped light rays being projected onto the user's iris;
    the first camera is configured to take a first eye picture of the user when the head-mounted virtual reality device is in use, the first eye picture comprising the reflection pattern formed by the strip-shaped light on the user's eye;
    the second camera is configured to take a second eye picture of the user when the head-mounted virtual reality device is in use, the second eye picture comprising the reflection pattern formed by the strip-shaped light on the user's eye;
    the processor is signal-connected to the first camera and the second camera respectively, and the processor is configured to determine an eye depth map with depth information based on the first eye picture and the second eye picture, and to determine the user's iris center coordinates based on the eye depth map.
  2. The head-mounted virtual reality device according to claim 1, wherein the second eye picture and the first eye picture have an overlapping area, and the overlapping area comprises a picture formed by at least part of the iris;
    determining an eye depth map with depth information based on the first eye picture and the second eye picture comprises:
    determining a first eye depth map with depth information based on the first eye picture, and/or determining a second eye depth map with depth information based on the second eye picture;
    determining a third eye depth map with depth information based on the first eye picture and the second eye picture;
    fusing the third eye depth map with at least one of the first eye depth map and the second eye depth map to obtain the eye depth map with depth information.
  3. The head-mounted virtual reality device according to claim 1 or 2, further comprising two lens barrel assemblies, wherein the two lens barrel assemblies correspond to the user's two eyes respectively, and when the head-mounted virtual reality device is in use, the two lens barrel assemblies present a virtual scene to the user;
    one lens barrel assembly corresponds to one eye movement recognition component, and in a corresponding lens barrel assembly and eye movement recognition component, the first camera and the light source assembly are arranged at the lower part of the lens barrel assembly, and the second camera is arranged at the upper part of the lens barrel assembly.
  4. The head-mounted virtual reality device according to claim 3, further comprising a housing, wherein the two lens barrel assemblies are arranged in the housing, a first space for accommodating glasses worn by the user is provided between a first side of the housing and the lens barrel assemblies, and the first side of the housing is the side of the housing facing the user's eyes when the head-mounted virtual reality device is worn on the user's eyes;
    in a corresponding lens barrel assembly and eye movement recognition component, the first camera is located at the lower part of one end of the first space along a first direction and is arranged on the housing, the first direction being the direction of the line connecting the centers of the two lens barrel assemblies.
  5. The head-mounted virtual reality device according to claim 4, wherein, in a corresponding lens barrel assembly and eye movement recognition component, the second camera is located on the side close to the other lens barrel assembly and is arranged on the housing.
  6. The head-mounted virtual reality device according to claim 4 or 5, wherein, in a corresponding lens barrel assembly and eye movement recognition component, the light source assembly is arranged directly below the central axis of the lens barrel assembly and is arranged on the housing.
  7. The head-mounted virtual reality device according to any one of claims 3-6, wherein the two eye movement recognition components correspond to the user's two eyes respectively;
    the processor obtains the center coordinates of the irises of the user's two eyes respectively based on the information fed back by the two eye movement recognition components;
    the processor obtains the interpupillary distance based on the center coordinates of the irises of the user's two eyes.
  8. The head-mounted virtual reality device according to any one of claims 1-7, wherein the light source assembly comprises a vertical cavity surface emitting laser and a diffractive optical element, the diffractive optical element has a plurality of vertically and horizontally interlaced lines, and when the head-mounted virtual reality device is worn on the user's eyes, the light emitted by the vertical cavity surface emitting laser passes through the plurality of vertically and horizontally interlaced lines of the diffractive optical element, exits to the user's eye, and forms on the user's eye a projection area having a plurality of vertically and horizontally interlaced lines.
  9. The head-mounted virtual reality device according to claim 8, wherein the projection area at least covers a square area 28 mm in both length and width.
  10. The head-mounted virtual reality device according to claim 8 or 9, wherein the distance over which the light emitted by the light source assembly travels to the user's eyeball is 25 mm-27 mm;
    the diffractive optical element is provided with more than 21 longitudinal lines and more than 21 transverse lines, and the spacing between two adjacent longitudinal lines and between two adjacent transverse lines is 3.5-5 mm.
  11. The head-mounted virtual reality device according to any one of claims 8-10, wherein the vertical cavity surface emitting laser is pulse-driven, with a pulse width of 2 microseconds, a pulse duty cycle of 0.06%, and a frequency of 30 Hz.
  12. The head-mounted virtual reality device according to any one of claims 1-11, wherein the fields of view of the first camera and the second camera are both 90°, the depth of field is 20 mm-40 mm, the resolution is greater than 400×400, and the resolving power is such that at a spatial frequency of 56 lp/mm the full-field modulation transfer function value is greater than 0.5.
PCT/CN2023/113818 2022-09-07 2023-08-18 Head-mounted virtual reality device WO2024051476A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211091847.6 2022-09-07
CN202211091847.6A CN117666136A (zh) 2022-09-07 2022-09-07 Head-mounted virtual reality device

Publications (1)

Publication Number Publication Date
WO2024051476A1 true WO2024051476A1 (zh) 2024-03-14

Family

ID=90083353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113818 WO2024051476A1 (zh) 2022-09-07 2023-08-18 Head-mounted virtual reality device

Country Status (2)

Country Link
CN (1) CN117666136A (zh)
WO (1) WO2024051476A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424187A (zh) * 2017-04-17 2017-12-01 深圳奥比中光科技有限公司 Depth computing processor, data processing method and 3D image device
CN108475109A (zh) * 2015-12-28 2018-08-31 奥特逻科集团 Eye gesture tracking
CN108540717A (zh) * 2018-03-31 2018-09-14 深圳奥比中光科技有限公司 Target image acquisition system and method
CN108985172A (zh) * 2018-06-15 2018-12-11 北京七鑫易维信息技术有限公司 Structured-light-based gaze tracking method, apparatus, device and storage medium
US20190196221A1 (en) * 2017-12-22 2019-06-27 Optikam Tech, Inc. System and Method of Obtaining Fit and Fabrication Measurements for Eyeglasses Using Simultaneous Localization and Mapping of Camera Images

Also Published As

Publication number Publication date
CN117666136A (zh) 2024-03-08

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23862173

Country of ref document: EP

Kind code of ref document: A1