WO2020214162A1 - Measurement and calibration method for augmented reality/virtual reality (AR/VR) binocular alignment errors


Publication number: WO2020214162A1
Application number: PCT/US2019/027905
Authority: WIPO (PCT)
Inventor: Zhiqiang Liu
Applicant: Huawei Technologies Co., Ltd.


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images using feature-based methods
    • G06T 7/337: Determination of transform parameters for the alignment of images using feature-based methods involving reference images or patches

Definitions

  • Embodiments of the present disclosure relate to the field of virtual reality, and in particular, to a method and apparatus for performing alignment of an augmented reality/virtual reality (ARVR) device.
  • Virtual reality is an artificial environment that is created with a mixture of interactive hardware and software.
  • Augmented reality overlays virtual objects on the real-world environment, thus providing a composite view.
  • AR/VR, or ARVR, has many uses including gaming, education, military, and healthcare.
  • ARVR is typically viewed through an eye-covering headset device to fully immerse a user.
  • a first aspect relates to a computer-implemented method for performing alignment of an augmented reality/virtual reality (ARVR) device.
  • the method includes displaying an alignment image on a first display and on a second display of the ARVR device.
  • the method captures at least one image of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device using a single camera through a channel combiner.
  • the method analyzes the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device.
  • the method uses the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
  • the single camera is a camera integrated in a smartphone that is communicatively coupled to the ARVR device.
  • analyzing the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device is performed by an application on the smartphone.
  • using the misalignment data to adjust the virtual image displayed on the first display and the second display of the ARVR device comprises sending, by the smartphone, to the ARVR device a single image with misalignment corrections data for the first display and the second display of the ARVR device.
  • using the misalignment data to adjust the virtual image displayed on the first display and the second display of the ARVR device comprises sending, by the smartphone, to the ARVR device a left alignment corrected image and right alignment corrected image.
  • the smartphone is communicatively coupled to the ARVR device via a USB-C connector cable.
  • the channel combiner is an optical subassembly that is externally attached to the smartphone next to the camera of the smartphone.
  • the channel combiner is an optical subassembly that is externally attached to the ARVR device and aligned to the first display and the second display of the ARVR device.
  • the at least one image comprises a first image of the alignment image on the first display of the ARVR device and a second image of the alignment image on the second display of the ARVR device.
  • the at least one image is a single image that comprises a first image of the alignment image on the first display of the ARVR device overlapped with a second image of the alignment image on the second display of the ARVR device.
  • the channel combiner comprises a first channel for capturing the first image of the alignment image and a second channel for capturing the second image of the alignment image.
  • the channel combiner is configured to reflect the first channel using a mirror surface towards a beam splitting surface that splits the first channel and directs a portion of the first channel towards the camera.
  • the channel combiner is further configured to direct the second channel towards the beam splitting surface that splits the second channel and directs a portion of the second channel towards the camera.
  • the mirror surface is a forty-five degree mirror surface and the beam splitting surface is a forty-five degree beam splitting surface.
  • the alignment image includes at least two reference coordinate points that are used for determining the misalignment data.
  • a second aspect relates to a calibration system for performing alignment of an ARVR device.
  • the calibration system includes a channel combiner and a mobile device.
  • the mobile device includes a camera, a processor, and memory.
  • the memory stores an application that includes computer executable instructions.
  • the processor is configured to execute the computer executable instructions to display an alignment image to be displayed on a first display and on a second display of the ARVR device; capture, through the channel combiner, at least one image of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device using the camera; analyze the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device; and use the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
  • the mirror surface is a forty-five degree mirror surface and the beam splitting surface is a forty-five degree beam splitting surface.
  • the mobile device is a smartphone.
  • the processor is further configured to execute the computer executable instructions to send a single image with misalignment corrections data for the first display and the second display of the ARVR device for displaying the virtual image.
  • the processor is further configured to execute the computer executable instructions to send to the ARVR device a left alignment corrected image and right alignment corrected image for displaying the virtual image.
  • the mobile device is communicatively coupled to the ARVR device via a USB-C connector cable.
  • the channel combiner is an optical subassembly that is externally attached to the mobile device next to the camera of the mobile device.
  • the channel combiner is an optical subassembly that is externally attached to the ARVR device and aligned to the first display and the second display of the ARVR device.
  • the at least one image comprises a first image of the alignment image on the first display of the ARVR device and a second image of the alignment image on the second display of the ARVR device.
  • the at least one image is a single image that comprises a first image of the alignment image on the first display of the ARVR device overlapped with a second image of the alignment image on the second display of the ARVR device.
  • the alignment image includes at least two reference coordinate points that are used for determining the misalignment data.
  • a third aspect relates to an apparatus or system comprising means for performing any of the preceding aspects as such or any preceding implementation form of any of the preceding aspects.
  • FIG. 1 is a schematic diagram illustrating a conventional measurement method for performing alignment of an ARVR device.
  • FIGS. 2A-2C are schematic diagrams illustrating user eye impact from the use of an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram illustrating a measurement method for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 4A is a schematic diagram illustrating a first embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
  • FIG. 4B is a schematic diagram illustrating misalignment of the first embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
  • FIG. 4C is a schematic diagram illustrating a second embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
  • FIG. 5A is a schematic diagram illustrating a third embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
  • FIG. 5B is a schematic diagram illustrating a fourth embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
  • FIG. 6A is a schematic diagram illustrating a first image for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 6B is a schematic diagram illustrating a second image for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 7 provides a schematic diagram illustrating misalignment of the first image and the second image in accordance with an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram illustrating a connection between an ARVR device and a smartphone in accordance with an embodiment of the present disclosure.
  • FIG. 9 is a flowchart illustrating a process for determining whether an ARVR device requires alignment in accordance with an embodiment of the present disclosure.
  • FIG. 10 is a flowchart illustrating a measurement alignment process in accordance with an embodiment of the present disclosure.
  • FIG. 11A is a schematic diagram illustrating a first method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 11B is a schematic diagram illustrating a second method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram illustrating an apparatus in accordance with an embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram illustrating an apparatus according to an embodiment of the disclosure.
  • a module or unit as referenced herein may comprise one or more hardware or electrical components such as electrical circuitry, processors, and memory that may be specially configured to perform a particular function.
  • the memory may be volatile memory or non-volatile memory that stores data such as, but not limited to, computer executable instructions, machine code, and other various forms of data.
  • the module or unit may be configured to use the data to execute one or more instructions to perform one or more tasks.
  • a unit may also refer to a particular set of functions, software instructions, or circuitry that is configured to perform a specific task.
  • the disclosed embodiments include an optical component for combining the two display channels and using only a single camera for capturing alignment error.
  • the optical component can be a prism, a reflective waveguide, or a diffractive waveguide.
  • the alignment of the two channels of the test system can be either through high precision optics or through a calibration process.
  • the camera can be one of the cameras integrated with a smartphone, or a standalone camera.
  • the disclosed embodiments can also enable control and data analysis to be performed on smartphones, or on other computing devices.
  • the disclosed embodiments are simpler, less prone to environmental impact, and more cost effective.
  • the disclosed embodiments provide a portable solution and can be provided as an accessory of ARVR products to enhance user experience.
  • FIG. 1 is a schematic diagram illustrating a conventional measurement method 100 for performing alignment of an ARVR device 110.
  • the conventional measurement method 100 uses a binocular measurement jig 120 to support or align the ARVR device 110.
  • the ARVR device 110 can be any type of device that is capable of displaying a virtual reality environment or augmented reality environment to a user.
  • the ARVR device 110 can be a pair of glasses or an eye-covering headset device that fully immerses a user in the ARVR environment.
  • the ARVR device 110 includes a display module 112 and a display module 114.
  • the display module 112 is configured to display a first image that is presented to the left eye of a user.
  • the display module 114 is configured to display a second image that is presented to the right eye of the user. When viewed together, the first image and the second image create the perception of a virtual image 102.
  • the virtual image 102 can be a virtual reality image or an augmented reality image.
  • the conventional measurement method 100 uses a binocular measurement jig 120.
  • the binocular measurement jig 120 includes camera 122 and camera 124.
  • Each of the cameras 122, 124 can include a camera lens coupled to the camera to enable zooming or focusing of an image.
  • the camera 122 and camera 124 are each used to capture a frame or image of the virtual image 102 that is displayed on the display module 112 and the display module 114. The frame or image is then analyzed to determine an alignment error of the ARVR device 110.
  • Several shortcomings are associated with the conventional measurement method 100, including the required use of a specialized binocular measurement jig 120 that is complicated, costly, and generally not available to a user. Additionally, the binocular measurement jig 120 is prone to environmental impact. For example, calibration of the binocular measurement jig 120 must be repeated whenever the relative alignment between camera 122 and camera 124 changes. Accordingly, the disclosed embodiments provide an improved method and apparatus for determining the binocular alignment error, as described herein.
  • FIGS. 2A-2C are schematic diagrams illustrating user eye impact from the use of an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 2A is a schematic diagram illustrating a pair of normal eyes when the display modules of an ARVR device are properly aligned.
  • FIG. 2B is a schematic diagram illustrating a pair of eyes that converge towards the center and
  • FIG. 2C is a schematic diagram illustrating a pair of eyes that diverge away from the center.
  • Both FIGS. 2B and 2C illustrate the potential impact of misaligned ARVR displays on a pair of eyes.
  • the misalignment can cause user discomfort even with short term use of an ARVR device.
  • the misalignment can also cause permanent damage to eye sight and depth perception from extended use of the ARVR device.
  • FIG. 3 is a schematic diagram illustrating a measurement method 300 for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
  • the measurement method 300 simplifies the hardware and software of the existing measurement system by using a channel combiner 302 and a single camera 304 for performing alignment of the ARVR device 110.
  • the channel combiner 302 is a device that optically combines two channels into one. In certain embodiments, the channel combiner 302 is compact and rigid, and makes it possible to keep the system simpler, portable, and low cost. Various designs and configurations of several embodiments of the channel combiner 302 will be described herein.
  • the single camera 304 is a camera that is integrated within a smartphone or other portable devices that include a camera (e.g., a tablet or other mobile computing devices). By leveraging the camera of a smartphone, it is possible to deploy the measurement method 300 solution in the field if necessary.
  • FIG. 4A is a schematic diagram illustrating a first embodiment of an optical component 410 of a channel combiner in accordance with an embodiment of the present disclosure.
  • the optical component 410 comprises a mirror 412 and a beam splitter 416.
  • the mirror 412 is configured to reflect an incoming optical beam 414 containing an image from the left display module of the ARVR device 110 to the beam splitter 416.
  • the beam splitter 416 is also configured to receive an incoming optical beam 418 from the right display module of the ARVR device 110.
  • the beam splitter 416 is an optical device which can split a light beam into two beams, which may or may not have the same optical power.
  • the beam splitter 416 is configured to split the optical beam 414 and the optical beam 418, and to direct the split optical beam 414 and optical beam 418 towards the camera 304.
  • the surface of the mirror 412 is a forty-five degree mirror surface and the surface of the beam splitter 416 is a forty-five degree beam splitting surface.
  • the optical component 410 can include one or more anti-reflective (AR) surfaces 420.
  • FIG. 4B is a schematic diagram illustrating misalignment of the first embodiment of the optical component 410 of a channel combiner in accordance with an embodiment of the present disclosure.
  • the disclosed embodiment illustrates that even when the optical component 410 of a channel combiner is not properly aligned (e.g., when a user does not properly align the channel combiner to the ARVR device 110 or to the camera 304/smartphone device), the optical component 410 can still direct the optical beam 414 and the optical beam 418 towards the camera 304 in the same manner as described in FIG. 4A.
  • the disclosed embodiments are, to a certain degree, insensitive to misalignment among an ARVR product under test, a channel combiner, and a camera.
  • FIG. 4C is a schematic diagram illustrating a second embodiment of an optical component 430 of a channel combiner in accordance with an embodiment of the present disclosure.
  • the configuration of the optical component 430 is the same as that of the optical component 410, except that it is flipped so that the mirror 412 is configured to reflect the incoming optical beam 418 containing an image from the right display module of the ARVR device 110 to the beam splitter 416.
  • the beam splitter 416 is also configured to receive the incoming optical beam 414 from the left display module of the ARVR device 110.
  • the beam splitter 416 is configured to split the optical beam 414 and the optical beam 418.
  • the beam splitter 416 directs the split optical beam 414 and optical beam 418 towards the camera 304.
  • FIG. 5A is a schematic diagram illustrating a third embodiment of an optical component 510 of a channel combiner in accordance with an embodiment of the present disclosure.
  • the optical component 510 is an optical waveguide that can include one or more diffractive gratings 512.
  • the diffractive grating 512 is an optical component with a periodic structure that splits and diffracts light into several beams travelling in different directions.
  • the optical beam 414 from the left display module of the ARVR device 110 enters the optical component 510 and hits a first diffractive grating 512.
  • the first diffractive grating 512 splits and diffracts the optical beam 414.
  • At least one of the diffracted optical beams 414 is reflected internally inside the waveguide towards a second diffractive grating 512 that directs the diffracted optical beam 414 towards the camera 304.
  • the second diffractive grating 512 also receives the incoming optical beam 418 containing an image from the right display module of the ARVR device 110 and diffracts the optical beam 418 towards the camera 304.
  • FIG. 5B is a schematic diagram illustrating a fourth embodiment of an optical component 520 of a channel combiner in accordance with an embodiment of the present disclosure.
  • the optical component 520 is the same as the optical component 510 except that it has a reversed configuration, where a first diffractive grating 512 diffracts the optical beam 418 containing an image from the right display module of the ARVR device 110.
  • the optical beam 418 is diffracted internally within the waveguide towards a second diffractive grating 512, which directs the optical beam 418 and optical beam 414 towards the camera 304.
  • FIG. 6A is a schematic diagram illustrating a first image 610 for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
  • FIG. 6B is a schematic diagram illustrating a second image 620 for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
  • the first image 610 and second image 620 include two coordinates A and B that are used to determine misalignment of the two display modules of the ARVR device.
  • Additional coordinates may be used in various embodiments for determining misalignment of the two display modules.
  • the coordinates A and B for the first image 610 can be labeled as A: (xa1, ya1), B: (xb1, yb1).
  • the coordinates A and B for the second image 620 can be labeled as A: (xa2, ya2), B: (xb2, yb2).
  • FIG. 7 provides a schematic diagram illustrating misalignment of the first image 610 and the second image 620 in accordance with an embodiment of the present disclosure.
  • the misalignment in the horizontal (X) direction is determined using the formula xa2 - xa1, and the misalignment in the vertical (Y) direction is determined using the formula ya2 - ya1.
  • the rotation misalignment can be determined using the formula: atan[(yb1 - ya1)/(xb1 - xa1)] - atan[(yb2 - ya2)/(xb2 - xa2)].
  • the horizontal, vertical, and rotation calculations are then used to align the first image 610 and the second image 620.
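The calculations above can be sketched in Python. This is an illustration only: the function and variable names are not from the patent, and `atan2` is used in place of `atan` so that the rotation term also handles a vertical A-B segment without dividing by zero.

```python
import math

def misalignment(left_pts, right_pts):
    """Compute horizontal, vertical, and rotational misalignment between
    the left-channel and right-channel captures of the alignment image.

    Each argument is ((xa, ya), (xb, yb)): the pixel coordinates of the
    two reference points A and B in one captured image.
    """
    (xa1, ya1), (xb1, yb1) = left_pts
    (xa2, ya2), (xb2, yb2) = right_pts
    dx = xa2 - xa1  # horizontal (X) misalignment
    dy = ya2 - ya1  # vertical (Y) misalignment
    # rotation: difference between the A->B angles of the two images
    rot = math.atan2(yb1 - ya1, xb1 - xa1) - math.atan2(yb2 - ya2, xb2 - xa2)
    return dx, dy, rot
```

With identical A-B orientation in both images the rotation term is zero and only the (dx, dy) offset remains.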
  • FIG. 8 is a schematic diagram illustrating a connection between an ARVR device 800 and a smartphone 820 in accordance with an embodiment of the present disclosure.
  • the disclosed embodiments use a single camera 304 and a channel combiner 302 for performing alignment of the display modules of the ARVR device 800.
  • the channel combiner 302 can be a separate module or device that is attached to either the ARVR device 800 or the smartphone 820.
  • the camera 304 can be a standalone camera or a camera integrated into another device such as, but not limited to, the smartphone 820.
  • the smartphone 820 is not limited to any particular make or model.
  • the smartphone 820 can also be integrated with multiple cameras.
  • other electronic devices with integrated cameras may also be used. Non-limiting examples include smart glasses, smart watches, laptops, and tablets with built in camera(s).
  • an application may be loaded onto the smartphone 820, or other electronic device, for performing alignment of the ARVR device 800.
  • the application may be configured to select any of the cameras or a specific camera (e.g., primary camera, wide-angle camera, etc.) of the smartphone 820 for performing alignment of the ARVR device 800.
  • the selection of the camera may be based on an aspect ratio or picture quality of the image being displayed on the display modules of the ARVR device 800.
  • the smartphone 820 may be communicatively coupled to the ARVR device 800 via a wired connection using any type of wired communication protocol (e.g., universal serial bus (USB), Lightning, microUSB, and USB-C).
  • the smartphone 820 may be communicatively coupled to the ARVR device 800 via a wireless connection such as Bluetooth, WI-FI, or other wireless technologies.
  • FIG. 9 is a flowchart illustrating a process 900 for determining whether an ARVR device requires alignment in accordance with an embodiment of the present disclosure.
  • the process 900 may be performed by smartphone 820 or other electronic devices.
  • the process 900 begins at step 902 by establishing a connection with the ARVR device.
  • a connection can be established using a mobile app or application that is configured to communicate with the ARVR device.
  • the process 900 determines whether the ARVR device requires an alignment calibration process.
  • an alignment calibration process can be required after a predetermined number of uses, after a certain time period, if the ARVR device is new, if the ARVR device is connected to a new image source, or if it is detected that the ARVR device has fallen or had a hard impact that may affect the alignment of the images of the display modules. If the process 900 determines at step 904 that an alignment calibration process is required, the process 900, at step 906, performs a measurement alignment process for aligning the display modules of the ARVR device as described below in FIG. 10. Otherwise, the process 900, at step 908, displays ARVR content on the display modules of the ARVR device, with process 900 terminating thereafter. In some embodiments, the ARVR content may be provided by the smartphone 820 or other electronic device, or may be directly downloaded or installed on the ARVR device.
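The step-904 decision can be sketched as follows. This is a minimal illustration of the example triggers listed in the description; the field names and thresholds are hypothetical, not from the patent.

```python
import time

# Hypothetical thresholds, for illustration only.
MAX_USES_BETWEEN_CALIBRATIONS = 50
MAX_SECONDS_BETWEEN_CALIBRATIONS = 30 * 24 * 3600  # roughly 30 days

def needs_calibration(state, now=None):
    """Return True if any of the example triggers applies: new device,
    use count exceeded, time period elapsed, new image source, or a
    detected fall / hard impact (e.g. reported by an accelerometer)."""
    now = time.time() if now is None else now
    return (
        state.get("is_new", False)
        or state.get("use_count", 0) >= MAX_USES_BETWEEN_CALIBRATIONS
        or now - state.get("last_calibrated", now) > MAX_SECONDS_BETWEEN_CALIBRATIONS
        or state.get("new_image_source", False)
        or state.get("impact_detected", False)
    )
```

A device state failing all triggers would proceed to step 908 and display content directly.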
  • FIG. 10 is a flowchart illustrating a measurement alignment process 1000 in accordance with an embodiment of the present disclosure.
  • the process 1000 may be performed by smartphone 820 or other electronic devices communicatively coupled to the ARVR device.
  • the process 1000 begins, at step 1002, by displaying two alignment patterns on the two display modules of the ARVR device.
  • the process 1000, at step 1004, uses a single camera to take at least one image from each of the display modules through a channel combiner.
  • the process 1000 analyzes the images to determine misalignment data of the two display modules.
  • One embodiment for determining the misalignment data is described with reference to FIGS. 6A-6B and FIG. 7.
  • the process 1000, at step 1008, adjusts alignment of the images on the two display modules based on the misalignment data, with process 1000 terminating thereafter.
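One way the step-1008 adjustment could be applied to a single right-channel coordinate is sketched below. This is an assumption about how the measured offsets and rotation might be undone, not the patent's stated implementation; a real device would typically warp whole frames on the GPU.

```python
import math

def correct_point(x, y, dx, dy, rot, cx=0.0, cy=0.0):
    """Undo the measured misalignment for one coordinate of the right
    image: rotate by -rot about the reference point (cx, cy), then
    translate by (-dx, -dy)."""
    sx, sy = x - cx, y - cy
    c, s = math.cos(-rot), math.sin(-rot)
    rx, ry = c * sx - s * sy, s * sx + c * sy
    return rx + cx - dx, ry + cy - dy
```

For a pure translation (rot = 0), this simply shifts the right image back by the measured (dx, dy).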
  • FIG. 11A is a schematic diagram illustrating a first method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure.
  • a smartphone or other electronic device transmits a large image 1100A with misalignment corrections to the ARVR device.
  • the large image 1100A comprises a left image 1100AL and right image 1100AR.
  • the ARVR device splits the large image 1100A into a left image 1100AL and right image 1100AR.
  • the ARVR device displays the left image 1100AL on the left display module and the right image 1100AR on the right display module of the ARVR device based on the misalignment corrections data provided by the smartphone.
  • FIG. 11B is a schematic diagram illustrating a second method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure.
  • a smartphone or other electronic device transmits a small image 1100B with misalignment corrections to the ARVR device.
  • the ARVR device duplicates the small image 1100B into a left image 1100BL and right image 1100BR.
  • the ARVR device displays the left image 1100BL on the left display module and the right image 1100BR on the right display module of the ARVR device using the misalignment corrections data provided by the smartphone.
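The two delivery methods of FIGS. 11A and 11B can be illustrated with plain lists of pixel rows. A sketch only; the function names are illustrative and not from the patent.

```python
def split_large_image(rows):
    """FIG. 11A style: split a side-by-side large image into the left
    and right halves for the two display modules."""
    w = len(rows[0]) // 2
    return [row[:w] for row in rows], [row[w:] for row in rows]

def duplicate_small_image(rows):
    """FIG. 11B style: duplicate one small image so both display
    modules receive an independent copy."""
    return [row[:] for row in rows], [row[:] for row in rows]
```

The split variant halves the per-frame bandwidth needed per display, while the duplicate variant halves the total data sent over the link.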
  • FIG. 12 is a schematic diagram illustrating an apparatus 1200 in accordance with an embodiment of the present disclosure.
  • the apparatus 1200 can be a smartphone or other electronic device capable of implementing the disclosed embodiments.
  • the apparatus 1200 includes a processor 1202, memory 1204, a display 1206, a cellular baseband modem 1208, a cellular transceiver 1210, a subscriber identity module (SIM) card 1212, a Bluetooth component 1214, a wireless local area network (WLAN) component 1216, a camera 1218, a data storage unit 1220, and a power supply 1222.
  • the apparatus 1200 can include additional components not depicted in this embodiment or may exclude one or more components depicted in this embodiment.
  • the apparatus 1200 may include more than one of a particular component such as having multiple data storage units 1220 or multiple cameras 1218.
  • the cameras 1218 can be other various types such as, but not limited to, a camera with depth sensor, a camera with monochrome sensor, a wide-angle (or super-wide) camera, and a telephoto camera.
  • the processor 1202 can be any type of processor capable of executing instructions for implementing the disclosed embodiments.
  • the processor 1202 can be a mobile application processor designed to support applications running in a mobile operating system environment.
  • the processor 1202 can be a system on a chip (SoC) that provides a self-contained operating environment that delivers all system capabilities needed to support a device's applications, including memory management, graphics processing, and multimedia decoding.
  • the memory 1204 can be volatile memory such as, but not limited to, synchronous dynamic random-access memory (SDRAM).
  • the data storage unit 1220 is any type of non-volatile memory.
  • the memory 1204 and data storage unit 1220 are not limited to any particular size.
  • the power supply 1222 can be a battery or power unit that receives power from an external power source, and provides power to the components of the apparatus 1200.
  • the cellular baseband modem 1208 is coupled to the cellular transceiver 1210.
  • the cellular transceiver 1210 is configured to transmit and receive radio frequency (RF) signals.
  • the RF signals can contain audio, video, and/or data.
  • the cellular baseband modem 1208 can include a modem processor and operating system (OS) components for processing the incoming RF signals to extract the audio, video, and/or data.
  • the cellular baseband modem 1208 can also be coupled to a SIM card 1212.
  • the SIM card 1212 contains a unique identification number for identifying itself to a mobile network, memory that stores personal data such as a contact list, and may include security measures that prevent the SIM card 1212 from being used in other devices.
  • the Bluetooth component 1214 provides Bluetooth capability to the apparatus 1200.
  • Bluetooth is a wireless technology standard for exchanging data between fixed and mobile devices over short distances using short-wavelength ultra high frequency (UHF) radio waves.
  • the WLAN component 1216 enables the apparatus 1200 to be part of a wireless computer network that links two or more devices using wireless communications.
  • the apparatus 1200 can be modified with other various forms of wired or wireless communications.
  • FIG. 13 is a schematic diagram illustrating an apparatus 1300 according to an embodiment of the disclosure.
  • the apparatus 1300 includes means for performing alignment of an ARVR device as described herein.
  • the apparatus 1300 includes receiving means 1302, transmission means 1304, storage means 1306, an image capturing means 1308, and a processing means 1310.
  • the receiving means 1302, transmission means 1304, storage means 1306, image capturing means 1308, and processing means 1310 can be communicatively coupled via a communication bus or link 1320.
  • the receiving means 1302 can be configured to receive data or other input from a second device via a wired or wireless connection.
  • the transmission means 1304 can be configured to transmit data to a second device via a wired or wireless connection.
  • the storage means 1306 can be configured to store computer executable instructions and other data.
  • the image capturing means 1308 is configured to capture at least one image of the alignment image on the first display of an ARVR device and the alignment image on the second display of the ARVR device.
  • the processing means 1310 is configured to execute the instructions stored in the storage means 1306 to perform the methods disclosed.
  • the processing means 1310 can execute the instructions to display an alignment image on a first display and a second display of the ARVR device, analyze at least one image to determine misalignment data, and use the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
  • the disclosed embodiments may be a system, an apparatus, a method, and/or a computer program product at any possible technical detail level of integration.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • the computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.


Abstract

A method for performing alignment of an augmented reality/virtual reality (ARVR) device. The method displays an alignment image on a first display and on a second display of the ARVR device. The method captures at least one image of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device using a single camera through a channel combiner. The method analyzes the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device. The method uses the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.

Description

Measurement and Calibration Method for Augmented Reality/Virtual Reality (AR/VR)
Binocular Alignment Errors
Technical Field
[0001] Embodiments of the present disclosure relate to the field of virtual reality, and in particular, to a method and apparatus for performing alignment of an augmented reality/virtual reality (ARVR) device.
BACKGROUND
[0002] Virtual reality (VR) is an artificial environment that is created with a mixture of interactive hardware and software. Augmented reality (AR) overlays virtual objects on the real-world environment, thus providing a composite view. AR/VR, or ARVR, has many uses including gaming, education, military, and healthcare. ARVR is typically viewed through an eye-covering headset device to fully immerse a user.
SUMMARY
[0003] A first aspect relates to a computer-implemented method for performing alignment of an augmented reality/virtual reality (ARVR) device. The method includes displaying an alignment image on a first display and on a second display of the ARVR device. The method captures at least one image of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device using a single camera through a channel combiner. The method analyzes the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device. The method uses the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
[0004] In a first implementation form of the computer-implemented method according to the first aspect, the single camera is a camera integrated in a smartphone that is communicatively coupled to the ARVR device.
[0005] In a second implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, analyzing the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device is performed by an application on the smartphone.
[0006] In a third implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, using the misalignment data to adjust the virtual image displayed on the first display and the second display of the ARVR device comprises sending, by the smartphone, to the ARVR device a single image with misalignment corrections data for the first display and the second display of the ARVR device.
[0007] In a fourth implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, using the misalignment data to adjust the virtual image displayed on the first display and the second display of the ARVR device comprises sending, by the smartphone, to the ARVR device a left alignment corrected image and right alignment corrected image.
[0008] In a fifth implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, the smartphone is communicatively coupled to the ARVR device via a USB-C connector cable.
[0009] In a sixth implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, the channel combiner is an optical subassembly that is externally attached to the smartphone next to the camera of the smartphone.
[0010] In a seventh implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, the channel combiner is an optical subassembly that is externally attached to the ARVR device and aligned to the first display and the second display of the ARVR device.
[0011] In an eighth implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, the at least one image comprises a first image of the alignment image on the first display of the ARVR device and a second image of the alignment image on the second display of the ARVR device.
[0012] In a ninth implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, the at least one image is a single image that comprises a first image of the alignment image on the first display of the ARVR device overlapped with a second image of the alignment image on the second display of the ARVR device.
[0013] In a tenth implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, wherein the channel combiner comprises a first channel for capturing the first image of the alignment image and a second channel for capturing the second image of the alignment image. The channel combiner is configured to reflect the first channel using a mirror surface towards a beam splitting surface that splits the first channel and directs a portion of the first channel towards the camera. The channel combiner is further configured to direct the second channel towards the beam splitting surface that splits the second channel and directs a portion of the second channel towards the camera.
[0014] In an eleventh implementation form of the computer-implemented method according to the tenth implementation form of the first aspect, the mirror surface is a forty-five degree mirror surface and the beam splitting surface is a forty-five degree beam splitting surface.
[0015] In a twelfth implementation form of the computer-implemented method according to the first aspect as such or any preceding implementation form of the first aspect, the alignment image includes at least two reference coordinate points that are used for determining the misalignment data.
[0016] A second aspect relates to a calibration system for performing alignment of an ARVR device. The calibration system includes a channel combiner and a mobile device. The mobile device includes a camera, a processor, and memory. The memory stores an application that includes computer executable instructions. The processor is configured to execute the computer executable instructions to display an alignment image to be displayed on a first display and on a second display of the ARVR device; capture, through the channel combiner, at least one image of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device using the camera; analyze the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device; and use the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
[0017] In a first implementation form of the calibration system according to the second aspect as such, the mirror surface is a forty-five degree mirror surface and the beam splitting surface is a forty-five degree beam splitting surface.
[0018] In a second implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the mobile device is a smartphone.
[0019] In a third implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the processor is further configured to execute the computer executable instructions to send a single image with misalignment corrections data for the first display and the second display of the ARVR device for displaying the virtual image.
[0020] In a fourth implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the processor is further configured to execute the computer executable instructions to send to the ARVR device a left alignment corrected image and right alignment corrected image for displaying the virtual image.
[0021] In a fifth implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the mobile device is communicatively coupled to the ARVR device via a USB-C connector cable.
[0022] In a sixth implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the channel combiner is an optical subassembly that is externally attached to the mobile device next to the camera of the mobile device.
[0023] In a seventh implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the channel combiner is an optical subassembly that is externally attached to the ARVR device and aligned to the first display and the second display of the ARVR device.
[0024] In an eighth implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the at least one image comprises a first image of the alignment image on the first display of the ARVR device and a second image of the alignment image on the second display of the ARVR device.
[0025] In a ninth implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the at least one image is a single image that comprises a first image of the alignment image on the first display of the ARVR device overlapped with a second image of the alignment image on the second display of the ARVR device.
[0026] In a tenth implementation form of the calibration system according to the second aspect as such or any preceding implementation form of the second aspect, the alignment image includes at least two reference coordinate points that are used for determining the misalignment data.
[0027] A third aspect relates to an apparatus or system comprising means for performing any of the preceding aspects as such or any preceding implementation form of any of the preceding aspects.
[0028] Additional details of the above aspects and other embodiments, as well as the advantages thereof, are further described in the Detailed Description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
[0030] FIG. 1 is a schematic diagram illustrating a conventional measurement method for performing alignment of an ARVR device.
[0031] FIGS. 2A-2C are schematic diagrams illustrating user eye impact from the use of an ARVR device in accordance with an embodiment of the present disclosure.
[0032] FIG. 3 is a schematic diagram illustrating a measurement method for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
[0033] FIG. 4A is a schematic diagram illustrating a first embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
[0034] FIG. 4B is a schematic diagram illustrating misalignment of the first embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
[0035] FIG. 4C is a schematic diagram illustrating a second embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
[0036] FIG. 5A is a schematic diagram illustrating a third embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
[0037] FIG. 5B is a schematic diagram illustrating a fourth embodiment of an optical component of a channel combiner in accordance with an embodiment of the present disclosure.
[0038] FIG. 6A is a schematic diagram illustrating a first image for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
[0039] FIG. 6B is a schematic diagram illustrating a second image for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure.
[0040] FIG. 7 provides a schematic diagram illustrating misalignment of the first image and the second image in accordance with an embodiment of the present disclosure.
[0041] FIG. 8 is a schematic diagram illustrating a connection between an ARVR device and a smartphone in accordance with an embodiment of the present disclosure.
[0042] FIG. 9 is a flowchart illustrating a process for determining whether an ARVR device requires alignment in accordance with an embodiment of the present disclosure.
[0043] FIG. 10 is a flowchart illustrating a measurement alignment process in accordance with an embodiment of the present disclosure.
[0044] FIG. 11A is a schematic diagram illustrating a first method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure.
[0045] FIG. 11B is a schematic diagram illustrating a second method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure.
[0046] FIG. 12 is a schematic diagram illustrating an apparatus in accordance with an embodiment of the present disclosure.
[0047] FIG. 13 is a schematic diagram illustrating an apparatus according to an embodiment of the disclosure.
[0048] The illustrated figures are only exemplary and are not intended to assert or imply any limitation with regard to the environment, architecture, design, or process in which different embodiments may be implemented. Any optional components or steps are indicated using dashed lines in the illustrated figures.
DETAILED DESCRIPTION
[0049] It should be understood at the outset that although illustrative implementations of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[0050] As used within the written disclosure and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to". Unless otherwise indicated, as used throughout this document, "or" does not require mutual exclusivity, and the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
[0051] A module or unit as referenced herein may comprise one or more hardware or electrical components such as electrical circuitry, processors, and memory that may be specially configured to perform a particular function. The memory may be volatile memory or non-volatile memory that stores data such as, but not limited to, computer executable instructions, machine code, and other various forms of data. The module or unit may be configured to use the data to execute one or more instructions to perform one or more tasks. In certain instances, a unit may also refer to a particular set of functions, software instructions, or circuitry that is configured to perform a specific task.
[0052] To minimize potential user experience issues with ARVR, it is critical to adjust the two displays in any ARVR device and align them precisely. To perform a proper adjustment, the binocular alignment error must first be determined. The present disclosure provides several embodiments for improving the conventional method for determining the binocular alignment error, which uses two cameras to capture images from the two displays of the ARVR device to determine the alignment error. In contrast to the conventional method, the disclosed embodiments include an optical component for combining the two display channels and using only a single camera for capturing alignment error. In various embodiments, the optical component can be a prism, a reflective waveguide, or a diffractive waveguide. The alignment of the two channels of the test system can be either through high precision optics or through a calibration process. The camera can be one of the cameras integrated with a smartphone, or a standalone camera. The disclosed embodiments can also enable control and data analysis to be performed on smartphones, or on other computing devices.
[0053] Compared to the conventional method, the disclosed embodiments are simpler, less prone to environmental impact, and more cost effective. In addition, the disclosed embodiments provide a portable solution and can be provided as an accessory of ARVR products to enhance user experience.
[0054] FIG. 1 is a schematic diagram illustrating a conventional measurement method 100 for performing alignment of an ARVR device 110. In some embodiments, the conventional measurement method 100 uses a binocular alignment jig 120 to support or align the ARVR device 110. The ARVR device 110 can be any type of device that is capable of displaying a virtual reality environment or augmented reality environment to a user. For example, the ARVR device 110 can be a pair of glasses or an eye-covering headset device that fully immerses a user in the ARVR environment. The ARVR device 110 includes a display module 112 and a display module 114. The display module 112 is configured to display a first image that is presented to the left eye of a user. The display module 114 is configured to display a second image that is presented to the right eye of the user. When viewed together, the first image and the second image create the perception of a virtual image 102. The virtual image 102 can be a virtual reality image or an augmented reality image.
[0055] To minimize potential user experience issues, it is critical to align the two display modules 112, 114 in the ARVR device 110 precisely. To perform this alignment, the conventional measurement method 100 uses a binocular measurement jig 120. The binocular measurement jig 120 includes camera 122 and camera 124. Each of the cameras 122, 124 can include a camera lens coupled to the camera to enable zooming or focusing of an image. The camera 122 and camera 124 are each used to capture a frame or image of the virtual image 102 that is displayed on the display module 112 and the display module 114. The frame or image is then analyzed to determine an alignment error of the ARVR device 110.
[0056] Several shortcomings associated with the conventional measurement method 100 include the required use of a specialized binocular measurement jig 120 that is complicated, costly, and generally not available to a user. Additionally, the binocular measurement jig 120 is prone to environmental impact. For example, calibration of the binocular measurement jig 120 needs to be repeated any time the relative alignment between camera 122 and camera 124 changes. Accordingly, the disclosed embodiments provide an improved method and apparatus for determining the binocular alignment error, as will be described herein.
[0057] FIGS. 2A-2C are schematic diagrams illustrating user eye impact from the use of an ARVR device in accordance with an embodiment of the present disclosure. In particular, FIG. 2A is a schematic diagram illustrating a pair of normal eyes when the display modules of an ARVR device are properly aligned. FIG. 2B is a schematic diagram illustrating a pair of eyes that converge towards the center and FIG. 2C is a schematic diagram illustrating a pair of eyes that diverge away from the center. Both FIGS. 2B and 2C illustrate the potential impact of misaligned ARVR displays on a pair of eyes. The misalignment can cause user discomfort even with short term use of an ARVR device. The misalignment can also cause permanent damage to eyesight and depth perception from extended use of the ARVR device.
[0058] FIG. 3 is a schematic diagram illustrating a measurement method 300 for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure. The measurement method 300 simplifies the hardware and software of the existing measurement system by using a channel combiner 302 and a single camera 304 for performing alignment of the ARVR device 110. The channel combiner 302 is a device that optically combines two channels into one. In certain embodiments, the channel combiner 302 is compact and rigid, which makes it possible to keep the system simple, portable, and low cost. Various designs and configurations of several embodiments of the channel combiner 302 will be described herein. In some embodiments, the single camera 304 is a camera that is integrated within a smartphone or other portable device that includes a camera (e.g., a tablet or other mobile computing devices). By leveraging the camera of a smartphone, it is possible to deploy the measurement method 300 solution in the field if necessary.
[0059] FIG. 4A is a schematic diagram illustrating a first embodiment of an optical component 410 of a channel combiner in accordance with an embodiment of the present disclosure. In the depicted embodiment, the optical component 410 comprises a mirror 412 and a beam splitter 416. The mirror 412 is configured to reflect an incoming optical beam 414 containing an image from the left display module of the ARVR device 110 to the beam splitter 416. The beam splitter 416 is also configured to receive an incoming optical beam 418 from the right display module of the ARVR device 110. The beam splitter 416 is an optical device which can split a light beam into two beams, which may or may not have the same optical power. For example, in the depicted embodiment, the beam splitter 416 is configured to split the optical beam 414 and the optical beam 418, and directs the split optical beam 414 and optical beam 418 towards the camera 304. In some embodiments, the surface of the mirror 412 is a forty-five degree mirror surface and the surface of the beam splitter 416 is a forty-five degree beam splitting surface. In some embodiments, the optical component 410 can include one or more anti-reflective (AR) surfaces 420.
[0060] FIG. 4B is a schematic diagram illustrating misalignment of the first embodiment of the optical component 410 of a channel combiner in accordance with an embodiment of the present disclosure. In particular, the depicted embodiment illustrates that even when the optical component 410 of a channel combiner is not properly aligned (e.g., when a user does not properly align the channel combiner to the ARVR device 110 or to the camera 304/smartphone device), the optical component 410 can still direct the optical beam 414 and the optical beam 418 towards the camera 304 in the same manner as described in FIG. 4A. Thus, the disclosed measurement is, to a certain degree, insensitive to misalignment among an ARVR product under test, a channel combiner, and a camera.
[0061] FIG. 4C is a schematic diagram illustrating a second embodiment of an optical component 430 of a channel combiner in accordance with an embodiment of the present disclosure. The configuration of the optical component 430 is the same as that of the optical component 410 except that it is flipped, so that the mirror 412 is configured to reflect the incoming optical beam 418 containing an image from the right display module of the ARVR device 110 to the beam splitter 416. The beam splitter 416 is also configured to receive the incoming optical beam 414 from the left display module of the ARVR device 110. The beam splitter 416 is configured to split the optical beam 414 and the optical beam 418. The beam splitter 416 directs the split optical beam 414 and optical beam 418 towards the camera 304.
[0062] FIG. 5A is a schematic diagram illustrating a third embodiment of an optical component 510 of a channel combiner in accordance with an embodiment of the present disclosure. In the depicted embodiment, the optical component 510 is an optical waveguide that can include one or more diffractive gratings 512. The diffractive grating 512 is an optical component with a periodic structure that splits and diffracts light into several beams travelling in different directions. For example, in the depicted embodiment, the optical beam 414 from the left display module of the ARVR device 110 enters the optical component 510 and hits a first diffractive grating 512. The first diffractive grating 512 splits and diffracts the optical beam 414. At least one of the diffracted optical beams 414 is reflected internally inside the waveguide towards a second diffractive grating 512 that directs the diffracted optical beam 414 towards the camera 304. The second diffractive grating 512 also receives the incoming optical beam 418 containing an image from the right display module of the ARVR device 110 and diffracts the optical beam 418 towards the camera 304.
[0063] FIG. 5B is a schematic diagram illustrating a fourth embodiment of an optical component 520 of a channel combiner in accordance with an embodiment of the present disclosure. The optical component 520 is the same as the optical component 510 except that it has a reversed configuration, where a first diffractive grating 512 diffracts the optical beam 418 containing an image from the right display module of the ARVR device 110. The optical beam 418 is diffracted internally within the waveguide towards a second diffractive grating 512, which directs the optical beam 418 and optical beam 414 towards the camera 304.
[0064] FIG. 6A is a schematic diagram illustrating a first image 610 for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure. FIG. 6B is a schematic diagram illustrating a second image 620 for performing alignment of an ARVR device in accordance with an embodiment of the present disclosure. In the depicted embodiment, the first image 610 and second image 620 include two coordinates A and B that are used to determine misalignment of the two display modules of the ARVR device. Additional coordinates may be used in various embodiments for determining misalignment of the two display modules. In an embodiment, the coordinates A and B for the first image 610 can be labeled as A: (xa1, ya1), B: (xb1, yb1). The coordinates A and B for the second image 620 can be labeled as A: (xa2, ya2), B: (xb2, yb2).
[0065] FIG. 7 provides a schematic diagram illustrating misalignment of the first image 610 and the second image 620 in accordance with an embodiment of the present disclosure. In an embodiment, the misalignment in the horizontal (X) direction is determined using the formula xa2 - xa1, and the misalignment in the vertical (Y) direction is determined using the formula ya2 - ya1. In an embodiment, the rotation misalignment can be determined using the formula atan[(yb1 - ya1)/(xb1 - xa1)] - atan[(yb2 - ya2)/(xb2 - xa2)]. The horizontal, vertical, and rotation calculations are then used to align the first image 610 and the second image 620.
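The formulas of paragraph [0065] can be sketched as a small function. This is only an illustrative sketch, not part of the disclosure: the function name and argument layout are assumptions, and atan2 is substituted for atan of the ratio so that a vertical A-B segment does not divide by zero.

```python
import math

def misalignment(left_a, left_b, right_a, right_b):
    """Compute binocular misalignment from reference points A and B
    captured from the first (left) and second (right) display channels.

    Each argument is an (x, y) pixel coordinate. Returns the horizontal
    offset xa2 - xa1, the vertical offset ya2 - ya1, and the relative
    rotation atan[(yb1-ya1)/(xb1-xa1)] - atan[(yb2-ya2)/(xb2-xa2)]
    in radians.
    """
    (xa1, ya1), (xb1, yb1) = left_a, left_b
    (xa2, ya2), (xb2, yb2) = right_a, right_b
    dx = xa2 - xa1
    dy = ya2 - ya1
    # atan2 handles a vertical A-B segment, where atan of the slope ratio
    # would divide by zero
    rot = math.atan2(yb1 - ya1, xb1 - xa1) - math.atan2(yb2 - ya2, xb2 - xa2)
    return dx, dy, rot

# Identical reference points on both channels yield zero misalignment
print(misalignment((10, 10), (90, 10), (10, 10), (90, 10)))  # (0, 0, 0.0)
```

A second channel shifted by (3, -4) pixels with no rotation would, for example, return `(3, -4, 0.0)`; the signs follow the image-2-minus-image-1 convention of the formulas above.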
[0066] FIG. 8 is a schematic diagram illustrating a connection between an ARVR device 800 and a smartphone 820 in accordance with an embodiment of the present disclosure. As stated above, the disclosed embodiments use a single camera 304 and a channel combiner 302 for performing alignment of the display modules of the ARVR device 800. In various embodiments, the channel combiner 302 can be a separate module or device that is attached to either the ARVR device 800 or the smartphone 820. The camera 304 can be a standalone camera or a camera integrated into another device such as, but not limited to, the smartphone 820. The smartphone 820 is not limited to any particular make or model. The smartphone 820 can also be integrated with multiple cameras. In alternative embodiments, other electronic devices with integrated cameras may also be used. Non-limiting examples include smart glasses, smart watches, laptops, and tablets with built in camera(s).
[0067] In an embodiment, an application may be loaded onto the smartphone 820, or other electronic device, for performing alignment of the ARVR device 800. The application may be configured to select any of the cameras or a specific camera (e.g., primary camera, wide-angle camera, etc.) of the smartphone 820 for performing alignment of the ARVR device 800. In an embodiment, the selection of the camera may be based on an aspect ratio or picture quality of the image being displayed on the display modules of the ARVR device 800. As shown in the depicted embodiment, the smartphone 820 may be communicatively coupled to the ARVR device 800 via a wired connection using any type of wired communication protocol (e.g., universal serial bus (USB), Lightning, microUSB, and USB-C). Alternatively, in some embodiments, the smartphone 820 may be communicatively coupled to the ARVR device 800 via a wireless connection such as Bluetooth, Wi-Fi, or other wireless technologies.
[0068] FIG. 9 is a flowchart illustrating a process 900 for determining whether an ARVR device requires alignment in accordance with an embodiment of the present disclosure. The process 900 may be performed by the smartphone 820 or other electronic devices. The process 900 begins at step 902 by establishing a connection with the ARVR device. A connection can be established using a mobile app or application that is configured to communicate with the ARVR device. Once the connection is established, the process 900, at step 904, determines whether the ARVR device requires an alignment calibration process. In certain embodiments, an alignment calibration process can be required after a predetermined number of uses, after a certain time period, if the ARVR device is new, if the ARVR device is connected to a new image source, or if it is detected that the ARVR device has fallen or had a hard impact that may affect the alignment of the images of the display modules. If the process 900 determines at step 904 that an alignment calibration process is required, the process 900, at step 906, performs a measurement alignment process for aligning the display modules of the ARVR device as described below in FIG. 10. Otherwise, the process 900, at step 908, displays ARVR content on the display modules of the ARVR device, with process 900 terminating thereafter. In some embodiments, the ARVR content may be provided by the smartphone 820 or other electronic device, or may be directly downloaded or installed on the ARVR device.
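Process 900 can be sketched as a simple decision routine. The device interface below (connect, needs_calibration, the use-count threshold, and the log of actions) is a hypothetical stand-in, since the disclosure only names example conditions that may trigger calibration:

```python
class ARVRDevice:
    """Minimal stand-in for the headset; the real interface is not
    specified in the disclosure."""

    def __init__(self, uses=0, max_uses=100, is_new=False):
        self.uses = uses          # uses since last calibration
        self.max_uses = max_uses  # hypothetical recalibration threshold
        self.is_new = is_new
        self.log = []

    def connect(self):
        self.log.append("connected")

    def needs_calibration(self):
        # Step 904: calibrate if the device is new or past a use-count
        # threshold (other triggers, e.g. impact detection, omitted).
        return self.is_new or self.uses >= self.max_uses

def run_calibration_check(device):
    """Sketch of process 900 (FIG. 9)."""
    device.connect()                    # step 902: establish connection
    if device.needs_calibration():      # step 904: decide
        device.log.append("calibrate")  # step 906: run FIG. 10 process
    else:
        device.log.append("display")    # step 908: show ARVR content
    return device.log
```

A brand-new device routes to calibration; an already-calibrated one goes straight to content display.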
[0069] FIG. 10 is a flowchart illustrating a measurement alignment process 1000 in accordance with an embodiment of the present disclosure. As stated above, the process 1000 may be performed by the smartphone 820 or other electronic devices communicatively coupled to the ARVR device. The process 1000 begins, at step 1002, by displaying two alignment patterns on the two display modules of the ARVR device. The process 1000, at step 1004, uses a single camera to take at least one image from each of the display modules through a channel combiner. At step 1006, the process 1000 analyzes the images to determine misalignment data of the two display modules. One embodiment for determining the misalignment data is described with reference to FIGS. 6A-6B and FIG. 7. The process 1000, at step 1008, adjusts alignment of the images on the two display modules based on the misalignment data, with process 1000 terminating thereafter.
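A minimal sketch of process 1000, with each step injected as a callable, since the disclosure does not fix how the pattern display, capture, analysis, or adjustment is implemented:

```python
def measurement_alignment(display_patterns, capture, analyze, adjust):
    """Sketch of process 1000 (FIG. 10); every argument is a
    caller-supplied callable standing in for device-specific behavior."""
    display_patterns()    # step 1002: show patterns on both display modules
    image = capture()     # step 1004: single camera through channel combiner
    data = analyze(image) # step 1006: misalignment data (FIGS. 6A-6B, 7)
    adjust(data)          # step 1008: correct alignment on both modules
    return data
```

A usage example with trivial stand-in callables:

```python
log = []
data = measurement_alignment(
    lambda: log.append("patterns shown"),
    lambda: "captured image",
    lambda img: {"dx": 1, "dy": 0},
    lambda d: log.append(("adjusted", d)),
)
```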
[0070] FIG. 11A is a schematic diagram illustrating a first method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure. In the depicted embodiment, a smartphone or other electronic device transmits a large image 1100A with misalignment corrections to the ARVR device. The large image 1100A comprises a left image 1100AL and a right image 1100AR. The ARVR device splits the large image 1100A into the left image 1100AL and the right image 1100AR. The ARVR device displays the left image 1100AL on the left display module and the right image 1100AR on the right display module of the ARVR device based on the misalignment corrections data provided by the smartphone.
[0071] FIG. 11B is a schematic diagram illustrating a second method for displaying an image on an ARVR device in accordance with an embodiment of the present disclosure. In the depicted embodiment, a smartphone or other electronic device transmits a small image 1100B with misalignment corrections to the ARVR device. The ARVR device duplicates the small image 1100B into a left image 1100BL and a right image 1100BR. The ARVR device displays the left image 1100BL on the left display module and the right image 1100BR on the right display module of the ARVR device using the misalignment corrections data provided by the smartphone.
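The two delivery methods of FIGS. 11A and 11B can be sketched as follows, treating an image as a list of pixel rows; this is a simplification for illustration, as the disclosure does not specify the image representation:

```python
def split_wide_image(pixels, width):
    """FIG. 11A: the ARVR device splits one wide image into left and
    right halves, one per display module (even width assumed)."""
    half = width // 2
    left = [row[:half] for row in pixels]
    right = [row[half:] for row in pixels]
    return left, right

def duplicate_image(pixels):
    """FIG. 11B: the ARVR device duplicates one small image onto both
    display modules (deep-ish copy so each module owns its rows)."""
    left = [row[:] for row in pixels]
    right = [row[:] for row in pixels]
    return left, right
```

In either case the misalignment corrections data computed during calibration is then applied when the left and right images are presented on their respective display modules.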
[0072] FIG. 12 is a schematic diagram illustrating an apparatus 1200 in accordance with an embodiment of the present disclosure. The apparatus 1200 can be a smartphone or other electronic device capable of implementing the disclosed embodiments. In the depicted embodiment, the apparatus 1200 includes a processor 1202, memory 1204, a display 1206, a cellular baseband modem 1208, a cellular transceiver 1210, a subscriber identity module (SIM) card 1212, a Bluetooth component 1214, a wireless local area network (WLAN) component 1216, a camera 1218, a data storage unit 1220, and a power supply 1222. In various embodiments, the apparatus 1200 can include additional components not depicted in this embodiment or may exclude one or more components depicted in this embodiment. Additionally, although only a singular form of a component is described herein, the apparatus 1200 may include more than one of a particular component such as having multiple data storage units 1220 or multiple cameras 1218. The cameras 1218 can be of various other types such as, but not limited to, a camera with a depth sensor, a camera with a monochrome sensor, a wide-angle (or super-wide) camera, and a telephoto camera.
[0073] The processor 1202 can be any type of processor capable of executing instructions for implementing the disclosed embodiments. As an example, the processor 1202 can be a mobile application processor designed to support applications running in a mobile operating system environment. The processor 1202 can be a system on a chip (SoC) that provides a self-contained operating environment that delivers all system capabilities needed to support a device's applications, including memory management, graphics processing, and multimedia decoding. The memory 1204 can be volatile memory such as, but not limited to, synchronous dynamic random-access memory (SDRAM). The data storage unit 1220 is any type of non-volatile memory. The memory 1204 and data storage unit 1220 are not limited to any particular size. The power supply 1222 can be a battery or power unit that receives power from an external power source and provides power to the components of the apparatus 1200.
[0074] In the depicted embodiment, the cellular baseband modem 1208 is coupled to the cellular transceiver 1210. The cellular transceiver 1210 is configured to transmit and receive radio frequency (RF) signals. The RF signals can contain audio, video, and/or data. The cellular baseband modem 1208 can include a modem processor and operating system (OS) components for processing the incoming RF signals to extract the audio, video, and/or data. The cellular baseband modem 1208 can also be coupled to a SIM card 1212. The SIM card 1212 contains a unique identification number for identifying the subscriber to a mobile network, memory that stores personal data such as a contact list, and may include security measures that prevent the SIM card 1212 from being used in other devices.
[0075] The Bluetooth component 1214 provides Bluetooth capability to the apparatus 1200. Bluetooth is a wireless technology standard for exchanging data between fixed and mobile devices over short distances using short-wavelength ultra high frequency (UHF) radio waves. The WLAN component 1216 enables the apparatus 1200 to be part of a wireless computer network that links two or more devices using wireless communications. The apparatus 1200 can be extended with various other forms of wired or wireless communications.
[0076] FIG. 13 is a schematic diagram illustrating an apparatus 1300 according to an embodiment of the disclosure. The apparatus 1300 includes means for performing alignment of an ARVR device as described herein. In an embodiment, the apparatus 1300 includes receiving means 1302, transmission means 1304, storage means 1306, an image capturing means 1308, and a processing means 1310. The receiving means 1302, transmission means 1304, storage means 1306, image capturing means 1308, and processing means 1310 can be communicatively coupled via a communication bus or link 1320.
[0077] In certain embodiments, the receiving means 1302 can be configured to receive data or other input from a second device via a wired or wireless connection. The transmission means 1304 can be configured to transmit data to a second device via a wired or wireless connection. The storage means 1306 can be configured to store computer executable instructions and other data. The image capturing means 1308 is configured to capture at least one image of the alignment image on the first display of an ARVR device and the alignment image on the second display of the ARVR device.

[0078] In certain embodiments, the processing means 1310 is configured to execute the instructions stored in the storage means 1306 to perform the disclosed methods. For example, the processing means 1310 can execute the instructions to display an alignment image on a first display and a second display of the ARVR device, analyze at least one image to determine misalignment data, and use the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
[0079] The disclosed embodiments may be a system, an apparatus, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
[0080] While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
[0081] In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.

Claims

What is claimed is:
1. A method for performing alignment of an augmented reality/virtual reality (ARVR) device, the method comprising:
displaying an alignment image on a first display of the ARVR device;
displaying the alignment image on a second display of the ARVR device;
capturing at least one image of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device using a single camera through a channel combiner;
analyzing the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device; and
using the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
2. The method of claim 1, wherein the single camera is a camera of a smartphone that is communicatively coupled to the ARVR device.
3. The method according to claim 2, wherein analyzing the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device is performed by an application on the smartphone.
4. The method according to claim 3, wherein using the misalignment data to adjust the virtual image displayed on the first display and the second display of the ARVR device comprises sending, by the smartphone, to the ARVR device a single image with misalignment corrections data for the first display and the second display of the ARVR device.
5. The method according to claim 3, wherein using the misalignment data to adjust the virtual image displayed on the first display and the second display of the ARVR device comprises sending, by the smartphone, to the ARVR device a left alignment corrected image and right alignment corrected image.
6. The method according to any of claims 4-5, wherein the smartphone is communicatively coupled to the ARVR device via a universal serial bus type-C (USB-C) connector cable.
7. The method according to any of claims 1-6, wherein the channel combiner is an optical subassembly that is externally attached to the smartphone next to the camera of the smartphone.
8. The method according to any of claims 1-6, wherein the channel combiner is an optical subassembly that is externally attached to the ARVR device and aligned to the first display and the second display of the ARVR device.
9. The method according to any of claims 1-8, wherein the at least one image comprises a first image of the alignment image on the first display of the ARVR device and a second image of the alignment image on the second display of the ARVR device.
10. The method according to any of claims 1-8, wherein the at least one image is a single image that comprises a first image of the alignment image on the first display of the ARVR device overlapped with a second image of the alignment image on the second display of the ARVR device.
11. The method according to any of claims 9-10, wherein the channel combiner comprises a first channel for capturing the first image of the alignment image and a second channel for capturing the second image of the alignment image, the channel combiner configured to reflect the first channel using a mirror surface towards a beam splitting surface that splits the first channel and directs a portion of the first channel towards the camera, the channel combiner further configured to direct the second channel towards the beam splitting surface that splits the second channel and directs a portion of the second channel towards the camera.
12. The method of claim 11, wherein the mirror surface is a forty-five degree mirror surface and the beam splitting surface is a forty-five degree beam splitting surface.
13. The method according to any of claims 1-12, wherein the alignment image includes at least two reference coordinate points that are used for determining the misalignment data.
14. A calibration system for performing alignment of an augmented reality/virtual reality (ARVR) device, the calibration system comprising:
a channel combiner; and
a mobile device comprising a camera, a processor, and memory, wherein the memory stores an application that includes computer executable instructions, the processor is configured to execute the computer executable instructions to:
display an alignment image to be displayed on a first display and on a second display of the ARVR device;
capture, through the channel combiner, at least one image of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device using the camera;
analyze the at least one image to determine misalignment data of the alignment image on the first display of the ARVR device and the alignment image on the second display of the ARVR device; and
use the misalignment data to adjust a virtual image displayed on the first display and the second display of the ARVR device.
15. The calibration system according to claim 14, wherein the channel combiner comprises a first channel for capturing a first image of an alignment image and a second channel for capturing a second image of the alignment image, the channel combiner configured to reflect the first channel using a mirror surface towards a beam splitting surface that splits the first channel and directs a portion of the first channel towards the camera of the mobile device, the channel combiner further configured to direct the second channel towards the beam splitting surface that splits the second channel and directs a portion of the second channel towards the camera of the mobile device.
16. The calibration system according to any of claims 14-15, wherein the mirror surface is a forty-five degree mirror surface and the beam splitting surface is a forty-five degree beam splitting surface.
17. The calibration system according to any of claims 14-16, wherein the mobile device is a smartphone.
18. The calibration system according to any of claims 14-17, wherein the processor is further configured to execute the computer executable instructions to send a single image with misalignment corrections data for the first display and the second display of the ARVR device for displaying the virtual image.
19. The calibration system according to any of claims 14-17, wherein the processor is further configured to execute the computer executable instructions to send to the ARVR device a left alignment corrected image and right alignment corrected image for displaying the virtual image.
20. The calibration system according to any of claims 14-19, wherein the mobile device is communicatively coupled to the ARVR device via a universal serial bus type-C (USB-C) connector cable.
21. The calibration system according to any of claims 14-20, wherein the channel combiner is an optical subassembly that is externally attached to the mobile device next to the camera of the mobile device.
22. The calibration system according to any of claims 14-20, wherein the channel combiner is an optical subassembly that is externally attached to the ARVR device and aligned to the first display and the second display of the ARVR device.
23. The calibration system according to any of claims 14-22, wherein the at least one image comprises a first image of the alignment image on the first display of the ARVR device and a second image of the alignment image on the second display of the ARVR device.
24. The calibration system according to any of claims 14-22, wherein the at least one image is a single image that comprises a first image of the alignment image on the first display of the ARVR device overlapped with a second image of the alignment image on the second display of the ARVR device.
25. The calibration system according to any of claims 14-24, wherein the alignment image includes at least two reference coordinate points that are used for determining the misalignment data.
PCT/US2019/027905 2019-04-17 2019-04-17 Measurement and calibration method for augmented reality/virtual reality (ar/vr) binocular alignment errors technical field WO2020214162A1 (en)


Publications (1)

Publication Number Publication Date
WO2020214162A1 true WO2020214162A1 (en) 2020-10-22


Country Status (1)

Country Link
WO (1) WO2020214162A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130187943A1 (en) * 2012-01-23 2013-07-25 David D. Bohn Wearable display device calibration
US20140375681A1 (en) * 2013-06-24 2014-12-25 Steven John Robbins Active binocular alignment for near eye displays
US20190019341A1 (en) * 2015-07-06 2019-01-17 Seiko Epson Corporation Head-mounted display device and computer program



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19723252; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19723252; Country of ref document: EP; Kind code of ref document: A1)