WO2023031633A1 - Online calibration based on deformable body mechanics - Google Patents

Online calibration based on deformable body mechanics

Info

Publication number
WO2023031633A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
deformation
sensor
eye
computer
Prior art date
Application number
PCT/GR2021/000057
Other languages
English (en)
Inventor
Aaron SCHMITZ
Benjamin Elliott Tunberg ROGOZA
Sharvil Shailesh TALATI
Anastasios Mourikis
Elmer Cajigas
Kevin ECKENHOFF
Original Assignee
Facebook Technologies, Llc
Priority date
Filing date
Publication date
Application filed by Facebook Technologies, LLC
Priority to PCT/GR2021/000057
Publication of WO2023031633A1

Classifications

    • G06F3/013: Eye tracking input arrangements
    • G02B27/0093: Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B27/0176: Head-up displays, head mounted, characterised by mechanical features
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012: Head tracking input arrangements
    • G06F3/0304: Detection arrangements using opto-electronic means
    • G06F3/0346: Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G02B2027/0181: Display position adjusting means; adaptation to the pilot/driver

Description

  • FIG. 1 is an illustration of exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
  • FIG. 2 is an illustration of an exemplary virtual-reality headset that may be used in connection with embodiments of this disclosure.
  • FIG. 3 is an illustration of an exemplary system that incorporates an eye-tracking subsystem capable of tracking a user's eye(s).
  • FIG. 4 is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 3.
  • FIG. 5 is an illustration of an exemplary system for online calibration based on deformable body mechanics.
  • FIG. 6 is a flow diagram of an exemplary method for online calibration based on deformable body mechanics.
  • FIG. 7 is a flow diagram of an additional exemplary method for online calibration based on deformable body mechanics.
  • FIG. 8 is an illustration of additional exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
  • FIG. 9 is an illustration of an exemplary deformation caused by torsion performed on exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
  • FIG. 10 is an illustration of an exemplary deformation caused by flexing performed on exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
  • FIG. 11 is an illustration of additional exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
  • FIG. 12 is an illustration of additional exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
  • an artificial-reality system will include a head-mounted display to deliver images to a user's eyes.
  • conventional artificial-reality systems may also employ techniques during image display to account for a user's head motions and eye-tracking to ensure that images are accurately displayed to the user's eyes.
  • Traditional artificial-reality systems may be calibrated at the factory to ensure that the various sensors and display components are optimized to provide a suitable user experience.
  • the configuration settings and locations on the head-mounted display of the various sensors and display components may be pre-set and established for the life of the product.
  • the conditions during actual use of the head-mounted display may vary from those of the factory, leading to inaccurate video display and an unsatisfactory user experience.
  • some head-mounted displays and other artificial-reality systems use certain form-factors, such as a form-factor resembling traditional-looking glasses, that may be prone to deformation during use.
  • a pair of augmented-reality glasses may be subject to deformations caused by torsion or flexing during use, further complicating the presentation of an immersive user experience as sensors and display components may move relative to each other and may not stay in alignment during deformation, potentially causing user discomfort or inaccurate depiction of objects in an image.
  • the present disclosure is generally directed to systems and methods for online calibration based on deformable body mechanics.
  • embodiments of the present disclosure may detect deformation of a frame, such as a wearable frame of a head-mounted display for an artificial-reality system, determine how the deformation changes a position of a device coupled to the frame, and compensate for the deformation of the frame by accounting for the changed position of the device when performing various operations such as displaying an image or performing eye-tracking.
  • embodiments of the present disclosure may enhance a user's artificial-reality experience as displayed images compensate for changing positions of devices of the frame and as eye-tracking sensors better track a user's eye during real-time use.
  • consumers may have more of a variety of artificial-reality system form factors to choose from as designs that would otherwise be unusable (due to having frames and other support structures subject to deformation) may become available.
  • the embodiments of the present disclosure may calibrate the frame and its devices in real-time by accounting for frame deformation and/or other dynamic factors as a frame is worn by a user.
  • the systems and methods described herein may improve the functioning of a computer itself by improving the computer's ability to approximate positions of devices on a deformable frame in real-time to accurately perform operations that depend on device position, such as displaying digital images and performing eye-tracking operations.
  • the systems and methods described herein may additionally improve the functioning of artificial-reality technologies by enhancing the creation of immersive artificial-reality environments that remain functional in real-world use of artificial-reality systems.
  • the present disclosure will provide detailed descriptions of embodiments related to (1) detecting deformation of a wearable frame and compensating for a changed position of a component of a display system— housed by the wearable frame— when processing an image to be displayed to a user of the wearable frame and (2) using simultaneous location and mapping (“SLAM”) data gathered from a SLAM sensor coupled to a frame to determine how deformation of the frame changes the position of a device relative to an environment.
  • FIGS. 3 and 4 depict an exemplary system that incorporates an eye-tracking subsystem.
  • FIG. 5 depicts an exemplary system for online calibration based on deformable body mechanics.
  • FIGS. 6 and 7 depict methods for online calibration based on deformable body mechanics.
  • FIGS. 9 and 10 depict examples of deformation actions performed on such glasses, such as those caused by torsion and flexing, and FIGS. 11 and 12 depict further embodiments of exemplary augmented-reality glasses.
  • Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial-reality systems.
  • Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof.
  • Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content.
  • the artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer).
  • artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
  • Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial-reality systems may be designed to work without neareye displays (NEDs). Other artificial-reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 100 in FIG. 1) or that visually immerses a user in an artificial reality (such as, e.g., virtual-reality system 200 in FIG. 2). While some artificial-reality devices may be self-contained systems, other artificial-reality devices may communicate and/or coordinate with external devices to provide an artificial-reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.
  • augmented-reality system 100 may include an eyewear device 102 with a frame 110 configured to hold a left display device 115(A) and a right display device 115(B) in front of a user's eyes.
  • Display devices 115(A) and 115(B) may act together or independently to present an image or series of images to a user.
  • although augmented-reality system 100 includes two displays, embodiments of this disclosure may be implemented in augmented-reality systems with a single NED or more than two NEDs.
  • augmented-reality system 100 may include one or more sensors, such as sensor 140.
  • Sensor 140 may generate measurement signals in response to motion of augmented-reality system 100 and may be located on substantially any portion of frame 110.
  • Sensor 140 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof.
  • augmented-reality system 100 may or may not include sensor 140 or may include more than one sensor.
  • the IMU may generate calibration data based on measurement signals from sensor 140.
  • Examples of sensor 140 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
  • augmented-reality system 100 may also include a microphone array with a plurality of acoustic transducers 120(A)-120(J), referred to collectively as acoustic transducers 120.
  • Acoustic transducers 120 may represent transducers that detect air pressure variations induced by sound waves.
  • Each acoustic transducer 120 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format).
  • acoustic transducers 120 may include, for example, acoustic transducers 120(A) and 120(B), which may be designed to be placed inside a corresponding ear of the user, acoustic transducers 120(C), 120(D), 120(E), 120(F), 120(G), and 120(H), which may be positioned at various locations on frame 110, and/or acoustic transducers 120(I) and 120(J), which may be positioned on a corresponding neckband 105.
  • acoustic transducers 120(A)-(J) may be used as output transducers (e.g., speakers).
  • acoustic transducers 120(A) and/or 120(B) may be earbuds or any other suitable type of headphone or speaker.
  • the configuration of acoustic transducers 120 of the microphone array may vary. While augmented-reality system 100 is shown in FIG. 1 as having ten acoustic transducers 120, the number of acoustic transducers 120 may be greater or less than ten. In some embodiments, using higher numbers of acoustic transducers 120 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 120 may decrease the computing power required by an associated controller 150 to process the collected audio information. In addition, the position of each acoustic transducer 120 of the microphone array may vary. For example, the position of an acoustic transducer 120 may include a defined position on the user, a defined coordinate on frame 110, an orientation associated with each acoustic transducer 120, or some combination thereof.
  • Acoustic transducers 120(A) and 120(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 120 on or surrounding the ear in addition to acoustic transducers 120 inside the ear canal. Having an acoustic transducer 120 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal.
  • augmented-reality device 100 may simulate binaural hearing and capture a 3D stereo sound field around a user's head.
  • acoustic transducers 120(A) and 120(B) may be connected to augmented-reality system 100 via a wired connection 130, and in other embodiments acoustic transducers 120(A) and 120(B) may be connected to augmented-reality system 100 via a wireless connection (e.g., a Bluetooth connection).
  • acoustic transducers 120(A) and 120(B) may not be used at all in conjunction with augmented-reality system 100.
  • Acoustic transducers 120 on frame 110 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 115(A) and 115(B), or some combination thereof. Acoustic transducers 120 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 100. In some embodiments, an optimization process may be performed during manufacturing of augmented- reality system 100 to determine relative positioning of each acoustic transducer 120 in the microphone array.
  • augmented-reality system 100 may include or be connected to an external device (e.g., a paired device), such as neckband 105.
  • Neckband 105 generally represents any type or form of paired device.
  • the following discussion of neckband 105 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.
  • neckband 105 may be coupled to eyewear device 102 via one or more connectors.
  • the connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components.
  • eyewear device 102 and neckband 105 may operate independently without any wired or wireless connection between them. While FIG. 1 illustrates the components of eyewear device 102 and neckband 105 in example locations on eyewear device 102 and neckband 105, the components may be located elsewhere and/or distributed differently on eyewear device 102 and/or neckband 105. In some embodiments, the components of eyewear device 102 and neckband 105 may be located on one or more additional peripheral devices paired with eyewear device 102, neckband 105, or some combination thereof.
  • Pairing external devices, such as neckband 105, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities.
  • Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 100 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality.
  • neckband 105 may allow components that would otherwise be included on an eyewear device to be included in neckband 105 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads.
  • Neckband 105 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 105 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 105 may be less invasive to a user than weight carried in eyewear device 102, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
  • Neckband 105 may be communicatively coupled with eyewear device 102 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 100.
  • neckband 105 may include two acoustic transducers (e.g., 120(I) and 120(J)) that are part of the microphone array (or potentially form their own microphone subarray).
  • Neckband 105 may also include a controller 125 and a power source 135.
  • Acoustic transducers 120(I) and 120(J) of neckband 105 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital).
  • acoustic transducers 120(I) and 120(J) may be positioned on neckband 105, thereby increasing the distance between the neckband acoustic transducers 120(I) and 120(J) and other acoustic transducers 120 positioned on eyewear device 102.
  • increasing the distance between acoustic transducers 120 of the microphone array may improve the accuracy of beamforming performed via the microphone array.
  • the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 120(D) and 120(E).
  • Controller 125 of neckband 105 may process information generated by the sensors on neckband 105 and/or augmented-reality system 100. For example, controller 125 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 125 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 125 may populate an audio data set with the information. In embodiments in which augmented-reality system 100 includes an inertial measurement unit, controller 125 may compute all inertial and spatial calculations from the IMU located on eyewear device 102.
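  • The disclosure does not prescribe a particular DOA algorithm; the sketch below illustrates one common approach (time-difference-of-arrival taken from the peak of a cross-correlation between a microphone pair), where the sample rate, microphone spacing, and test signals are hypothetical values chosen for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def estimate_doa(sig_a, sig_b, sample_rate, mic_spacing_m):
    """Estimate a direction of arrival (radians from broadside) for a 2-mic pair.

    The lag of the peak of the cross-correlation between the two microphone
    signals is taken as the time difference of arrival (TDOA).
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # positive lag: sig_a is a delayed copy of sig_b
    tdoa = lag / sample_rate                   # seconds
    # Clamp the path-length difference to the physically possible range.
    path_diff = np.clip(tdoa * SPEED_OF_SOUND, -mic_spacing_m, mic_spacing_m)
    return np.arcsin(path_diff / mic_spacing_m)

# Hypothetical usage: a 10 ms, 1 kHz tone sampled at 48 kHz, arriving 5 samples later at mic A.
fs = 48_000
t = np.arange(0, 0.01, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)
sig_b = tone
sig_a = np.roll(tone, 5)
print(np.degrees(estimate_doa(sig_a, sig_b, fs, mic_spacing_m=0.15)))  # roughly 14 degrees
```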
  • a connector may convey information between augmented-reality system 100 and neckband 105 and between augmented-reality system 100 and controller 125.
  • the information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented-reality system 100 to neckband 105 may reduce weight and heat in eyewear device 102, making it more comfortable to the user.
  • Power source 135 in neckband 105 may provide power to eyewear device 102 and/or to neckband 105.
  • Power source 135 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage.
  • power source 135 may be a wired power source.
  • Including power source 135 on neckband 105 instead of on eyewear device 102 may help better distribute the weight and heat generated by power source 135.
  • some artificial-reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience.
  • Virtual-reality system 200 may include a front rigid body 202 and a band 204 shaped to fit around a user's head.
  • Virtual-reality system 200 may also include output audio transducers 206(A) and 206(B).
  • front rigid body 202 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an artificial-reality experience.
  • Artificial-reality systems may include a variety of types of visual feedback mechanisms.
  • display devices in augmented-reality system 100 and/or virtual-reality system 200 may include one or more liquid crystal displays (LCDs), light emitting diode (LED) displays, organic LED (OLED) displays, digital light processing (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen.
  • These artificial-reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error.
  • Some of these artificial-reality systems may also include optical subsystems having one or more lenses (e.g., conventional concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen.
  • optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay (to, e.g., the viewer's eyes) light.
  • optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
  • some of the artificial-reality systems described herein may include one or more projection systems.
  • display devices in augmented-reality system 100 and/or virtual-reality system 200 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through.
  • the display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial- reality content and the real world.
  • the display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc.
  • Artificial-reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
  • augmented-reality system 100 and/or virtual-reality system 200 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor.
  • An artificial-reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
  • the artificial-reality systems described herein may also include one or more input and/or output audio transducers.
  • Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer.
  • input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer.
  • a single transducer may be used for both audio input and audio output.
  • the artificial-reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system.
  • Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature.
  • Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance.
  • Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms.
  • Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
  • artificial-reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, artificial-reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world.
  • Artificial-reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.).
  • the embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
  • Some augmented-reality systems may map a user's and/or device's environment using techniques referred to as "simultaneous location and mapping" (SLAM).
  • SLAM mapping and location identifying techniques may involve a variety of hardware and software tools that can create or update a map of an environment while simultaneously keeping track of a user's location within the mapped environment.
  • SLAM may use many different types of sensors to create a map and determine a user's position within the map.
  • SLAM techniques may, for example, implement optical sensors to determine a user's location.
  • Radios, including WiFi, Bluetooth, global positioning system (GPS), cellular, or other communication devices, may also be used to determine a user's location relative to a radio transceiver or group of transceivers (e.g., a WiFi router or group of GPS satellites).
  • Acoustic sensors such as microphone arrays or 2D or 3D sonar sensors may also be used to determine a user's location within an environment.
  • Augmented-reality and virtual-reality devices (such as systems 100 and 200 of FIGS. 1 and 2, respectively) may incorporate any or all of these types of sensors to perform SLAM operations such as creating and continually updating maps of the user's current environment.
  • SLAM data generated by these sensors may be referred to as "environmental data" and may indicate a user's current environment.
  • This data may be stored in a local or remote data store (e.g., a cloud data store) and may be provided to a user's AR/VR device on demand.
  • the systems described herein may also include an eye-tracking subsystem designed to identify and track various characteristics of a user's eye(s), such as the user's gaze direction.
  • eye tracking may, in some examples, refer to a process by which the position, orientation, and/or motion of an eye is measured, detected, sensed, determined, and/or monitored.
  • the disclosed systems may measure the position, orientation, and/or motion of an eye in a variety of different ways, including through the use of various optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc.
  • An eye-tracking subsystem may be configured in a number of different ways and may include a variety of different eye-tracking hardware components or other computer-vision components.
  • an eye-tracking subsystem may include a variety of different optical sensors, such as two-dimensional (2D) or 3D cameras, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor.
  • a processing subsystem may process data from one or more of these sensors to measure, detect, determine, and/or otherwise monitor the position, orientation, and/or motion of the user's eye(s).
  • FIG. 3 is an illustration of an exemplary system 300 that incorporates an eyetracking subsystem capable of tracking a user's eye(s).
  • system 300 may include a light source 302, an optical subsystem 304, an eye-tracking subsystem 306, and/or a control subsystem 308.
  • light source 302 may generate light for an image (e.g., to be presented to an eye 301 of the viewer).
  • Light source 302 may represent any of a variety of suitable devices.
  • light source 302 can include a two-dimensional projector (e.g., an LCoS display), a scanning source (e.g., a scanning laser), or other device (e.g., an LCD, an LED display, an OLED display, an active-matrix OLED display (AMOLED), a transparent OLED display (TOLED), a waveguide, or some other display capable of generating light for presenting an image to the viewer).
  • the image may represent a virtual image, which may refer to an optical image formed from the apparent divergence of light rays from a point in space, as opposed to an image formed from the light ray's actual divergence.
  • optical subsystem 304 may receive the light generated by light source 302 and generate, based on the received light, converging light 320 that includes the image.
  • optical subsystem 304 may include any number of lenses (e.g., Fresnel lenses, convex lenses, concave lenses), apertures, filters, mirrors, prisms, and/or other optical components, possibly in combination with actuators and/or other devices.
  • the actuators and/or other devices may translate and/or rotate one or more of the optical components to alter one or more aspects of converging light 320.
  • various mechanical couplings may serve to maintain the relative spacing and/or the orientation of the optical components in any suitable combination.
  • eye-tracking subsystem 306 may generate tracking information indicating a gaze angle of an eye 301 of the viewer.
  • control subsystem 308 may control aspects of optical subsystem 304 (e.g., the angle of incidence of converging light 320) based at least in part on this tracking information. Additionally, in some examples, control subsystem 308 may store and utilize historical tracking information (e.g., a history of the tracking information over a given duration, such as the previous second or fraction thereof) to anticipate the gaze angle of eye 301 (e.g., an angle between the visual axis and the anatomical axis of eye 301).
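  • The disclosure does not specify how historical tracking information is used to anticipate the gaze angle; one plausible minimal approach is a linear extrapolation over a short history of samples, sketched below with a hypothetical window length and update rate.

```python
from collections import deque

import numpy as np

class GazePredictor:
    """Anticipate an upcoming gaze angle from a short history of samples.

    Uses a plain least-squares linear fit over the last few (timestamp, angle)
    samples; a real system might use a Kalman filter or an explicit saccade model.
    """

    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # (timestamp_s, gaze_angle_deg)

    def update(self, timestamp_s, gaze_angle_deg):
        self.history.append((timestamp_s, gaze_angle_deg))

    def predict(self, future_timestamp_s):
        if len(self.history) < 2:
            return self.history[-1][1] if self.history else None  # fall back to latest sample
        t, angle = zip(*self.history)
        slope, intercept = np.polyfit(t, angle, deg=1)
        return slope * future_timestamp_s + intercept

# Hypothetical usage with 100 Hz tracking updates and a gaze drifting at 50 deg/s.
predictor = GazePredictor()
for i in range(10):
    predictor.update(i * 0.01, 2.0 + 0.5 * i)
print(predictor.predict(0.11))  # ~7.5 degrees
```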
  • eye-tracking subsystem 306 may detect radiation emanating from some portion of eye 301 (e.g., the cornea, the iris, the pupil, or the like) to determine the current gaze angle of eye 301. In other examples, eye-tracking subsystem 306 may employ a wavefront sensor to track the current location of the pupil.
  • Any number of techniques can be used to track eye 301. Some techniques may involve illuminating eye 301 with infrared light and measuring reflections with at least one optical sensor that is tuned to be sensitive to the infrared light. Information about how the infrared light is reflected from eye 301 may be analyzed to determine the position(s), orientation(s), and/or motion(s) of one or more eye feature(s), such as the cornea, pupil, iris, and/or retinal blood vessels.
  • the radiation captured by a sensor of eye-tracking subsystem 306 may be digitized (i.e., converted to an electronic signal). Further, the sensor may transmit a digital representation of this electronic signal to one or more processors (for example, processors associated with a device including eye-tracking subsystem 306).
  • Eye-tracking subsystem 306 may include any of a variety of sensors in a variety of different configurations.
  • eye-tracking subsystem 306 may include an infrared detector that reacts to infrared radiation.
  • the infrared detector may be a thermal detector, a photonic detector, and/or any other suitable type of detector.
  • Thermal detectors may include detectors that react to thermal effects of the incident infrared radiation.
  • one or more processors may process the digital representation generated by the sensor(s) of eye-tracking subsystem 306 to track the movement of eye 301.
  • these processors may track the movements of eye 301 by executing algorithms represented by computer-executable instructions stored on non-transitory memory.
  • on-chip logic (e.g., an application-specific integrated circuit, or ASIC) may be used to perform at least portions of such algorithms.
  • eye-tracking subsystem 306 may be programmed to use an output of the sensor(s) to track movement of eye 301.
  • eye-tracking subsystem 306 may analyze the digital representation generated by the sensors to extract eye rotation information from changes in reflections.
  • eye-tracking subsystem 306 may use corneal reflections or glints (also known as Purkinje images) and/or the center of the eye's pupil 322 as features to track over time.
  • eye-tracking subsystem 306 may use the center of the eye's pupil 322 and infrared or near-infrared, non-collimated light to create corneal reflections. In these embodiments, eye-tracking subsystem 306 may use the vector between the center of the eye's pupil 322 and the corneal reflections to compute the gaze direction of eye 301.
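  • As a rough illustration of the pupil-center/corneal-reflection approach described above, the sketch below maps the 2D vector between the pupil center and a glint to a gaze direction through a simple affine calibration; the calibration coefficients are hypothetical and would normally be fit during a per-user calibration procedure such as the one described below.

```python
import numpy as np

def gaze_from_pupil_and_glint(pupil_px, glint_px, calib):
    """Map the pupil-center-to-glint vector (pixels) to a gaze direction (degrees).

    `calib` holds per-user coefficients (here a simple affine map) that would
    normally be fit while the user fixates known on-screen calibration points.
    """
    v = np.asarray(pupil_px, dtype=float) - np.asarray(glint_px, dtype=float)
    return calib["A"] @ v + calib["b"]  # (horizontal, vertical) gaze angles

# Hypothetical calibration: 0.2 degrees per pixel, no cross-coupling, no offset.
calib = {"A": np.array([[0.2, 0.0], [0.0, 0.2]]), "b": np.zeros(2)}
print(gaze_from_pupil_and_glint(pupil_px=(412, 305), glint_px=(400, 300), calib=calib))
# -> [2.4 1. ], i.e. roughly 2.4 degrees horizontally and 1 degree vertically
```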
  • the disclosed systems may perform a calibration procedure for an individual (using, e.g., supervised or unsupervised techniques) before tracking the user's eyes.
  • the calibration procedure may include directing users to look at one or more points displayed on a display while the eye-tracking system records the values that correspond to each gaze position associated with each point.
  • eye-tracking subsystem 306 may use two types of infrared and/or near-infrared (also known as active light) eye-tracking techniques: bright-pupil and dark-pupil eye tracking, which may be differentiated based on the location of an illumination source with respect to the optical elements used. If the illumination is coaxial with the optical path, then eye 301 may act as a retroreflector as the light reflects off the retina, thereby creating a bright pupil effect similar to a red-eye effect in photography. If the illumination source is offset from the optical path, then the eye's pupil 322 may appear dark because the retroreflection from the retina is directed away from the sensor.
  • bright-pupil tracking may create greater iris/pupil contrast, allowing more robust eye tracking with iris pigmentation, and may feature reduced interference (e.g., interference caused by eyelashes and other obscuring features).
  • Bright-pupil tracking may also allow tracking in lighting conditions ranging from total darkness to a very bright environment.
  • control subsystem 308 may control light source 302 and/or optical subsystem 304 to reduce optical aberrations (e.g., chromatic aberrations and/or monochromatic aberrations) of the image that may be caused by or influenced by eye 301.
  • control subsystem 308 may use the tracking information from eye-tracking subsystem 306 to perform such control.
  • control subsystem 308 may alter the light generated by light source 302 (e.g., by way of image rendering) to modify (e.g., pre-distort) the image so that the aberration of the image caused by eye 301 is reduced.
  • the disclosed systems may track both the position and relative size of the pupil (since, e.g., the pupil dilates and/or contracts).
  • the eye-tracking devices and components (e.g., sensors and/or sources) described herein may need to be calibrated for each individual user and/or eye.
  • the frequency range of the sensors may be different (or separately calibrated) for eyes of different colors and/or different pupil types, sizes, and/or the like.
  • the disclosed systems may track both eyes with and without ophthalmic correction, such as that provided by contact lenses worn by the user.
  • ophthalmic correction elements (e.g., adjustable lenses) may be directly incorporated into the artificial-reality systems described herein.
  • the color of the user's eye may necessitate modification of a corresponding eye-tracking algorithm.
  • eye-tracking algorithms may need to be modified based at least in part on the differing color contrast between a brown eye and, for example, a blue eye.
  • FIG. 4 is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 3.
  • an eye-tracking subsystem 400 may include at least one source 404 and at least one sensor 406.
  • Source 404 generally represents any type or form of element capable of emitting radiation.
  • source 404 may generate visible, infrared, and/or near-infrared radiation.
  • source 404 may radiate noncollimated infrared and/or near-infrared portions of the electromagnetic spectrum towards an eye 402 of a user.
  • Source 404 may utilize a variety of sampling rates and speeds.
  • the disclosed systems may use sources with higher sampling rates in order to capture fixational eye movements of a user's eye 402 and/or to correctly measure saccade dynamics of the user's eye 402.
  • any type or form of eye-tracking technique may be used to track the user's eye 402, including optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc.
  • Sensor 406 generally represents any type or form of element capable of detecting radiation, such as radiation reflected off the user's eye 402.
  • Examples of sensor 406 include, without limitation, a charge-coupled device (CCD), a photodiode array, a complementary metal-oxide-semiconductor (CMOS) based sensor device, and/or the like.
  • sensor 406 may represent a sensor having predetermined parameters, including, but not limited to, a dynamic resolution range, linearity, and/or other characteristic selected and/or designed specifically for eye tracking.
  • eye-tracking subsystem 400 may generate one or more glints.
  • a glint 403 may represent reflections of radiation (e.g., infrared radiation from source 404) off the user's eye 402.
  • glint 403 and/or the user's pupil may be tracked using an eye-tracking algorithm executed by a processor (either within or external to an artificial reality device).
  • an artificial-reality device may include a processor and/or a memory device in order to perform eye tracking locally and/or a transceiver to send and receive the data necessary to perform eye tracking on an external device (e.g., a mobile phone, cloud server, or other computing device).
  • FIG. 4 shows an example image 405 captured by an eye-tracking subsystem, such as eye-tracking subsystem 400.
  • image 405 may include both the user's pupil 408 and a glint 410 near it.
  • pupil 408 and/or glint 410 may be identified using an artificial-intelligence-based algorithm, such as a computer-vision-based algorithm.
  • image 405 may represent a single frame in a series of frames that may be analyzed continuously in order to track the eye 402 of the user. Further, pupil 408 and/or glint 410 may be tracked over a period of time to determine a user's gaze.
  • eye-tracking subsystem 400 may be configured to identify and measure the inter-pupillary distance (IPD) of a user.
  • eye-tracking subsystem 400 may measure and/or calculate the IPD of the user while the user is wearing the artificial reality system.
  • eye-tracking subsystem 400 may detect the positions of a user's eyes and may use this information to calculate the user's IPD.
  • the eye-tracking systems or subsystems disclosed herein may track a user's eye position and/or eye movement in a variety of ways.
  • one or more light sources and/or optical sensors may capture an image of the user's eyes.
  • the eye-tracking subsystem may then use the captured information to determine the user's inter-pupillary distance, interocular distance, and/or a 3D position of each eye (e.g., for distortion adjustment purposes), including a magnitude of torsion and rotation (i.e., roll, pitch, and yaw) and/or gaze directions for each eye.
  • infrared light may be emitted by the eye-tracking subsystem and reflected from each eye. The reflected light may be received or detected by an optical sensor and analyzed to extract eye rotation data from changes in the infrared light reflected by each eye.
  • the eye-tracking subsystem may use any of a variety of different methods to track the eyes of a user.
  • for example, a light source (e.g., infrared light-emitting diodes) may emit a dot pattern onto each eye of the user.
  • the eye-tracking subsystem may then detect (e.g., via an optical sensor coupled to the artificial reality system) and analyze a reflection of the dot pattern from each eye of the user to identify a location of each pupil of the user.
  • the eye-tracking subsystem may track up to six degrees of freedom of each eye (i.e., 3D position, roll, pitch, and yaw) and at least a subset of the tracked quantities may be combined from two eyes of a user to estimate a gaze point (i.e., a 3D location or position in a virtual scene where the user is looking) and/or an IPD.
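  • The sketch below illustrates one way such per-eye quantities might be combined: the gaze point is taken as the midpoint of the shortest segment between the two gaze rays, and the IPD as the distance between the eye centers. The eye positions and gaze directions used are hypothetical.

```python
import numpy as np

def estimate_gaze_point(left_eye, left_dir, right_eye, right_dir):
    """Midpoint of the shortest segment between the two gaze rays, or None if parallel."""
    o1, o2 = np.asarray(left_eye, float), np.asarray(right_eye, float)
    d1 = np.asarray(left_dir, float) / np.linalg.norm(left_dir)
    d2 = np.asarray(right_dir, float) / np.linalg.norm(right_dir)
    w0 = o1 - o2
    b, d, e = d1 @ d2, d1 @ w0, d2 @ w0
    denom = 1.0 - b * b
    if denom < 1e-9:                      # gaze rays (nearly) parallel: no finite vergence point
        return None
    s = (b * e - d) / denom               # parameter along the left gaze ray
    t = (e - b * d) / denom               # parameter along the right gaze ray
    return (o1 + s * d1 + o2 + t * d2) / 2.0

# Hypothetical eye centers (metres) and gaze directions converging about 0.5 m ahead.
left_eye, right_eye = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
ipd = np.linalg.norm(right_eye - left_eye)                    # ~0.064 m
gaze_point = estimate_gaze_point(left_eye, [0.032, 0.0, 0.5],
                                 right_eye, [-0.032, 0.0, 0.5])
print(ipd, gaze_point)                                        # 0.064 [0. 0. 0.5]
```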
  • the distance between a user's pupil and a display may change as the user's eye moves to look in different directions.
  • the varying distance between a pupil and a display as viewing direction changes may be referred to as "pupil swim" and may contribute to distortion perceived by the user as a result of light focusing in different locations as the distance between the pupil and the display changes.
  • measuring distortion at different eye positions and pupil distances relative to displays and generating distortion corrections for different positions and distances may allow mitigation of distortion caused by pupil swim by tracking the 3D position of a user's eyes and applying a distortion correction corresponding to the 3D position of each of the user's eyes at a given point in time.
  • knowing the 3D position of each of a user's eyes may allow for the mitigation of distortion caused by changes in the distance between the pupil of the eye and the display by applying a distortion correction for each 3D eye position. Furthermore, as noted above, knowing the position of each of the user's eyes may also enable the eye-tracking subsystem to make automated adjustments for a user's IPD.
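  • One way to realize this, sketched below under assumed data, is to store distortion corrections measured at a sparse set of 3D eye positions and select the correction nearest the tracked eye position at runtime; the calibration grid and the radial-polynomial form of the correction are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

# Hypothetical factory calibration: radial-distortion coefficients (k1, k2) measured
# at a small set of 3D eye positions (metres, relative to the display).
calib_positions = np.array([
    [0.000, 0.0, 0.018], [0.003, 0.0, 0.018], [-0.003, 0.0, 0.018],
    [0.000, 0.0, 0.022], [0.003, 0.0, 0.022], [-0.003, 0.0, 0.022],
])
calib_coeffs = np.array([
    [-0.020, 0.004], [-0.024, 0.005], [-0.016, 0.003],
    [-0.028, 0.006], [-0.032, 0.007], [-0.024, 0.005],
])

def correction_for_eye_position(eye_pos_m):
    """Pick the correction measured nearest to the tracked 3D eye position."""
    idx = np.argmin(np.linalg.norm(calib_positions - eye_pos_m, axis=1))
    return calib_coeffs[idx]

def apply_radial_correction(points_ndc, k):
    """Warp normalized image points with a radial polynomial (k1, k2) correction."""
    r2 = np.sum(points_ndc ** 2, axis=-1, keepdims=True)
    return points_ndc * (1.0 + k[0] * r2 + k[1] * r2 ** 2)

k = correction_for_eye_position(np.array([0.002, 0.0, 0.019]))
print(apply_radial_correction(np.array([[0.3, 0.4], [0.0, 0.0]]), k))
```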
  • a display subsystem may include a variety of additional subsystems that may work in conjunction with the eye-tracking subsystems described herein.
  • a display subsystem may include a varifocal subsystem, a scene-rendering module, and/or a vergence-processing module.
  • the varifocal subsystem may cause left and right display elements to vary the focal distance of the display device.
  • the varifocal subsystem may physically change the distance between a display and the optics through which it is viewed by moving the display, the optics, or both. Additionally, moving or translating two lenses relative to each other may also be used to change the focal distance of the display.
  • the varifocal subsystem may include actuators or motors that move displays and/or optics to change the distance between them.
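  • As a worked example (not taken from the disclosure) of how far such an actuator might need to move the display, the thin-lens relation gives the display-to-lens distance that places the virtual image at a desired depth; the 40 mm focal length below is a hypothetical eyepiece value.

```python
def display_distance_for_virtual_image(focal_length_m, virtual_distance_m):
    """Display-to-lens distance that places the virtual image at the requested depth.

    Thin-lens relation 1/f = 1/u + 1/v with the virtual image on the display side,
    so v = -virtual_distance_m and u = 1 / (1/f + 1/virtual_distance_m).
    """
    return 1.0 / (1.0 / focal_length_m + 1.0 / virtual_distance_m)

f = 0.040  # hypothetical 40 mm focal-length eyepiece
for target_m in (0.5, 1.0, 2.0):
    u = display_distance_for_virtual_image(f, target_m)
    print(f"virtual image at {target_m:.1f} m -> display {u * 1000:.1f} mm from the lens")
```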
  • This varifocal subsystem may be separate from or integrated into the display subsystem.
  • the varifocal subsystem may also be integrated into or separate from its actuation subsystem and/or the eye-tracking subsystems described herein.
  • the display subsystem may include a vergence-processing module configured to determine a vergence depth of a user's gaze based on a gaze point and/or an estimated intersection of the gaze lines determined by the eye-tracking subsystem.
  • Vergence may refer to the simultaneous movement or rotation of both eyes in opposite directions to maintain single binocular vision, which may be naturally and automatically performed by the human eye.
  • a location where a user's eyes are verged is where the user is looking and is also typically the location where the user's eyes are focused.
  • the vergence-processing module may triangulate gaze lines to estimate a distance or depth from the user associated with intersection of the gaze lines.
  • the depth associated with intersection of the gaze lines may then be used as an approximation for the accommodation distance, which may identify a distance from the user where the user's eyes are directed.
  • the vergence distance may allow for the determination of a location where the user's eyes should be focused and a depth from the user's eyes at which the eyes are focused, thereby providing information (such as an object or plane of focus) for rendering adjustments to the virtual scene.
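  • For symmetric fixation straight ahead, the triangulation reduces to a simple worked formula, depth ≈ (IPD / 2) / tan(vergence angle / 2); the sketch below evaluates it for a hypothetical 64 mm IPD.

```python
import math

def vergence_depth_m(ipd_m, vergence_angle_deg):
    """Vergence depth for symmetric fixation straight ahead.

    Each eye rotates inward by half the vergence angle, so
    depth = (IPD / 2) / tan(vergence_angle / 2).
    """
    half_angle_rad = math.radians(vergence_angle_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half_angle_rad)

print(vergence_depth_m(0.064, 7.3))  # ~0.50 m for a hypothetical 64 mm IPD
```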
  • the vergence-processing module may coordinate with the eye-tracking subsystems described herein to make adjustments to the display subsystem to account for a user's vergence depth.
  • the eye-tracking subsystem may obtain information about the user's vergence or focus depth and may adjust the display subsystem to be closer together when the user's eyes focus or verge on something close and to be farther apart when the user's eyes focus or verge on something at a distance.
  • the eye-tracking information generated by the above-described eye-tracking subsystems may also be used, for example, to modify various aspects of how different computer-generated images are presented.
  • a display subsystem may be configured to modify, based on information generated by an eye-tracking subsystem, at least one aspect of how the computer-generated images are presented. For instance, the computer-generated images may be modified based on the user's eye movement, such that if a user is looking up, the computer-generated images may be moved upward on the screen. Similarly, if the user is looking to the side or down, the computer-generated images may be moved to the side or downward on the screen. If the user's eyes are closed, the computer-generated images may be paused or removed from the display and resumed once the user's eyes are back open.
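  • A minimal sketch of such gaze-driven adjustment logic is shown below; the pixels-per-degree scale and the pause-on-blink behavior are illustrative assumptions rather than details taken from the disclosure.

```python
def adjust_render_offset(gaze_deg, eyes_open, pixels_per_degree=20.0):
    """Shift (or pause) computer-generated content based on eye-tracking output.

    gaze_deg is (horizontal, vertical) in degrees, positive meaning right/up;
    returns a pixel offset for the composited layer, or None to pause rendering.
    """
    if not eyes_open:
        return None                                # pause content until the eyes reopen
    dx = gaze_deg[0] * pixels_per_degree
    dy = -gaze_deg[1] * pixels_per_degree          # screen y typically grows downward
    return (dx, dy)

print(adjust_render_offset((2.0, -1.5), eyes_open=True))    # (40.0, 30.0)
print(adjust_render_offset((0.0, 0.0), eyes_open=False))    # None
```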
  • eye-tracking subsystems can be incorporated into one or more of the various artificial reality systems described herein in a variety of ways.
  • one or more of the various components of system 300 and/or eye-tracking subsystem 400 may be incorporated into augmented-reality system 100 in FIG. 1 and/or virtual-reality system 200 in FIG. 2 to enable these systems to perform various eye-tracking tasks (including one or more of the eye-tracking operations described herein).
  • Artificial-reality systems such as those described in relation to FIGS. 1-4, and other applicable systems and devices, may be "factory-calibrated" prior to distribution to operate according to certain intrinsic parameters (such as temperature) and certain extrinsic parameters (such as the location and orientation of various devices and/or sub-components of the system). However, these parameters may change during real-time use by an end-user, causing the system to behave in unexpected ways.
  • the exemplary system 500 depicted in FIG. 5 may provide online calibration to update system parameters in real-time.
  • the term "online calibration" may refer to using runtime sensor measurements to estimate, in real-time, varying parameters of the system.
  • the system 500 may update system parameters based on deformable body mechanics of the system.
  • the system 500 may, in certain embodiments, be implemented with, in communication with, and/or embodied as an artificial-reality system such as those described in relation to FIGS. 1-4, and the system 500 may include a wearable frame of a head-mounted display.
  • The discussion corresponding to FIG. 5 is an introduction to the system 500, and further details will be provided in conjunction with the description of the flow diagrams in FIGS. 6 and 7.
  • the online calibration subsystem 502 may receive and/or access sensor data from one or more sensors 522 coupled to a frame 518, transform the sensor data into one or more updated parameters based on the sensor data to account for varying real-time conditions during use of the frame 518, and output, store, and/or make available parameter updates to an image processing subsystem 524 (which processes images for display on the display 526) and/or an eye-tracking subsystem 528 for use in eye-tracking operations of a user of the frame 518.
  • the online calibration subsystem 502 may include a memory device 504 with modules 506 for performing one or more tasks and may include a data gathering module 508, a detection module 510, a determination module 512, and a compensation module 514. Although illustrated as separate elements, one or more of modules 506 in FIG. 5 may represent portions of a single module or application. As illustrated in FIG. 5, example system 500 may also include one or more physical processors, such as physical processor 516 that may access and/or modify one or more of modules 506 stored in memory 504.
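  • The sketch below shows one illustrative way the modules of system 500 could be organized in code; the function names, the single bend-angle deformation metric, and the hinge-style pose update are assumptions made for illustration and are not prescribed by the disclosure.

```python
import numpy as np

# Illustrative factory extrinsics: a 4x4 pose of each device 520 in the frame 518's
# reference, as might be fixed during factory calibration (identity used as a placeholder).
FACTORY_EXTRINSICS = {
    "left_display": np.eye(4),
    "eye_camera": np.eye(4),
}

def gather(sensor_readings):
    """Data gathering module 508: collect the latest reading from each sensor 522."""
    return dict(sensor_readings)

def detect_deformation(data, threshold_rad=1e-3):
    """Detection module 510: flag deformation when the estimated bend exceeds a threshold."""
    return abs(data.get("bend_angle_rad", 0.0)) > threshold_rad

def determine_updated_poses(data):
    """Determination module 512: apply the measured bend (here a rotation about the
    vertical axis, as when a temple flexes inward) to each device's factory pose."""
    a = data["bend_angle_rad"]
    c, s = np.cos(a), np.sin(a)
    bend = np.array([[c, 0.0, s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-s, 0.0, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])
    return {name: bend @ pose for name, pose in FACTORY_EXTRINSICS.items()}

def compensate(data):
    """Compensation module 514: return the parameters to hand to the image
    processing subsystem 524 and/or the eye-tracking subsystem 528."""
    if detect_deformation(data):
        return determine_updated_poses(data)
    return FACTORY_EXTRINSICS

# Hypothetical runtime reading indicating a ~0.6 degree flex of the frame.
readings = gather({"bend_angle_rad": 0.01})
print(compensate(readings)["eye_camera"][:3, :3])
```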
  • the sensors 522 may provide data related to devices 520 on a frame 518, such as a wearable frame of a head-mounted display, for the online calibration subsystem 502.
  • the image processing subsystem 524 processes images for display by the display 526 and may perform image processing operations on individual image frames (such as chromatic aberration correction, gamma correction and/or adjustment, multi-image blending and/or overlaying, image distortion, asynchronous time warping, asynchronous space warping, and the like).
  • the display 526 may be similar to the display devices 115 described in relation to FIG. 1 or a display of the system 200 in FIG. 2, although any suitable display for displaying images may be used.
  • the eye-tracking subsystem 528 may be similar to the systems 300, 400 described in relation to FIGS. 3-4.
  • the various subsystems 502, 524, 528 and the display 526 may also be coupled to the frame 518, and may, in fact, be embodied as devices 520 on the frame 518.
  • all or a portion of the functionality of each module may be implemented in the online calibration subsystem 502, as depicted, or in the image processing subsystem 524, the display 526, the eye-tracking subsystem 528, or other suitable location in an artificial-reality or other suitable system.
  • the data gathering module 508 may gather sensor data from the one or more sensors 522. Any suitable sensors 522 may be used.
  • the sensors 522 may include sensors such as those described in relation to FIGS. 1-4, and may include, without limitation, IMUs, optical sensors, or other sensors that provide data indicative of various conditions and/or parameters of devices 520 coupled to the frame, such as position data, temperature data, and the like.
  • the term "position" refers not only to a location in space but may also include orientation, denoting which direction a device 520 is facing.
  • the sensors 522 may include SLAM sensors providing SLAM data.
  • the data gathering module 508 may gather sensor data from multiple sensors 522 located in various locations on the frame 518.
  • the detection module 510 may detect deformation of the frame 518 using sensor data, including, in certain examples, SLAM data.
  • the determination module 512 may determine how the deformation changes a position of a device 520 coupled to the frame based on the deformation.
  • the determination module 512 may use SLAM data to determine how deformation of the frame 518 changes the position of the device 520.
  • the determination module 512 may use expected deformation patterns of the frame 518 during deformation to determine changed position of devices 520 on the frame 518.
  • the device may be a component of a display system and the compensation module 514 may compensate for the changed position of the component of the display system when processing an image to be displayed to a user of the wearable frame.
  • Pre-set factory calibration may assume certain distances between devices 520 on the frame 518.
  • the compensation module 514 may provide online calibration during actual use to improve the accuracy of any suitable system affected by this displacement or other changing parameters during use such as, without limitation, the display 526, eye-tracking subsystem 528, and the like.
  • the compensation module 514 sends and/or otherwise makes available updated parameter data to the image processing subsystem 524 and/or the eye-tracking subsystem 528 to account for frame deformation, temperature changes, or other runtime parameter changes in the display of an image or in eye-tracking operations.
  • FIG. 6 is a flow diagram of an exemplary computer-implemented method 600 for online calibration based on deformable body mechanics.
  • the steps shown in FIG. 6 may be performed by any suitable computer-executable code and/or computing system, including the system(s) illustrated in FIGS. 1-5.
  • each of the steps shown in FIG. 6 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below.
  • one or more of the systems described herein may detect deformation of a frame using at least one sensor.
  • the method may involve a variety of different frames.
  • the term "frame" refers to a support structure for various devices.
  • One example may include, without limitation, a wearable frame of a head-mounted display for an artificial-reality system like the systems of FIGS. 1-2, or the frame 802 embodied in the system of FIG. 8, which depicts exemplary augmented-reality glasses 800.
  • the frame 802 includes arms 806, a forward-facing side 804, and a nose bridge 810.
  • the wearable frame, in one embodiment, may be configured in any number of different ways and may include, house, and/or be coupled to any number or kind of devices.
  • the term "device” refers to an electronic or mechanical device, component, subcomponent, system, subsystem, sensor, and the like. Examples of devices may include, without limitation, an eye-tracking sensor, a display, a display sub-component such as a projector or a waveguide, an optical sensor, and other devices coupled to the frame that, in certain embodiments, are involved in displaying an image to the user, tracking the user's eyes, and the like.
  • the detection module 510 may, as part of the online calibration subsystem 502 of FIG. 5, detect deformation of a wearable frame that houses at least one component of a display system.
  • the component of the display system may be any number of display system components. Referring also to FIG. 8, which depicts a wearable frame 802 housing multiple display system components, examples of a display system component may be a left lens 808A, a right lens 808B, a left RGB optical sensor 815A, a right RGB optical sensor 815B, a left display projector 818A, and/or a right display projector 818B.
  • display system components may also include eye-tracking components such as eye-tracking optical sensors 820.
  • the wearable frame 802 in some examples, is configured in a "minimal" form factor to have an appearance of standard eyeglasses. As a result, in certain examples, the wearable frame 802 may be subject to deformation. In some examples, the wearable frame 802 is subject to elastic deformation. In certain examples, the term "elastic deformation” refers to temporary deformation such as bending or flexing where a shape returns substantially to an original form after the deformation.
  • the wearable frame 802 of FIG. 8 may be subject to deformation caused by torsion, or twisting caused by torque, in which the forward-facing side 804 may be twisted and the arms 806 may be displaced in substantially opposite directions (e.g., one down and the other up) from an initial position 902.
  • a deformation caused by torsion, in some examples, if not factored into the display of an image, may subject the user to vertical binocular disparity, which may cause discomfort and/or disorientation for the user.
  • referring to FIG. 10, showing a configuration 1000, the wearable frame 802 may be subject to deformation caused by flexing, in which the forward-facing side 804 and arms 806 may be bent outward toward the forward-facing side 804 from an initial position 1002.
  • a deformation caused by bending, in some examples, if not factored into the display of an image, may subject the user to horizontal binocular disparity, which may cause objects to appear at the wrong depth.
  • the wearable frame, in some examples, is not limited to a glasses form factor and may be embodied with any suitable form factor subject to other forms of deformation.
  • the detection module 510 may, in certain embodiments, detect deformation of the wearable frame 802 via at least one sensor. Detection module 510 may receive data from any suitable type or form of sensor.
  • the sensor may represent one or more of a variety of different sensing mechanisms, such as a position sensor, a depth camera assembly, an optical sensor, a SLAM sensor, or any combination thereof.
  • the sensor may be embodied as the sensors described above in relation to FIGS. 1-4 or the sensors depicted coupled to the wearable frame 802 of FIG. 8 such as left and right IMUs 816.
  • the left IMU 816A on the left arm 806A of the frame 802 may detect motion in a first direction and the right IMU 816B on the right arm 806B of the frame 802 may detect motion in a second direction that is opposite the first direction, indicative of the frame deforming in a flexing motion.
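  • As a hedged sketch only (assuming, for illustration, that each IMU reports an angular rate about a common frame axis, with sign conventions and the threshold invented here), opposing angular rates from the two arm-mounted IMUs could be classified as a likely flexing deformation:

    def classify_arm_motion(left_rate, right_rate, threshold=0.02):
        # rates in rad/s about an assumed common axis of the frame
        if abs(left_rate - right_rate) < threshold:
            return "rigid motion or no deformation"   # arms moving together
        if left_rate * right_rate < 0:
            return "likely flexing deformation"       # arms moving oppositely
        return "ambiguous"

    print(classify_arm_motion(0.05, -0.04))   # -> "likely flexing deformation"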
  • the sensor, in certain examples, may be a torque sensor that measures a torque of the wearable frame 802 to detect a deformation caused by torsion.
  • torque may refer to a force that produces rotation or torsion on a body.
  • the torque sensor may be any suitable mechanism that measures torque, such as a torsional spring or a rotation sensor.
  • the torque sensor may be a strain gauge or a set of strain gauges.
  • the sensor may be a flex sensor that measures a bend of the wearable frame 802.
  • the term "bend" may refer to a force that tends to produce a curve in a body.
  • the sensor may be an optical sensor that provides visual data which the detection module 510 uses to detect the deformation.
  • the sensor may be an optical sensor that detects changes in light output from a waveguide coupled to the wearable frame 802 (such as, for example, part of the display lenses 808 of the display device). The light output changes may be indicative of the deformation of the frame 802.
  • the detection module 510 may detect deformation using sensor data from an optical sensor that uses a marker as a frame of reference, for example by detecting a change in a position of the marker relative to the optical sensor.
  • the head-mounted display 1102 may include at least one marker 1110 visible to an optical sensor 1108 coupled to the head-mounted display 1102.
  • the marker 1110 may act as a reference point to the optical sensor 1108.
  • the detection module 510 may detect a change in a relative position between the optical sensor 1108 and the marker 1110 and detect deformation of the wearable frame 802 in response to detecting the change in the relative position.
  • the marker 1110 may be invisible to the user (in visible light).
  • the marker 1110 may be an infrared ("IR") fiducial on a surface of the lens 1106.
  • An optical sensor 1108, such as an eye-tracking optical sensor, may track the relative pose between the optical sensor 1108 and the lens 1106 by detecting relative changes in position in relation to the marker 1110.
  • multiple fiducials may be placed on a surface of the lens 1106 in view of an optical sensor 1108, which may face away from the forward-facing side 1104.
  • the fiducials may be passive (printed on) or be active (illuminated light emitting diodes ("LEDs”) or vertical cavity surface emitting lasers (“VCSELs”) for example).
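  • A minimal sketch, assuming a single fiducial whose image-plane centroid was recorded at factory calibration (the coordinates and threshold below are invented values), might flag deformation when the fiducial drifts relative to the eye-tracking camera:

    import math

    BASELINE_MARKER_PX = (412.0, 268.0)   # assumed centroid at factory calibration

    def marker_shift_px(detected_px, baseline_px=BASELINE_MARKER_PX):
        dx = detected_px[0] - baseline_px[0]
        dy = detected_px[1] - baseline_px[1]
        return math.hypot(dx, dy)

    def deformation_detected(detected_px, threshold_px=3.0):
        # a shift of the lens-mounted fiducial relative to the camera suggests
        # the lens and the camera have moved relative to each other
        return marker_shift_px(detected_px) > threshold_px

    print(deformation_detected((416.5, 270.1)))   # -> True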
  • the sensor may be a SLAM sensor that gathers SLAM data, and the detection module 510 may use SLAM data from at least one SLAM sensor to detect the deformation.
  • SLAM data may identify a position of the corresponding SLAM sensor.
  • the detection module 510 may compare expected positions of the SLAM sensors with actual positions of the SLAM sensors to detect a deformation of the frame 802. Any suitable type or number of SLAM sensors may be used.
  • the depicted example embodiment in FIG. 8 includes left and right-side SLAM optical sensors 812 and front SLAM optical sensors 814.
  • the one or more sensors may be located in any number of locations on the wearable frame 802 in any suitable configuration.
  • the wearable frame of FIG. 8 includes sensors on the forward-facing side 804, on the brow area, and on the arms 806.
  • the wearable frame 1202 may include a sensor 1204 in the nose bridge 1206, which may be near lens 1208A and/or lens 1208B.
  • the torque sensor or flex sensor described above may be located in the nose bridge.
  • the step of detecting deformation of the wearable frame may also include detecting an amount of deformation.
  • the detection module 510 may also determine an amount of deformation based on the sensor data obtained from the one or more sensors.
  • the amount of deformation may include, in one example, a magnitude of deformation, a direction of deformation, and/or a type of deformation.
  • the detection module 510 may detect an amount of deformation in any number of ways.
  • SLAM sensor data from a left SLAM optical sensor 812A and a right SLAM optical sensor 812B may indicate that the left arm 806A of the wearable frame 802 is offset a certain distance from the right arm 806B of the wearable frame 802, indicating a deformation of a certain magnitude and direction.
  • the torque sensor may measure an amount of torque the frame 802 is subject to.
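  • Purely for illustration (the gain constant and sign convention below are assumptions of the kind a manufacturer might characterize at the factory), a raw torque-sensor reading could be converted into a magnitude, direction, and type of deformation:

    GAIN_DEG_PER_COUNT = 0.004    # assumed factory-characterized constant

    def deformation_amount(raw_counts):
        angle_deg = raw_counts * GAIN_DEG_PER_COUNT
        return {
            "type": "torsion",
            "magnitude_deg": abs(angle_deg),
            "direction": "clockwise" if angle_deg > 0 else "counterclockwise",
        }

    print(deformation_amount(-250))   # -> 1.0 degree of counterclockwise torsion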
  • one or more of the systems described herein may determine how the deformation changes a position of a device of the wearable frame.
  • the determination module 512 may, as part of the online calibration subsystem 502 of FIG. 5, determine how the deformation changes a position of the component of the display system.
  • Deformations of the wearable frame 802 may cause devices coupled to the wearable frame to change their positions relative to other devices, sensors, and portions of the wearable frame 802.
  • a change in a position of one component of the display system relative to another component of the display system, the eye-tracking system, and/or even the display device itself may cause an undesirable effect, such as an image distortion, to the user of the wearable frame.
  • the determination module 512 may determine how the deformation changes the position of the component of the display system in a number of ways. In one embodiment, the determination module 512 determines how the deformation changes the position of the component of the display system based at least in part on identifying an expected deformation pattern of the wearable frame.
  • An expected deformation pattern, in some examples, is an estimated pattern or movement that the frame may make when deformed.
  • the expected deformation pattern may be determined in any number of ways, including, without limitation, prior observation and/or by using a model that represents the deformable body mechanics of the frame.
  • the term "deformable body mechanics" may refer to expected deformation patterns of a frame.
  • the wearable frame 802 may deform in specific, predictable, reoccurring patterns. Therefore, the locations of display components on the wearable frame 802 may likewise change positions, when deformed, in a predictable manner, depending on the amount of deformation.
  • the determination module 512 may use a model that estimates movement and changes in position of display components, depending on the amount of deformation as indicated by the sensor data and/or detection module 510, to determine updated locations of display components on the wearable frame 802.
  • the determination module 512 may, using the expected deformation patterns based on the magnitude, direction, and/or type of the deformation, determine the changed position of the component of the display system.
  • the detection module 510 may detect a deformation of the wearable frame 802 caused by torsion (in response to analyzing sensor data as described above) and may determine an amount of deformation.
  • the determination module 512 may reference a model of the expected deformation pattern to obtain the changed position of display components, such as the display lenses 808 and/or the display projectors 818.
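  • One hedged way to represent such a model is a small table of deformation amounts and corresponding device offsets, interpolated at runtime; the table values below are invented for illustration and are not measured data:

    import bisect

    # torsion angle (deg) -> vertical offset (mm) of one display projector
    TORSION_TABLE = [(0.0, 0.00), (0.5, 0.08), (1.0, 0.17), (2.0, 0.35)]

    def projector_offset_mm(torsion_deg):
        angles = [a for a, _ in TORSION_TABLE]
        i = bisect.bisect_left(angles, torsion_deg)
        if i == 0:
            return TORSION_TABLE[0][1]
        if i == len(TORSION_TABLE):
            return TORSION_TABLE[-1][1]
        (a0, o0), (a1, o1) = TORSION_TABLE[i - 1], TORSION_TABLE[i]
        t = (torsion_deg - a0) / (a1 - a0)
        return o0 + t * (o1 - o0)     # linear interpolation between samples

    print(round(projector_offset_mm(0.75), 3))   # -> 0.125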
  • the expected deformation patterns may be predetermined for use in real-time by an end-user of the wearable frame 802.
  • a manufacturer of the wearable frame 802 may create one or more models based on observation and testing, the material and shape of the frame, certain temperatures, and the like. Certain patterns of deformation corresponding to various levels of deformation may be recorded in the model.
  • the one or more models may be stored, in certain examples, for access by the online calibration subsystem 502 during use in the field.
  • one or more of the systems described herein may compensate for the changed position of the component of the display system in its various operations.
  • the compensation module 514 may, as part of the online calibration subsystem 502 of FIG. 5, compensate for the changed position of the component of the display system when processing an image on the display to be displayed to a user of the wearable frame.
  • Examples of the compensation module 514 may compensate for the changed position of the component of the display system in a variety of different ways and at a variety of different stages of image processing. As stated above, in certain embodiments, all or a portion of the functionality of the compensation module 514 may be implemented in the online calibration subsystem 502, as depicted, or in the image processing subsystem 524, the display itself, the eye-tracking subsystem 528, or other suitable locations in an artificial-reality or other suitable system. For example, the compensation module 514 may output updated parameters (e.g., parameters that may describe changed positions of display components) that the image processing subsystem 524 uses to process the image. In this example, the updated parameters may supplement or modify portions of sensor data used to configure an image for display.
  • for example, the detection module 510 may detect a frame deformation, and the determination module 512 may determine, in real-time, that a lens of the display system has shifted slightly to one side in relation to a display projector.
  • the compensation module 514 may compensate for the changed position of the lens by adjusting the system parameter that defines the position of the lens in relation to the display projector to account for the change in position.
  • the adjusted parameter may then be received by the image processing subsystem 524 which uses the updated parameter and any pre-defined parameters to process an image to the user.
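  • The following sketch, with hypothetical parameter names and a placeholder rendering call (neither is the actual interface of the image processing subsystem 524), shows the kind of hand-off described above between the compensation module and an image-processing step:

    FACTORY_EXTRINSICS = {"left_lens_to_projector_mm": (0.0, 0.0, 12.0)}

    def compensate_lens_shift(extrinsics, shift_mm):
        x, y, z = extrinsics["left_lens_to_projector_mm"]
        updated = dict(extrinsics)
        updated["left_lens_to_projector_mm"] = (x + shift_mm[0],
                                                y + shift_mm[1],
                                                z + shift_mm[2])
        return updated

    def render_with_params(frame_id, extrinsics):
        # placeholder for the image processing subsystem consuming the update
        return f"frame {frame_id} rendered with {extrinsics['left_lens_to_projector_mm']}"

    updated = compensate_lens_shift(FACTORY_EXTRINSICS, (0.2, 0.0, 0.0))
    print(render_with_params(42, updated))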
  • the compensation module 514 may compensate for the change in position of the component of the display system in any suitable stage of image processing such as in a pre-render operation, render operation, or post-render operation of an image frame of a sequence of image frames that are processed by the image processing subsystem 524 and ultimately displayed by the display 808.
  • the compensation module 514 may update one or more parameters that dictate image rendering such that the resulting image frame accounts for the deformation. For example, referring also to FIG. 8, the orientations of the left and right RGB ("Red Green Blue") optical sensors 815, which capture images for rendering into a resulting display image, may each shift slightly inward as a result of a flex deformation.
  • the detection module 510 and determination module 512 may detect a deformation and determine a changed position of the various components of the display system, respectively, including the left and right RGB optical sensors 815.
  • the compensation module 514 may send updated parameters to the image processing subsystem 524 to reposition the image frame representing the moment in which the RGB optical sensors 815 changed positions, because the position change was caused by a frame deformation rather than a bona fide head movement of the user.
  • the compensation module 514 may compensate for the change in position as part of a warp engine process responsible for time and space warping. For example, the compensation module 514 may feed updated parameters into the warp engine process which performs such time and space warping operations.
  • An image frame may be modified prior to display through operations such as asynchronous time warping or asynchronous space warping.
  • the compensation module 514 may account for the changed position of a component of a display system when processing an image to be displayed by updating an existing image frame based on the changed position of the component and displaying the updated existing image frame (asynchronous time warping) or by modifying an object and/or scenery in an image frame based on the changed position of the component (asynchronous space warping).
  • the compensation module 514 may account for the shift by slightly rotating one or more image frames (that have already been rendered) to account for the change in position prior to being displayed to the user. By compensating for the changed position of the display system component when processing the image, the compensation module 514 helps ensure a smoother user experience for the user of the wearable frame.
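  • As a non-authoritative sketch of the rotation-style correction mentioned above (the image size, angle, and the purely 2D treatment are simplifying assumptions), a warp stage could be handed a small corrective transform built about the image center:

    import numpy as np

    def roll_correction_homography(roll_deg, width_px, height_px):
        # rotate about the image center by the opposite of the measured roll
        theta = np.deg2rad(-roll_deg)
        cx, cy = width_px / 2.0, height_px / 2.0
        c, s = np.cos(theta), np.sin(theta)
        to_center = np.array([[1.0, 0.0, -cx], [0.0, 1.0, -cy], [0.0, 0.0, 1.0]])
        rotation = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        back = np.array([[1.0, 0.0, cx], [0.0, 1.0, cy], [0.0, 0.0, 1.0]])
        return back @ rotation @ to_center

    H = roll_correction_homography(0.3, 1920, 1080)
    print(np.round(H, 4))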
  • the component of the display system is an eye-tracking component.
  • the compensation module 514 may compensate for the changed position by accounting for the changed position of the eye-tracking component in performing eye-tracking of a user of the wearable frame.
  • the eye-tracking subsystem 528 may have been previously calibrated to assume certain distances between an eye-tracking sensor 820, the display lenses 808, and other devices.
  • when the frame deforms, the eye-tracking subsystem 528 may incorrectly assume the user has moved their eye, and an image may be projected in an incorrect position on the user's eye.
  • the compensation module 514 may update one or more parameters specifying a new position for the right eye-tracking sensor 820B such that the eye-tracking subsystem 528 correctly tracks the user's eye and the image is projected in the intended location on the user's eye.
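  • A toy sketch (the gaze model, gain, and yaw values are invented and far simpler than a real eye tracker) illustrates why updating the camera-pose parameter matters: the same pupil pixel maps to a different gaze angle once the camera pose is corrected for deformation.

    def estimate_gaze_deg(pupil_x_px, camera_yaw_deg, image_center_px=320.0,
                          deg_per_px=0.05):
        # toy model: pixel offset from center plus camera yaw gives a gaze angle
        return deg_per_px * (pupil_x_px - image_center_px) + camera_yaw_deg

    factory_yaw_deg = 10.0
    corrected_yaw_deg = factory_yaw_deg + 0.4   # pose from online calibration

    print(estimate_gaze_deg(300, factory_yaw_deg))     # stale parameter
    print(estimate_gaze_deg(300, corrected_yaw_deg))   # deformation-aware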
  • FIG. 7 is a flow diagram of an exemplary computer-implemented method 700 for online calibration based on deformable body mechanics.
  • the steps shown in FIG. 7 may be performed by any suitable computer-executable code and/or computing system, including the system(s) illustrated in FIGS. 1-5.
  • each of the steps shown in FIG. 7 may represent an algorithm whose structure includes and/or is represented by multiple sub-steps, examples of which will be provided in greater detail below.
  • one or more of the systems described herein may gather SLAM sensor data that may be used for online calibration.
  • the data gathering module 508 may, as part of the online calibration subsystem 502 of FIG. 5, gather, via a sensor coupled to a frame, SLAM data that indicates a position, within an environment, of a device coupled to the frame.
  • the method may involve a variety of different frames.
  • the frame may serve as a support structure for one or more devices.
  • Examples of frames may include, without limitation, a wearable frame of a head-mounted display for an artificial-reality system similar to the systems of FIGS. 1-2, or the frame 802 embodied in the system of FIG. 8 as described above.
  • the frame may be any suitable frame having at least one sensor and at least one device coupled thereon.
  • the sensor may be any suitable sensor that collects, records, transmits, and/or gathers SLAM data and may be similar to the SLAM sensors described above in relation to FIGS. 1, 2, 5, and 6.
  • the device may be any number of devices such as those described above in relation to FIG. 6 and may include, without limitation, IMUs, optical sensors, eye-tracking components, components of a display system, a display device, and the like.
  • the environment may be a virtual or real-world environment mapped by SLAM techniques using SLAM sensor data in which the SLAM techniques maintain a position of a user or device.
  • the data gathering module 508 may gather SLAM data in a variety of ways.
  • the data gathering module 508 may gather SLAM data from sensors located on various parts of the frame. The SLAM data from one sensor may then be compared with SLAM data from another sensor. Specifically, referring also to FIG. 8, the data gathering module 508, as part of step 702, may gather SLAM data with a first sensor coupled to the frame 802 in a first location on the frame 802 ("first" SLAM data) and gather SLAM data with a second sensor coupled to the frame 802 in a second location on the frame 802 ("second" SLAM data) where the second location is different from the first location. For example, in the example augmented-reality glasses of FIG. 8, the data gathering module 508 may receive sensor output from the left SLAM optical sensor 812A in addition to receiving sensor output from the right SLAM optical sensor 812B.
  • one or more of the systems described herein may detect deformation of the frame 802.
  • the detection module 510 may, as part of the online calibration subsystem 502 of FIG. 5, detect deformation of the frame 802 based at least in part on the SLAM data.
  • the frame 802, in some examples, may be subject to deformation during real-world use by an end-user.
  • the frame may, in some examples, be configured with a "minimal" form factor to have an appearance of standard eyeglasses.
  • the frame 802 may be subject to deformation caused by torsion or flexing as described above in relation to FIGS. 9-10.
  • the detection module 510 may compare the first and second SLAM data with first and second baseline SLAM data (e.g., SLAM data collected while the frame 802 is not deformed) and detect a difference between the first and second SLAM data and the first and second baseline SLAM data, which indicates that the first location on the frame 802 has changed relative to the second location on the frame 802 due to the deformation of the frame 802.
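  • A hedged sketch of such a comparison is shown below; the sensor positions are assumed to already be expressed in the headset's own body frame (so head motion cancels out), and the baseline offset and tolerance are invented values:

    import numpy as np

    BASELINE_LEFT_TO_RIGHT_M = np.array([0.140, 0.000, 0.000])   # assumed

    def frame_deformed(left_pos_m, right_pos_m, tolerance_m=0.002):
        current = np.asarray(right_pos_m) - np.asarray(left_pos_m)
        return bool(np.linalg.norm(current - BASELINE_LEFT_TO_RIGHT_M) > tolerance_m)

    # the left arm sits ~3 mm lower than at calibration -> deformation detected
    print(frame_deformed([0.0, 0.003, 0.0], [0.140, 0.0, 0.0]))   # -> True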
  • the detection module 510 may determine that data from the left SLAM optical sensor 812A and data from the right SLAM optical sensor 812B, when compared with baseline SLAM data from both SLAM optical sensors 812, indicates that the left arm 806A is lower in relation to the right arm 806B than in a typical neutral pose of the frame 802.
  • in addition to the SLAM sensor data, the detection module 510 may, as described above in relation to FIG. 6, use sensor data from various other sensors to detect deformation, including, without limitation, a torque sensor, a flex sensor, an optical sensor detecting light output from a waveguide, and an optical sensor using a frame-of-reference marker such as a fiducial.
  • the detection module 510 may also determine an amount of deformation based on the sensor data obtained from the one or more SLAM sensors.
  • the amount of deformation may include, in one example, a magnitude of deformation, a direction of deformation, and/or a type of deformation.
  • one or more of the systems described herein may determine how deformation of the frame changes position of a particular device.
  • the determination module 512 may, as part of the online calibration subsystem 502 of FIG. 5, use the SLAM data to determine how deformation of the frame 802 changes the position of the device relative to the environment.
  • Determining how deformation of the frame 802 changes the position of the device may be performed in a variety of ways.
  • the step 706 of using the SLAM data to determine how deformation of the frame 802 changes the position of a device includes comparing the measured deformation of the frame 802 with expected deformation patterns of the frame 802 during deformation. For example, referring to FIGS. 9-10, the frame 802 may deform in specific, predictable, reoccurring patterns. Therefore, the locations of devices on the frame 802 may likewise change positions in a predictable manner.
  • the determination module 512 may use a model that estimates movement and changes in position of devices, depending on the amount of deformation as indicated by the sensor data and/or detection module 510, to determine how deformation of the frame 802 changes the position of the device.
  • one or more of the systems described herein may compensate for the changed position of the device.
  • the compensation module 514 may, as part of the online calibration subsystem 502 of FIG. 5, compensate for the changed position of the device relative to the environment.
  • examples of the compensation module 514 may compensate for the changed position of the device in a variety of different ways and for a variety of different devices.
  • the compensation module 514 may compensate for movement of components related to the audio system by adjusting audio in one or more output audio transducers that produce audio for the user.
  • the compensation module 514 may compensate for a change in position of a device related to operation of the display when processing an image to be displayed on the display as described above in relation to FIG. 6.
  • the device may be a display and the compensation module 514 may compensate for the changed position of the display by accounting for the changed position of the device relative to the environment in displaying an image on the display such that the image compensates for the deformation of the frame.
  • a deformation caused by torsion of the frame 802 may cause each of the display lenses 808 to slightly turn in opposite directions relative to other frame devices and to the user's eyes.
  • the compensation module 514 may compensate for the changed position of the display 808 by adjusting position parameters for the display lenses 808 for the image processing subsystem 524 to use rather than the default position parameters when processing an image for the display.
  • the adjusted parameters may, for example, independently adjust the images that will be displayed through each display lens 808 to account for the changed position.
  • the device may be an eye-tracking component and the compensation module may compensate for the changed position of the device relative to the environment by accounting for the changed position of the device relative to the environment in performing eye-tracking of a user wearing a head-mounted display coupled to the frame, such that the eye-tracking compensates for the deformation of the frame.
  • the compensation module 514 may update position parameters for various eye-tracking components such as eye-tracking optical sensors or projectors and provide them to the eye-tracking subsystem 528 such that the eye-tracking subsystem 528 is correctly calibrated for eye-tracking operations.
  • online calibration of an artificial-reality or other suitable system allows for using runtime sensor measurements to estimate, in real-time and in actual use by an end-user, intrinsic and extrinsic parameters of the system. This allows for greater accuracy in estimating the position and/or orientation of devices on a frame of an artificial-reality system, thereby providing greater accuracy in displaying images to the user and performing eye-tracking operations on the user's eyes.
  • Certain artificial-reality systems may have form factors that are not rigid enough to prevent flexing and torsion, causing frame components to move relative to each other during use.
  • the online calibration process may account for frame deformation in the artificial-reality system by applying deformable body mechanics of the system to estimate positions of devices on the frame, thereby enhancing the user's experience by minimizing vertical and horizontal binocular disparity.
  • a computer-implemented method may include (i) detecting, via at least one sensor, deformation of a wearable frame that houses at least one component of a display system, (ii) determining how the deformation changes a position of the component of the display system, and (iii) compensating for the changed position of the component of the display system when processing an image to be displayed to a user of the wearable frame.
  • Example 2 The computer-implemented method of Example 1, where determining how the deformation changes the position of the component of the display system is based at least in part on identifying an expected deformation pattern of the wearable frame.
  • Example 3 The computer-implemented method of any of Examples 1-2, where compensating for the changed position of the component of the display system occurs in one or more of a pre-render operation, render operation, or post-render operation of an image frame of a sequence of image frames.
  • Example 4 The computer-implemented method of any of Examples 1-3, where compensating for the changed position includes at least one of (1) updating an existing image frame based on the changed position of the component and displaying the updated existing image frame, or (2) modifying one or more of an object or scenery in an image frame based on the changed position of the component.
  • Example 5 The computer-implemented method of any of Examples 1-4, where the component of the display system is an eye-tracking component and where compensating for the changed position of the component of the display system includes accounting for the changed position of the eye-tracking component in performing eye-tracking of a user of the wearable frame.
  • Example 6 The computer-implemented method of any of Examples 1-5, where the sensor is at least one of (1) a torque sensor that measures a torque of the wearable frame, or (2) a flex sensor that measures a bend of the wearable frame.
  • Example 7 The computer-implemented method of any of Examples 1-6, where the sensor is an optical sensor and where detecting deformation of the wearable frame includes detecting, via the optical sensor, a change in light output from a waveguide coupled to the wearable frame, the light output change indicative of the deformation of the wearable frame.
  • Example 8 The computer-implemented method of any of Examples 1-7, where the sensor is an optical sensor and where detecting deformation of the wearable frame includes detecting, via the optical sensor, a change in a position of a marker relative to the optical sensor.
  • Example 9 The computer-implemented method of any of Examples 1-8, where the sensor is a simultaneous location and mapping (“SLAM”) sensor that gathers SLAM data, where the detection of the deformation of the wearable frame is based at least in part on the SLAM data.
  • Example 10 The computer-implemented method of any of Examples 1-9, where the sensor is at least one of an inertial measurement unit (IMU) or an optical sensor.
  • a computer-implemented method may include (i) gathering, via a sensor coupled to a frame, simultaneous location and mapping ("SLAM") data that indicates a position, within an environment, of a device coupled to the frame, (ii) detecting, based at least in part on the SLAM data, deformation of the frame, (iii) using the SLAM data to determine how deformation of the frame changes the position of the device relative to the environment, and (iv) compensating for the changed position of the device relative to the environment.
  • Example 12 The computer-implemented method of Example 11, where using the SLAM data to determine how deformation of the frame changes the position of the device is based at least in part on identifying an expected deformation pattern of the frame.
  • Example 13 The computer-implemented method of any of Examples 11-12, where gathering, via the sensor coupled to the frame, further includes (1) gathering first SLAM data with a first sensor coupled to the frame in a first location on the frame, (2) gathering second SLAM data with a second sensor coupled to the frame in a second location on the frame, the second location different from the first location, and where detecting, based at least in part on the SLAM data, deformation of the frame further includes (1) comparing the first and second SLAM data with first and second baseline SLAM data, and (2) detecting a difference between the first and second SLAM data and the first and second baseline SLAM data indicating that the first location on the frame has changed relative to the second location on the frame due to the deformation of the frame.
  • Example 14 The computer-implemented method of any of Examples 11-13, where the sensor includes at least one of an inertial measurement unit (IMU) or an optical sensor.
  • Example 15 The computer-implemented method of any of Examples 11-14, where the device includes a display and where compensating for the changed position of the device relative to the environment includes accounting for the changed position of the device relative to the environment in displaying an image on the display such that the image compensates for the deformation of the frame.
  • Example 16 The computer-implemented method of any of Examples 11-15, where the device includes an eye-tracking component and where compensating for the changed position of the device relative to the environment includes accounting for the changed position of the eye-tracking component relative to the environment in performing eye-tracking of a user wearing a head-mounted display coupled to the frame, such that the eye-tracking compensates for the deformation of the frame.
  • Example 17 The computer-implemented method of any of Examples 11-16, where the deformation of the wearable frame is detected based in part on at least one of: (1) a torque sensor that measures a torque of the frame, or (2) a flex sensor that measures a bend of the frame.
  • Example 18 The computer-implemented method of any of Examples 11-17, wherein the frame includes a wearable frame of a head-mounted display.
  • a system may include (i) a frame, (ii) at least one sensor coupled to the frame, the sensor gathering simultaneous location and mapping (“SLAM") data, (iii) a device coupled to the frame, the SLAM data indicating a position of the device within an environment, and (iv) an online calibration subsystem that (1) detects, based at least in part on the SLAM data, deformation of the frame, and (2) uses the SLAM data to determine how deformation of the frame changes the position of the device relative to the environment.
  • Example 20 The system of Example 19, where the frame includes a wearable frame of a head-mounted display.
  • computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein.
  • these computing device(s) may each include at least one memory device and at least one physical processor.
  • the term "memory device” generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions.
  • a memory device may store, load, and/or maintain one or more of the modules described herein.
  • Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
  • the term "physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions.
  • a physical processor may access and/or modify one or more modules stored in the above-described memory device.
  • Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
  • modules described and/or illustrated herein may represent portions of a single module or application.
  • one or more of these modules may represent one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks.
  • one or more of the modules described and/or illustrated herein may represent modules stored and configured to run on one or more of the computing devices or systems described and/or illustrated herein.
  • One or more of these modules may also represent all or portions of one or more special-purpose computers configured to perform one or more tasks.
  • one or more of the modules described herein may transform data, physical devices, and/or representations of physical devices from one form to another.
  • one or more of the modules recited herein may receive sensor data to be transformed (which indicates a position, within an environment, of a device coupled to the frame), transform the sensor data by detecting deformation of a frame using the sensor data, output a result of the transformation as the detected frame deformation, use the result of the transformation to determine how deformation of the frame changes the position of the device relative to an environment, and store the result of the transformation as updated parameters to record the changed position of the device.
  • one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form to another by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
  • the term "computer-readable medium” generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions.
  • Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosed computer-implemented method for online calibration based on deformable body mechanics may include (i) detecting, via at least one sensor, deformation of a wearable frame that houses at least one component of a display system, (ii) determining how the deformation changes a position of the component of the display system, and (iii) compensating for the changed position of the component of the display system when processing an image to be displayed to a user of the wearable frame. In one embodiment, the sensor includes a simultaneous location and mapping ("SLAM") sensor gathering SLAM data, and detection of the deformation of the wearable frame is based at least in part on the SLAM data. Various other methods, systems, and computer-readable media are also disclosed.
PCT/GR2021/000057 2021-09-06 2021-09-06 Étalonnage en ligne basé sur la mécanique de corps déformable WO2023031633A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GR2021/000057 WO2023031633A1 (fr) 2021-09-06 2021-09-06 Étalonnage en ligne basé sur la mécanique de corps déformable

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/GR2021/000057 WO2023031633A1 (fr) 2021-09-06 2021-09-06 Étalonnage en ligne basé sur la mécanique de corps déformable

Publications (1)

Publication Number Publication Date
WO2023031633A1 true WO2023031633A1 (fr) 2023-03-09

Family

ID=78135015

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GR2021/000057 WO2023031633A1 (fr) 2021-09-06 2021-09-06 Étalonnage en ligne basé sur la mécanique de corps déformable

Country Status (1)

Country Link
WO (1) WO2023031633A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150316767A1 (en) * 2014-05-01 2015-11-05 Michael John Ebstyne 3d mapping with flexible camera rig
US20190287493A1 (en) * 2018-03-15 2019-09-19 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
US20200018968A1 (en) * 2018-07-13 2020-01-16 Magic Leap, Inc. Systems and methods for display binocular deformation compensation

Similar Documents

Publication Publication Date Title
US10698483B1 (en) Eye-tracking systems, head-mounted displays including the same, and related methods
US11039651B1 (en) Artificial reality hat
US11176367B1 (en) Apparatuses, systems, and methods for mapping a surface of an eye via an event camera
US11662812B2 (en) Systems and methods for using a display as an illumination source for eye tracking
JP2021532464A (ja) 左および右ディスプレイとユーザの眼との間の垂直整合を決定するためのディスプレイシステムおよび方法
US11715331B1 (en) Apparatuses, systems, and methods for mapping corneal curvature
US20230053497A1 (en) Systems and methods for performing eye-tracking
US11120258B1 (en) Apparatuses, systems, and methods for scanning an eye via a folding mirror
US20230037329A1 (en) Optical systems and methods for predicting fixation distance
US11659043B1 (en) Systems and methods for predictively downloading volumetric data
US11782279B2 (en) High efficiency pancake lens
US20230043585A1 (en) Ultrasound devices for making eye measurements
WO2023031633A1 (fr) Étalonnage en ligne basé sur la mécanique de corps déformable
US20230411932A1 (en) Tunable laser array
US20230067343A1 (en) Tunable transparent antennas implemented on lenses of augmented-reality devices
US11906747B1 (en) Head-mounted device having hinge assembly with wiring passage
WO2023023206A1 (fr) Systèmes et procédés permettant d'effectuer un suivi oculaire
US11703618B1 (en) Display device including lens array with independently operable array sections
US20230341812A1 (en) Multi-layered polarization volume hologram
US20240094594A1 (en) Gradient-index liquid crystal lens having a plurality of independently-operable driving zones
CN117882032A (zh) 用于执行眼动追踪的系统和方法
US11815692B1 (en) Apparatus, system, and method for blocking light from eyecups
EP4354890A1 (fr) Synchronisation d'une caméra de disparité
WO2023014918A1 (fr) Systèmes et procédés optiques de prédiction de distance de fixation
US11789544B2 (en) Systems and methods for communicating recognition-model uncertainty to users

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21790979

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE