CN108700939B - System and method for augmented reality

System and method for augmented reality

Info

Publication number
CN108700939B
Authority
CN
China
Prior art keywords
electromagnetic
sensor
component
head
depth
Prior art date
Legal status
Active
Application number
CN201780010073.0A
Other languages
Chinese (zh)
Other versions
CN108700939A (en)
Inventor
S·A·米勒
M·J·伍兹
D·C·伦德马克
Current Assignee
Magic Leap Inc
Original Assignee
Magic Leap Inc
Priority date
Filing date
Publication date
Priority claimed from US 15/062,104 (published as US 2016/0259404 A1)
Application filed by Magic Leap Inc
Priority to CN202210650785.1A (published as CN114995647A)
Publication of CN108700939A
Application granted
Publication of CN108700939B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211 Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F13/23 Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
    • A63F13/235 Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console using a wireless connection, e.g. infrared or piconet
    • A63F13/25 Output arrangements for video game devices
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A63F13/5255 Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Position Input By Displaying (AREA)

Abstract

An augmented reality display system includes an electromagnetic field emitter for emitting a known magnetic field in a known coordinate system. The system also includes an electromagnetic sensor for measuring a parameter related to a magnetic flux at the electromagnetic sensor resulting from a known magnetic field. The system further comprises a depth sensor for measuring distances in a known coordinate system. Additionally, the system includes a controller for determining pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor. Further, the system includes a display system for displaying virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.

Description

System and method for augmented reality
Cross Reference to Related Applications
This application claims priority from U.S. Provisional Patent Application Serial No. 62/292,185, filed on February 5, 2016, and U.S. Provisional Patent Application Serial No. 62/298,993, filed on February 23, 2016. This application is a continuation-in-part of U.S. Patent Application Serial No. 15/062,104, filed on March 5, 2016, which claims priority to U.S. Provisional Patent Application Serial No. 62/128,993, filed on March 5, 2015, and U.S. Provisional Patent Application Serial No. 62/292,185, filed on February 5, 2016. This application is also related to U.S. Provisional Patent Application Serial No. 62/301,847, filed on March 1, 2016. The entire contents of the above applications are incorporated herein by reference.
Technical Field
The present disclosure relates to systems and methods for locating the position and orientation of one or more objects in the context of an augmented reality system.
Background
Modern computing and display technology has facilitated the development of systems for so-called "virtual reality" or "augmented reality" experiences, in which digitally reproduced images or portions thereof are presented to a user in a manner in which they appear to be, or may be perceived as, real. Virtual reality (or "VR") scenes typically involve the presentation of digital or virtual image information, while being opaque to other real-world visual inputs; augmented reality (or "AR") scenes typically involve the presentation of digital or virtual image information as an enhancement to the visualization of the real world around the user.
For example, referring to fig. 1, an augmented reality scene (4) is depicted in which a user of AR technology sees a real-world park-like setting (6) featuring people, trees, buildings in the background, and a concrete platform (1120). In addition to these items, the user of the AR technology also perceives that he "sees" a robot statue (1110) standing upon the real-world platform (1120), and a flying cartoon-like avatar character (2), which appears to be the personification of a bumblebee, even though these elements (2, 1110) do not exist in the real world. The human visual perception system has proven to be very complex, and producing a comfortable, natural-feeling, rich presentation of virtual image elements amongst other virtual or real-world image elements is challenging.
For example, head-mounted AR displays (or head-mounted displays or smart glasses) are typically at least loosely coupled to the user's head and therefore move as the user's head moves. If the display system detects head motion of the user, the data being displayed may be updated to account for changes in head pose.
As an example, if a user wearing a head mounted display views a virtual representation of a three-dimensional (3D) object on the display and walks around the area where the 3D object appears, the 3D object may be re-rendered for each viewpoint, giving the user the sensation that he or she is walking around the object occupying real space. If a head mounted display is used to present multiple objects within a virtual space (e.g., a rich virtual world), measurements of head pose (i.e., the position and orientation of the user's head) may be used to re-render the scene to match the dynamically changing head position and orientation of the user and provide enhanced immersion in the virtual space.
In an AR system, the detection or calculation of head pose may facilitate the rendering of virtual objects by a display system such that they appear to occupy space in the real world in a manner that makes sense to the user. In addition, detection of the position and/or orientation of a real object, such as a handheld device (also referred to as a "totem"), haptic device, or other real physical object, in relation to the user's head or the AR system may also facilitate the display system in presenting display information to the user, enabling the user to interact with certain aspects of the AR system efficiently. As the user's head moves around in the real world, the virtual objects may be re-rendered as a function of head pose, such that the virtual objects appear to remain stable relative to the real world. At least for AR applications, placement of virtual objects in spatial relation to physical objects (e.g., presented to appear spatially proximate a physical object in two or three dimensions) can be a nontrivial problem. For example, head movement may significantly complicate placement of virtual objects in a view of the ambient environment. This is true whether the view is captured as an image of the ambient environment and then projected or displayed to the end user, or the end user perceives the view of the ambient environment directly. For instance, head movement will likely cause the field of view of the end user to change, which will likely require an update to where various virtual objects are displayed in the field of view of the end user. Additionally, head movements may occur within a large variety of ranges and speeds. Head movement speed may vary not only between different head movements, but also within or across the range of a single head movement. For instance, head movement speed may initially increase (e.g., linearly or not) from a starting point, and may decrease as an ending point is reached, obtaining a maximum speed somewhere between the starting and ending points of the head movement. Rapid head movements may even exceed the ability of a particular display or projection technology to render images that appear uniform and/or as smooth motion to the end user.
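To make the head-pose-driven re-rendering concrete, the following minimal sketch (not part of the patent disclosure; the matrix conventions and names are assumptions) shows how a world-fixed virtual point is re-expressed in head (camera) coordinates from an updated head pose, so that the point appears to stay put in the room as the head moves:

```python
import numpy as np

def view_matrix(R_wh: np.ndarray, t_wh: np.ndarray) -> np.ndarray:
    """Build a 4x4 world-to-head (view) matrix from a head pose (R_wh, t_wh)."""
    view = np.eye(4)
    view[:3, :3] = R_wh.T               # inverse of a rotation is its transpose
    view[:3, 3] = -R_wh.T @ t_wh        # shift the world so the head sits at the origin
    return view

def render_point(p_world: np.ndarray, R_wh: np.ndarray, t_wh: np.ndarray) -> np.ndarray:
    """Express a world-fixed virtual point in head (camera) coordinates."""
    p_h = view_matrix(R_wh, t_wh) @ np.append(p_world, 1.0)
    return p_h[:3]

# As the head translates, the camera-space position updates so the virtual point
# appears to remain fixed in the room.
statue = np.array([0.0, 0.0, 2.0])                                   # 2 m ahead of the start pose
print(render_point(statue, np.eye(3), np.zeros(3)))                  # [0. 0. 2.]
print(render_point(statue, np.eye(3), np.array([0.5, 0.0, 0.0])))    # [-0.5  0.   2. ]
```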
Head tracking accuracy and latency (i.e., the elapsed time between when the user moves his or her head and the time when the image gets updated and displayed to the user) are challenges for VR and AR systems. Especially for display systems that fill a substantial portion of the user's field of view with virtual elements, it is critical that the accuracy of head tracking is high and that the overall system latency is very low from the first detection of head motion to the updating of the light that is delivered by the display to the user's visual system. If the latency is high, the system can create a mismatch between the user's vestibular and visual sensory systems and generate a user perception scenario that can lead to motion sickness or simulator sickness. If the system latency is high, the apparent location of virtual objects will appear unstable during rapid head motions.
In addition to head-mounted display systems, other display systems may also benefit from accurate and low-latency head pose detection. These include head-tracking display systems in which the display is not worn on the user's body but is mounted, for example, on a wall or other surface. The head-tracking display acts like a window onto a scene, and as a user moves his head relative to the "window", the scene is re-rendered to match the user's changing viewpoint. Other systems include head-mounted projection systems, in which a head-mounted display projects light onto the real world.
Additionally, to provide a realistic augmented reality experience, AR systems may be designed to interact with users. For example, multiple users may play a ball game using a virtual ball and/or other virtual objects. One user may "grab" the virtual ball and throw the ball back to another user. In another embodiment, a totem (e.g., a real bat communicatively coupled to the AR system) may be provided to the first user to hit the virtual ball. In other embodiments, the AR user may be presented with a virtual user interface to allow the user to select one of many options. The user may use totems, haptic devices, wearable components, or simply touch a virtual screen to interact with the system.
Detecting the head pose and orientation of the user, as well as detecting the physical location of real objects in space, enables the AR system to display virtual content in an effective and enjoyable manner. However, while these capabilities are key to an AR system, they are difficult to achieve. In other words, the AR system must recognize the physical location of a real object (e.g., the user's head, a totem, a haptic device, a wearable component, the user's hand, etc.) and correlate the physical coordinates of the real object with virtual coordinates corresponding to one or more virtual objects being displayed to the user. This requires highly accurate sensors and sensor recognition systems that track the position and orientation of one or more objects at rapid rates. Current approaches do not perform localization at satisfactory speed or precision standards.
There is, therefore, a need for better localization systems in the context of AR and VR devices.
Disclosure of Invention
Embodiments of the present invention relate to devices, systems, and methods for facilitating virtual reality and/or augmented reality interactions for one or more users.
In one embodiment, an Augmented Reality (AR) display system includes an electromagnetic field emitter for emitting a known magnetic field in a known coordinate system. The system also includes an electromagnetic sensor for measuring a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field. The system further comprises a depth sensor for measuring distances in the known coordinate system. Additionally, the system includes a controller for determining pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor. Further, the system includes a display system for displaying virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.
In one or more embodiments, the depth sensor is a passive stereo depth sensor.
In one or more embodiments, the depth sensor is an active depth sensor. The depth sensor may be a texture projection stereo depth sensor, a structured light projection stereo depth sensor, a time-of-flight depth sensor, a laser radar (LIDAR) depth sensor, or a modulated emission depth sensor.
In one or more embodiments, the depth sensor includes a depth camera having a first field of view (FOV). The AR display system may also include a world capture camera, wherein the world capture camera has a second FOV that at least partially overlaps the first FOV. The AR display system may also include a picture camera, wherein the picture camera has a third FOV that at least partially overlaps the first FOV and the second FOV. The depth camera, the world capture camera, and the picture camera may have respective different first, second, and third resolutions. The first resolution of the depth camera may be sub-VGA (sub-VGA), the second resolution of the world capture camera may be 720p, and the third resolution of the picture camera may be 2 megapixels.
In one or more embodiments, the depth camera, the world capture camera, and the picture camera are configured to capture respective first, second, and third images. The controller may be programmed to segment the second image and the third image. The controller may be programmed to fuse the second image and the third image after segmenting the second image and the third image to produce a fused image. Measuring a distance in the known coordinate system may include generating a hypothesized distance by analyzing the first image from the depth camera, and generating the distance by analyzing the hypothesized distance and the fused image. The depth camera, the world capture camera, and the picture camera may form a single integrated sensor.
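As an illustration only, the two-stage distance estimate described above might be organized as in the sketch below: a coarse hypothesized distance from the low-resolution depth camera is refined against an image fused from the segmented world capture and picture camera frames. The segmentation, fusion, and refinement steps here are deliberately simplistic placeholders, not the claimed processing:

```python
import numpy as np

def segment(img):
    """Toy foreground/background segmentation by intensity threshold."""
    return (img > 0.5).astype(np.float32)

def fuse(world_img, picture_img):
    """Fuse the segmented world-capture and picture images at the world-capture resolution."""
    sy = max(picture_img.shape[0] // world_img.shape[0], 1)
    sx = max(picture_img.shape[1] // world_img.shape[1], 1)
    picture_ds = picture_img[::sy, ::sx][:world_img.shape[0], :world_img.shape[1]]
    return 0.5 * (world_img + picture_ds)

def refine_distance(hypothesized, fused):
    """Placeholder refinement: upsample the coarse hypothesized distances to the
    fused-image resolution (a real system would refine depth edges against it)."""
    ry = fused.shape[0] // hypothesized.shape[0]
    rx = fused.shape[1] // hypothesized.shape[1]
    return np.kron(hypothesized, np.ones((ry, rx)))

depth_img = np.random.rand(240, 320)        # sub-VGA depth camera frame (hypothesized distances)
world_img = np.random.rand(720, 1280)       # 720p world capture frame
picture_img = np.random.rand(1080, 1920)    # ~2 megapixel picture camera frame

fused = fuse(segment(world_img), segment(picture_img))
distance = refine_distance(depth_img, fused)
print(distance.shape)                       # (720, 1280)
```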
In one or more embodiments, the AR display system further includes additional positioning resources to provide additional information. The pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system may be determined based at least in part on a parameter related to the magnetic flux measured by the electromagnetic sensor, the distance measured by the depth sensor, and the additional information provided by the additional positioning resource.
In one or more embodiments, the additional positioning resources may include a WiFi transceiver, an additional electromagnetic transmitter, or an additional electromagnetic sensor. The additional positioning resources may comprise beacons. The beacon may emit radiation. The radiation may be infrared radiation and the beacon may include an infrared LED. The additional positioning resource may comprise a reflector. The reflector may reflect radiation.
In one or more embodiments, the additional localization resources may include a cellular network transceiver, a radar transmitter, a radar detector, a lidar transmitter, a lidar detector, a GPS transceiver, a poster (poster) with a known detectable pattern, a marker (marker) with a known detectable pattern, an inertial measurement unit, or a strain gauge.
In one or more embodiments, the electromagnetic field emitter is coupled to a movable component of the AR display system. The movable component may be a hand-held component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the electromagnetic field emitter is coupled to an object in the known coordinate system such that the electromagnetic field emitter has a known position and a known orientation. The electromagnetic sensor may be coupled to a movable component of the AR display system. The movable component may be a hand-held component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the pose information comprises a position and orientation of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system. The controller may analyze the pose information to determine a position and orientation of the electromagnetic sensor in the known coordinate system.
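A structural sketch of this embodiment's data flow is given below. It is an assumption for illustration, not the claimed algorithm: a real controller would solve for a full six-degree-of-freedom pose from the couplings of all three transmitter coils into all three sensor coils, whereas this toy version only derives a bearing from the three flux-related readings and takes its range from the depth sensor:

```python
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Pose:
    position: Tuple[float, float, float]       # meters, in the known coordinate system
    orientation: Tuple[float, float, float]    # yaw, pitch, roll in radians (placeholder)

def determine_pose(flux_params: Tuple[float, float, float], depth_distance: float) -> Pose:
    """Hypothetical controller step: take a bearing from the relative coil readings and
    constrain the range with the depth-sensor distance."""
    fx, fy, fz = flux_params
    yaw = math.atan2(fy, fx)
    pitch = math.atan2(fz, math.hypot(fx, fy))
    position = (depth_distance * math.cos(pitch) * math.cos(yaw),
                depth_distance * math.cos(pitch) * math.sin(yaw),
                depth_distance * math.sin(pitch))
    return Pose(position=position, orientation=(yaw, pitch, 0.0))

def display_virtual_content(pose: Pose) -> None:
    """The display system would place virtual content using this pose."""
    print(f"render content at {pose.position}, orientation {pose.orientation}")

display_virtual_content(determine_pose(flux_params=(0.2, 0.1, 0.05), depth_distance=1.2))
```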
In another embodiment, a method for displaying augmented reality includes transmitting a known magnetic field in a known coordinate system using an electromagnetic field transmitter. The method also includes measuring a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field using an electromagnetic sensor. The method further includes measuring distances in the known coordinate system using a depth sensor. Additionally, the method includes determining pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to the magnetic flux measured using the electromagnetic sensor and the distance measured using the depth sensor. Further, the method includes displaying virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.
In one or more embodiments, the depth sensor is a passive stereo depth sensor.
In one or more embodiments, the depth sensor is an active depth sensor. The depth sensor may be a texture projection stereo depth sensor, a structured light projection stereo depth sensor, a time-of-flight depth sensor, a lidar depth sensor, or a modulated emission depth sensor.
In one or more embodiments, the depth sensor includes a depth camera having a first field of view (FOV). The depth sensor may also include a world capture camera, wherein the world capture camera has a second FOV that at least partially overlaps the first FOV. The depth sensor may also include a picture camera, wherein the picture camera has a third FOV that at least partially overlaps the first FOV and the second FOV. The depth camera, the world capture camera, and the picture camera may have respective different first, second, and third resolutions. The first resolution of the depth camera may be a sub-VGA, the second resolution of the world capture camera may be 720p, and the third resolution of the picture camera may be 2 megapixels.
In one or more embodiments, the method further includes capturing a first image, a second image, and a third image using the depth camera, the world capture camera, and the picture camera, respectively. The method may further comprise segmenting the second image and the third image. The method may further include fusing the second image and the third image after segmenting the second image and the third image to generate a fused image. Measuring a distance in the known coordinate system may include generating a hypothesized distance by analyzing the first image from the depth camera, and generating the distance by analyzing the hypothesized distance and the fused image. The depth camera, the world capture camera, and the picture camera may form a single integrated sensor.
In one or more embodiments, the method further comprises determining the pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to the magnetic flux measured using the electromagnetic sensor, the distance measured using the depth sensor, and additional information provided by additional positioning resources.
In one or more embodiments, the additional positioning resources may include a WiFi transceiver, an additional electromagnetic transmitter, or an additional electromagnetic sensor. The additional positioning resources may include a beacon. The method may further include the beacon emitting radiation. The radiation may be infrared radiation, and the beacon may include an infrared LED. The additional positioning resources may include a reflector. The method may further include the reflector reflecting radiation.
In one or more embodiments, the additional localization resources may include a cellular network transceiver, a radar transmitter, a radar detector, a lidar transmitter, a lidar detector, a GPS transceiver, a poster with a known detectable pattern, a marker with a known detectable pattern, an inertial measurement unit, or a strain gauge.
In one or more embodiments, the electromagnetic field emitter is coupled to a movable component of the AR display system. The movable component may be a hand-held component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the electromagnetic field emitter is coupled to an object in the known coordinate system such that the electromagnetic field emitter has a known position and a known orientation. The electromagnetic sensor may be coupled to a movable component of the AR display system. The movable component may be a hand-held component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the pose information includes a position and orientation of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system. The method may further include analyzing the pose information to determine a position and orientation of the electromagnetic sensor in the known coordinate system.
In yet another embodiment, an augmented reality display system includes a handheld component coupled to an electromagnetic field emitter that emits a magnetic field. The system also includes a head-mounted component having a display system that displays virtual content to a user. The headset is coupled to an electromagnetic sensor that measures a parameter related to a magnetic flux at the electromagnetic sensor resulting from the magnetic field, wherein a head pose of the headset in a known coordinate system is known. The system further includes a depth sensor that measures distances in the known coordinate system, and additionally includes a controller communicatively coupled to the handheld component, the head-mounted component, and the depth sensor. The controller receives the parameter related to magnetic flux at the electromagnetic sensor from the headset and receives the distance from the depth sensor. The controller determines a hand pose of the hand-held component based at least in part on the parameter related to magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor. The system modifies the virtual content displayed to the user based at least in part on the hand gesture.
In one or more embodiments, the depth sensor is a passive stereo depth sensor.
In one or more embodiments, the depth sensor is an active depth sensor. The depth sensor may be a texture projection stereo depth sensor, a structured light projection stereo depth sensor, a time-of-flight depth sensor, a lidar depth sensor, or a modulated transmission depth sensor.
In one or more embodiments, the depth sensor includes a depth camera having a first field of view (FOV). The AR display system may also include a world capture camera, wherein the world capture camera has a second FOV that at least partially overlaps the first FOV. The AR display system may also include a picture camera, wherein the picture camera has a third FOV that at least partially overlaps the first FOV and the second FOV. The depth camera, the world capture camera, and the picture camera may have respective different first, second, and third resolutions. The first resolution of the depth camera may be a sub-VGA, the second resolution of the world capture camera may be 720p, and the third resolution of the picture camera may be 2 megapixels.
In one or more embodiments, the depth camera, the world capture camera, and the picture camera are configured to capture respective first, second, and third images. The controller may be programmed to segment the second image and the third image. The controller may be programmed to fuse the second image and the third image after segmenting the second image and the third image to produce a fused image. Measuring the distance in the known coordinate system may include generating a hypothetical distance by analyzing the first image from the depth camera and generating the distance by analyzing the hypothetical distance and the fused image. The depth camera, the world capture camera, and the picture camera may form a single integrated sensor.
In one or more embodiments, the AR display system further includes additional positioning resources to provide additional information. The controller determines the hand pose of the hand-held component based at least in part on the parameter related to magnetic flux measured by the electromagnetic sensor, the distance measured by the depth sensor, and the additional information provided by the additional positioning resource.
In one or more embodiments, the additional positioning resources may include a WiFi transceiver, an additional electromagnetic transmitter, or an additional electromagnetic sensor. The additional positioning resources may comprise beacons. The beacon may emit radiation. The radiation may be infrared radiation and the beacon may include an infrared LED. The additional positioning resource may comprise a reflector. The reflector may reflect radiation.
In one or more embodiments, the additional localization resources may include a cellular network transceiver, a radar transmitter, a radar detector, a lidar transmitter, a lidar detector, a GPS transceiver, a poster with a known detectable pattern, a marker with a known detectable pattern, an inertial measurement unit, or a strain gauge.
In one or more embodiments, the hand-held component coupled to the electromagnetic field emitter is a totem. The hand pose information may include a position and orientation of the hand-held component in the known coordinate system.
Additional or other objects, features and advantages of the present invention are described in the detailed description, drawings and claims.
Drawings
The drawings illustrate the design and utility of various embodiments of the present invention. It should be noted that the figures are not drawn to scale and that elements of similar structure or function are represented by like reference numerals throughout the figures. In order to better appreciate how the above-recited and other advantages and objects of various embodiments of the present invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
FIG. 1 illustrates a plan view of an AR scene displayed to a user of an AR system, according to one embodiment.
FIGS. 2A-2D illustrate various embodiments of a wearable AR device.
Fig. 3 illustrates an exemplary embodiment of a wearable AR device interacting with one or more cloud servers of an AR system.
FIG. 4 illustrates an exemplary embodiment of an electromagnetic tracking system.
FIG. 5 illustrates an exemplary method of determining the position and orientation of a sensor according to one exemplary embodiment.
FIG. 6 illustrates an exemplary embodiment of an AR system having an electromagnetic tracking system.
FIG. 7 illustrates an exemplary method of delivering virtual content to a user based on detected head gestures.
FIG. 8 illustrates a schematic diagram of various components of an AR system having an electromagnetic emitter and an electromagnetic sensor, in accordance with one embodiment.
Fig. 9A to 9F illustrate various embodiments of the control and quick release module.
Fig. 10 illustrates a simplified embodiment of a wearable AR device.
Fig. 11A and 11B illustrate various embodiments of placement of electromagnetic sensors on a head mounted AR system.
Fig. 12A to 12E illustrate various embodiments of ferrite cubes to be coupled to an electromagnetic sensor.
Fig. 13A to 13C illustrate various embodiments of a data processor for an electromagnetic sensor.
FIG. 14 illustrates an exemplary method of detecting head and hand gestures using an electromagnetic tracking system.
FIG. 15 illustrates another exemplary method of detecting head and hand gestures using an electromagnetic tracking system.
FIG. 16A illustrates a schematic diagram of various components of an AR system having a depth sensor, an electromagnetic emitter, and an electromagnetic sensor, in accordance with another embodiment.
FIG. 16B illustrates a schematic diagram of various components and various fields of view of an AR system having a depth sensor, an electromagnetic emitter, and an electromagnetic sensor, in accordance with yet another embodiment.
Detailed Description
Referring to fig. 2A through 2D, some general component options are illustrated. In portions of the detailed description following the discussion of fig. 2A-2D, various systems, subsystems, and components are presented to achieve the goal of a display system that provides high-quality, comfortable perception for human VRs and/or ARs.
As shown in fig. 2A, an AR system user (60) is shown wearing a head-mounted component (58), the head-mounted component (58) featuring a frame (64) structure coupled with a display system (62) located in front of the user's eyes. A speaker (66) is coupled to the frame (64) in the configuration shown and is located near the ear canal of the user (in one embodiment, another speaker (not shown) is located near another ear canal of the user to provide stereo/shapeable sound control). The display (62) is operatively coupled (68), such as by wired leads or a wireless connection, to a local processing and data module (70), and the local processing and data module (70) may be mounted in various configurations, such as being fixedly attached to the frame (64), fixedly attached to a helmet or hat (80) as shown in the embodiment of fig. 2B, embedded within a headset, removably attached to the torso (82) of the user (60) in a backpack-type configuration as shown in the embodiment of fig. 2C, or removably attached to the hips (84) of the user (60) in a band-coupled-type configuration as shown in the embodiment of fig. 2D.
The local processing and data module (70) may include a power efficient processor or controller and digital memory (such as flash memory), both of which may be used to assist in processing, caching and storing the following data: a) data captured from sensors that may be operatively coupled to the frame (64), such as an image capture device (such as a camera), a microphone, an inertial measurement unit, an accelerometer, a compass, a GPS unit, a radio, and/or a gyroscope; and/or b) data acquired and/or processed using a remote processing module (72) and/or a remote data repository (74), which may be transmitted to the display (62) after such processing or retrieval. The local processing and data module (70) may be operatively coupled (76, 78), such as via a wired or wireless communication link, to a remote processing module (72) and a remote data repository (74) such that these remote modules (72, 74) are operatively coupled to each other and may serve as resources for the local processing and data module (70).
In one embodiment, the remote processing module (72) may include one or more relatively powerful processors or controllers configured to analyze and process data and/or image information. In one embodiment, the remote data repository (74) may comprise a relatively large-scale digital data storage facility that may be available through the internet or other network configuration in a "cloud" resource configuration. In one embodiment, all data is stored and all calculations are performed in the local processing and data module, allowing for fully autonomous use from any remote module.
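For illustration, the division of labor between the local processing and data module and the remote modules might be organized as in the following sketch; the class and method names are hypothetical, and only the caching/offloading/autonomous-fallback pattern described above is intended:

```python
from collections import deque

class RemoteProcessingModule:
    """Stand-in for the relatively powerful remote analysis (e.g., in the cloud)."""
    def analyze(self, frame):
        return {"features": len(frame)}

class LocalProcessingAndDataModule:
    def __init__(self, remote=None, cache_size=8):
        self.remote = remote
        self.cache = deque(maxlen=cache_size)   # flash-memory-style local cache

    def on_sensor_frame(self, frame):
        self.cache.append(frame)                # always cache and store locally first
        if self.remote is not None:
            return self.remote.analyze(frame)   # offload when remote modules are coupled
        return {"features": 0}                  # fully autonomous local fallback

local = LocalProcessingAndDataModule(remote=RemoteProcessingModule())
print(local.on_sensor_frame([0.1, 0.2, 0.3]))   # {'features': 3}
```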
Referring now to fig. 3, coordination between cloud computing assets (46) and local processing assets is schematically illustrated; the local processing assets may, for instance, reside in a head-mounted component (58) coupled to the user's head (120) and a local processing and data module (70) coupled to the user's belt (308; thus the component 70 may also be termed a "belt pack" 70), as shown in fig. 3. In one embodiment, the cloud (46) assets, such as one or more server systems (110), are operatively coupled (115), such as via wired or wireless networking (wireless being preferred for mobility, wired being preferred for certain high-bandwidth or high-data-volume transfers that may be desired), directly to (40, 42) one or both of the local computing assets (e.g., the processor and memory configurations coupled to the user's head (120) and belt (308) as described above). These computing assets local to the user may also be operatively coupled to each other via wired and/or wireless connectivity configurations (44), such as the wired coupling (68) discussed below in reference to fig. 8. In one embodiment, to maintain a low-inertia and small-size subsystem mounted to the user's head (120), the primary transfer between the user and the cloud (46) may be via the link between the subsystem mounted at the belt (308) and the cloud, with the head-mounted (120) subsystem primarily data-tethered to the belt-based (308) subsystem using wireless connectivity, such as ultra-wideband ("UWB") connectivity, as is currently employed, for example, in personal computing peripheral connectivity applications.
Through effective local and remote process coordination, and an appropriate display device for the user, such as the user interface or user display system (62) shown in fig. 2A, or variations thereof, aspects of one world pertinent to the user's current actual or virtual location may be transferred or "passed" to the user and updated in an efficient fashion. In other words, a map of the world may be continually updated at a storage location which may reside partially on the user's AR system and partially in cloud resources. The map (also referred to as a "passable world model") may be a large database comprising raster imagery, 3-D and 2-D points, parametric information, and other information about the real world. As more and more AR users continually capture information about their real environment (e.g., through cameras, sensors, IMUs, etc.), the map becomes more and more accurate and complete.
With a configuration as described above, wherein there is one world model that can reside on cloud computing resources and be distributed from there, such a world can be "passable" to one or more users in a relatively low bandwidth form, which is preferable to trying to pass around real-time video data or the like. The augmented experience of the person standing near the statue (i.e., as shown in FIG. 1) may be informed by the cloud-based world model, a subset of which may be passed down to them and their local display device to complete the view. A person sitting at a remote display device, which may be as simple as a personal computer sitting on a desk, can efficiently download that same section of information from the cloud and have it rendered on their display. Indeed, one person actually present in a park near the statue may take a remotely located friend for a walk in that park, with the friend joining through virtual and augmented reality. The system will need to know where the street is, where the trees are, and where the statue is, but with that information on the cloud, the joining friend can download aspects of the scenario from the cloud and then start walking along as an augmented reality local relative to the person who is actually in the park.
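A hedged sketch of such a map as a data structure follows: a database of pose-tagged keyframes (raster images plus 2-D/3-D points) from which only the slice near a user's location is "passed" down to that user's device. All names and fields are illustrative assumptions, not the patent's schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Keyframe:
    image: bytes                                          # raster image data
    camera_pose: Tuple[float, ...]                        # pose the image was captured from
    points_3d: List[Tuple[float, float, float]] = field(default_factory=list)

@dataclass
class PassableWorldModel:
    keyframes: List[Keyframe] = field(default_factory=list)

    def add(self, kf: Keyframe) -> None:
        self.keyframes.append(kf)                         # the map grows as users capture more data

    def subset_near(self, position, radius):
        """Return only the keyframes captured within `radius` of `position`, i.e. the
        slice of the world that gets 'passed' down to a nearby user's device."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return [kf for kf in self.keyframes
                if dist2(kf.camera_pose[:3], position) <= radius ** 2]

world = PassableWorldModel()
world.add(Keyframe(image=b"raw-bytes", camera_pose=(1.0, 0.0, 0.0, 0.0, 0.0, 0.0)))
print(len(world.subset_near(position=(0.0, 0.0, 0.0), radius=2.0)))   # 1
```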
3-D points may be captured from the environment, and the pose (i.e., vector and/or origin position information relative to the world) of the cameras that capture those images or points may be determined, so that these points or images may be "tagged", or associated, with this pose information. Points captured by a second camera may then be utilized to determine the pose of the second camera. In other words, the second camera may be oriented and/or localized based on comparisons with tagged images from the first camera. This knowledge may then be utilized to extract textures, make maps, and create a virtual copy of the real world (because there are then two cameras registered around it).
So, at a basic level, in one embodiment a person-worn system can be utilized to capture both 3-D points and the 2-D images that produced the points, and these points and images may be sent out to cloud storage and processing resources. They may also be cached locally with embedded pose information (i.e., cache the tagged images); so the cloud may have ready (i.e., in available cache) tagged 2-D images (i.e., tagged with a 3-D pose), along with 3-D points. If a user is observing something dynamic, he may also send additional information pertinent to the motion up to the cloud (for example, if looking at another person's face, the user can take a texture map of the face and push that up at an optimized frequency, even though the surrounding world is otherwise basically static). More information on object recognizers and the passable world model may be found in U.S. patent application serial No. 14/205,126, entitled "System and method for augmented and virtual reality", which is incorporated by reference in its entirety herein, along with the following additional disclosures, which relate to augmented and virtual reality systems such as those developed by Magic Leap, Inc. of Plantation, Florida: U.S. patent application serial No. 14/641,376; U.S. patent application serial No. 14/555,585; U.S. patent application serial No. 14/212,961; U.S. patent application serial No. 14/690,401; U.S. patent application serial No. 13/663,466; and U.S. patent application serial No. 13/684,489.
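As a simplified stand-in for registering the second camera against images tagged by the first, the sketch below assumes the corresponding points are already available as 3-D coordinates in both frames and recovers the rigid transform (rotation and translation) with the standard Kabsch/Procrustes method; a real system would instead work from 2-D image observations:

```python
import numpy as np

def register_camera(points_world: np.ndarray, points_cam2: np.ndarray):
    """Kabsch/Procrustes rigid registration: find R, t with points_cam2 ~ R @ points_world + t."""
    cw, cc = points_world.mean(axis=0), points_cam2.mean(axis=0)
    H = (points_world - cw).T @ (points_cam2 - cc)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cw
    return R, t

# Points tagged by the first camera, re-observed (synthetically here) by the second camera.
rng = np.random.default_rng(0)
pts_world = rng.random((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
pts_cam2 = pts_world @ R_true.T + t_true

R_est, t_est = register_camera(pts_world, pts_cam2)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```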
To capture points that can be utilized to create the "passable world model", it is helpful to accurately know the user's location, pose, and orientation with respect to the world. More particularly, the user's position must be localized to a fine degree of granularity, because it may be important to know the user's head pose as well as hand pose (if the user is clutching a handheld component, gesturing, etc.). In one or more embodiments, GPS and other localization information may be utilized as inputs to such processing. Highly accurate localization of the user's head, totems, hand gestures, haptic devices, etc., is crucial in displaying appropriate virtual content to the user.
One approach to achieving high precision positioning may involve the use of electromagnetic fields coupled with electromagnetic sensors strategically placed on the user's AR headpiece, belt pack, and/or other auxiliary devices (e.g., totems, haptic devices, game tools, etc.). Electromagnetic tracking systems typically include at least one electromagnetic field emitter and at least one electromagnetic field sensor. The sensor may measure an electromagnetic field having a known profile. Based on these measurements, the position and orientation of the field sensor relative to the emitter is determined.
Referring now to FIG. 4, an exemplary system diagram of an electromagnetic tracking system is illustrated (e.g., such as those developed by organizations including the Biosense (RTM) division of Johnson & Johnson Corporation, Polhemus (RTM), Inc. of Colchester, Vermont, electromagnetic tracking systems manufactured by Sixense (RTM) Entertainment, Inc. of Los Gatos, California, and other tracking companies). In one or more embodiments, the electromagnetic tracking system includes an electromagnetic field transmitter 402 configured to emit a known magnetic field. As shown in fig. 4, the electromagnetic field transmitter may be coupled to a power supply (e.g., electric current, batteries, etc.) to provide power to the transmitter 402.
In one or more embodiments, the electromagnetic field transmitter 402 includes several coils that generate magnetic fields (e.g., at least three coils positioned perpendicular to each other to generate fields in the x, y, and z directions). The magnetic field is used to establish a coordinate space. This allows the system to map (map) the position of the sensor relative to a known magnetic field and help determine the position and/or orientation of the sensor. In one or more embodiments, electromagnetic sensors 404a, 404b, etc. may be attached to one or more real objects. The electromagnetic sensor 404 may include smaller coils in which a current may be induced by the emitted electromagnetic field. In general, the "sensor" component (404) may comprise a small coil or ring, for example a set of three differently oriented coils coupled together (i.e., such as orthogonally oriented with respect to each other) within a small structure such as a cube or other container, positioned/oriented to capture the incoming magnetic flux from the magnetic field emitted by the transmitter (402), and by comparing the currents induced via the coils and knowing the relative positioning and orientation of the coils with respect to each other, the relative position and orientation of the sensor with respect to the transmitter may be calculated.
One or more parameters related to the behavior of the coils and inertial measurement unit ("IMU") components operatively coupled to the electromagnetic tracking sensors may be measured to detect the position and/or orientation of the sensor (and the object to which the sensor is attached) relative to a coordinate system to which the electromagnetic field emitter is coupled. Of course, this coordinate system may be converted into a world coordinate system in order to determine the position or pose of the electromagnetic field emitter in the real world. In one or more embodiments, multiple sensors may be used in relation to the electromagnetic transmitter to detect the position and orientation of each of the sensors within the coordinate space.
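The following sketch illustrates, under simplifying assumptions (a single transmitter coil modeled as a magnetic dipole, constant physical factors omitted), how three orthogonally oriented sensor coils yield flux-related readings that depend on the sensor's position relative to the transmitter; an actual solver would invert such a model across all three transmitter coils to recover relative position and orientation:

```python
import numpy as np

def dipole_field(m: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Magnetic field of a small transmitter coil (dipole moment m) at offset r,
    with constant physical factors omitted."""
    d = np.linalg.norm(r)
    rhat = r / d
    return (3.0 * rhat * np.dot(m, rhat) - m) / d**3

def sensor_readings(sensor_axes: np.ndarray, sensor_pos: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Flux-related parameter for each of three orthogonally oriented sensor coils:
    proportional to the field component along each coil axis."""
    return sensor_axes @ dipole_field(m, sensor_pos)

axes = np.eye(3)                                  # three orthogonal sensor coil axes
print(sensor_readings(axes, sensor_pos=np.array([0.0, 0.0, 0.5]), m=np.array([0.0, 0.0, 1.0])))
# [ 0.  0. 16.] : the readings change as the sensor moves or rotates relative to the transmitter
```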
It should be appreciated that in some embodiments, the head pose may already be known based on sensors on the head-mounted component of the AR system and SLAM analysis performed based on the sensor data and image data captured by the head-mounted AR system. However, it may be important to know the position of the user's hand (e.g., a hand-held component such as a totem) relative to the known head pose. In other words, it may be important to know the hand pose relative to the head pose. Once the relationship between the head (assuming the sensors are placed on the head-mounted component) and the hand is known, the location of the hand relative to the world (e.g., world coordinates) can be easily calculated.
The electromagnetic tracking system may provide positions in three directions (i.e., the X, Y, and Z directions), and further provide orientation in two or three orientation angles. In one or more embodiments, measurements of an IMU may be compared to the measurements of the coils to determine a position and orientation of the sensors. In one or more embodiments, both electromagnetic (EM) data and IMU data, along with various other sources of data, such as cameras, depth sensors, and other sensors, may be combined to determine the position and orientation. This information may be transmitted (e.g., via wireless communication, Bluetooth, etc.) to the controller 406. In one or more embodiments, pose (or position and orientation) may be reported at a relatively high refresh rate in conventional systems. Traditionally, an electromagnetic transmitter is coupled to a relatively stable and large object, such as a table, operating table, wall, or ceiling, and one or more sensors are coupled to smaller objects, such as medical devices, handheld gaming components, or the like. Alternatively, as described below in reference to FIG. 6, various features of the electromagnetic tracking system may be employed to produce a configuration wherein changes or deltas in position and/or orientation between two objects that move in space may be tracked relative to a more stable global coordinate system; in other words, a configuration is shown in FIG. 6 wherein a variation of the electromagnetic tracking system may be utilized to track the position and orientation delta between a head-mounted component and a hand-held component, while head pose relative to the global coordinate system (say, of a room environment local to the user) is determined otherwise, such as by using simultaneous localization and mapping ("SLAM") techniques with outward-facing capturing cameras that may be coupled to the head-mounted component of the system.
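As one illustrative way of combining the EM and IMU data streams (not the patent's method), a simple one-axis complementary filter can integrate high-rate gyro measurements and correct their drift with a lower-rate, absolute EM-derived angle; the rates and gains below are assumptions:

```python
def complementary_filter(gyro_rates, em_angles, dt=0.005, alpha=0.9):
    """gyro_rates: angular rate per step (rad/s); em_angles: absolute EM-derived angle (rad)
    per step, or None when no EM reading is available at that step."""
    angle, out = 0.0, []
    for w, em in zip(gyro_rates, em_angles):
        angle += w * dt                                   # high-rate integration, but it drifts
        if em is not None:
            angle = alpha * angle + (1 - alpha) * em      # lower-rate absolute correction
        out.append(angle)
    return out

# 200 Hz gyro with a constant bias (pure drift), EM fix every 10th sample at the true angle 0.
rates = [0.02] * 400
em = [0.0 if i % 10 == 0 else None for i in range(400)]
print(round(complementary_filter(rates, em)[-1], 3))      # ~0.01 rad, vs 0.04 rad of unchecked drift
```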
The controller 406 may control the electromagnetic field generator 402 and may also capture data from the various electromagnetic sensors 404. It should be understood that the various components of the system may be coupled to each other by any electromechanical or wireless/bluetooth means. The controller 406 may also include data regarding known magnetic fields and coordinate spaces associated with the magnetic fields. This information is then used to detect the position and orientation of the sensor relative to a coordinate space corresponding to the known electromagnetic field.
One advantage of electromagnetic tracking systems is that they produce highly accurate tracking results with minimal delay and high resolution. In addition, electromagnetic tracking systems do not necessarily rely on optical trackers, and can easily track sensors/objects that are not in the line of sight of the user.
It should be appreciated that the strength of the electromagnetic field decreases as a cubic function of the distance r from the coil transmitter (e.g., electromagnetic field transmitter 402). Thus, an algorithm based on the distance away from the electromagnetic field transmitter may be required. The controller 406 may be configured with such algorithms to determine the position and orientation of the sensor/object at varying distances away from the electromagnetic field transmitter. Given the rapid decline of the strength of the electromagnetic field as one moves farther away from the electromagnetic transmitter, the best results, in terms of accuracy, efficiency, and low latency, may be achieved at closer distances. In typical electromagnetic tracking systems, the electromagnetic field transmitter is powered by electric current (e.g., a plug-in power supply) and has sensors located within a 20-foot radius of the electromagnetic field transmitter. A shorter radius between the sensors and the field transmitter may be more desirable in many applications, including AR applications.
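A small numeric illustration of why closer distances behave better under this cubic falloff: with field strength taken as proportional to 1/r^3, the same fixed measurement noise produces a far larger range error at 3 m than at 0.5 m. The calibration constant and noise level are assumed values:

```python
def field_strength(r, k=1.0):
    """Assumed model: field strength proportional to 1/r^3 (k is a calibration constant)."""
    return k / r**3

def range_from_strength(b, k=1.0):
    """Invert the cubic falloff to estimate range from a measured strength."""
    return (k / b) ** (1.0 / 3.0)

noise = 1e-3   # the same fixed measurement error applied at both distances
for r in (0.5, 3.0):
    b = field_strength(r) + noise
    print(f"true range {r} m -> range error {abs(range_from_strength(b) - r):.5f} m")
# The identical noise yields a range error over a thousand times larger at 3 m than at 0.5 m.
```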
Referring now to FIG. 5, an exemplary flow chart describing the functioning of a typical electromagnetic tracking system is briefly described. At 502, a known electromagnetic field is emitted. In one or more embodiments, the magnetic field transmitter may generate magnetic fields, with each coil generating an electric field in one direction (e.g., x, y, or z). The magnetic fields may be generated with an arbitrary waveform. In one or more embodiments, each of the axes may oscillate at a slightly different frequency. At 504, a coordinate space corresponding to the electromagnetic field may be determined. For example, the controller 406 of FIG. 4 may automatically determine a coordinate space around the transmitter based on the electromagnetic field. At 506, a behavior of the coils at the sensors (which may be attached to a known object) may be detected. For example, a current induced at the coils may be calculated. In other embodiments, a rotation of the coils, or any other quantifiable behavior, may be tracked and measured. At 508, this behavior may be used to detect a position and orientation of the sensor(s) and/or the known object. For example, the controller 406 may consult a mapping table that correlates the behavior of the coils at the sensors to various positions or orientations. Based on these calculations, the position and orientation of the sensors in the coordinate space may be determined. In some embodiments, the pose/position information may be determined at the sensor. In other embodiments, the sensors communicate data detected at the sensors to the controller, and the controller may consult the mapping table to determine pose information relative to the known magnetic field (e.g., coordinates relative to the hand-held component).
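A minimal sketch of the mapping-table consultation at step 508 follows; the table entries are fabricated placeholders, and the nearest-neighbor lookup is only meant to show the structure of correlating measured coil behavior with stored positions and orientations:

```python
import math

# (expected coil readings) -> (position, orientation) samples in the emitter's coordinate space.
# The entries below are fabricated placeholders; a real table would be densely calibrated.
MAPPING_TABLE = {
    (1.00, 0.00, 0.00): ((0.3, 0.0, 0.0), (0.0, 0.0, 0.0)),
    (0.00, 0.95, 0.10): ((0.0, 0.3, 0.0), (0.0, 0.0, 1.57)),
    (0.05, 0.05, 0.90): ((0.0, 0.0, 0.3), (0.0, 1.57, 0.0)),
}

def lookup_pose(measured):
    """Return the (position, orientation) whose expected readings are nearest to the measurement."""
    key = min(MAPPING_TABLE, key=lambda expected: math.dist(expected, measured))
    return MAPPING_TABLE[key]

print(lookup_pose((0.02, 0.93, 0.12)))   # ((0.0, 0.3, 0.0), (0.0, 0.0, 1.57))
```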
In the context of AR systems, it may be necessary to modify one or more components of the electromagnetic tracking system in order to accurately track the movable component. As mentioned above, tracking the head pose and orientation of a user is crucial in many AR applications. Accurate determination of the user's head pose and orientation allows the AR system to display the correct virtual content to the user. For example, a virtual scene may include monsters hidden behind real buildings. Depending on the pose and orientation of the user's head relative to the building, the view of the virtual monster may need to be modified in order to provide a realistic AR experience. Alternatively, the location and/or orientation of a totem, a haptic device, or some other means of interacting with virtual content may be important to enable an AR user to interact with the AR system. For example, in many gaming applications, the AR system must detect the position and orientation of a real object relative to the virtual content. Alternatively, when a virtual interface is displayed, the position of the totem, the user's hands, the haptic device, or any other real object configured to interact with the AR system, relative to the displayed virtual interface must be known in order for the system to understand commands, etc. Conventional positioning methods, including optical tracking and other methods, are often plagued with high latency and low resolution issues, which make rendering virtual content challenging in many augmented reality applications.
In one or more embodiments, the electromagnetic tracking system discussed with respect to fig. 4 and 5 may be adapted to an AR system to detect the position and orientation of one or more objects relative to the emitted electromagnetic field. Typical electromagnetic systems tend to have large and bulky electromagnetic emitters (e.g., 402 in fig. 4), which is problematic for AR devices. However, smaller electromagnetic transmitters (e.g., in the millimeter range) may be used to transmit known electromagnetic fields in the context of AR systems.
Referring now to FIG. 6, an electromagnetic tracking system may be incorporated into an AR system as shown, with an electromagnetic field transmitter 602 incorporated as part of a handheld controller 606. In one or more embodiments, the handheld controller may be a totem used in a gaming scenario. In other embodiments, the handheld controller may be a haptic device. In still other embodiments, the electromagnetic field emitter may simply be incorporated as part of the belt pack 70. The handheld controller 606 may include a battery 610 or other power source that powers the electromagnetic field emitter 602. It should be understood that the electromagnetic field transmitter 602 may also include or be coupled to an IMU 650 component configured to help determine the position and/or orientation of the electromagnetic field transmitter 602 relative to other components. This is especially important where both the field transmitter 602 and the sensors (604) are movable. As shown in the embodiment of FIG. 6, placing the electromagnetic field transmitter 602 in the handheld controller rather than in the belt pack ensures that the electromagnetic field transmitter does not compete for resources at the belt pack, but instead uses its own battery source at the handheld controller 606.
In one or more embodiments, the electromagnetic sensors (604) may be placed at one or more locations on the user's headset (58), along with other sensing devices such as one or more IMUs or additional magnetic flux capture coils (608). For example, as shown in FIG. 6, the sensors (604, 608) may be placed on either side of the headset (58). Because these sensors (604, 608) are designed to be relatively small (and thus may be less sensitive in some cases), having multiple sensors may improve efficiency and accuracy.
In one or more embodiments, one or more sensors may also be placed on the belt pack (620) or any other part of the user's body. The sensors (604, 608) may communicate wirelessly or via Bluetooth with a computing device (607, e.g., a controller) that determines the pose and orientation of the sensors (604, 608) (and of the AR headset (58) to which they are attached) relative to the known magnetic field emitted by the electromagnetic field emitter (602). In one or more embodiments, the computing device (607) may reside at the belt pack (620). In other embodiments, the computing device (607) may reside at the headset (58) itself, or even at the handheld controller (606). The computing device (607) may receive the measurements of the sensors (604, 608) and determine the position and orientation of the sensors (604, 608) relative to the known electromagnetic field emitted by the electromagnetic field emitter (602).
In one or more embodiments, the computing device (607) may in turn include a mapping database (632; e.g., a passable world model, coordinate space, etc.) to detect pose and determine the coordinates of real and virtual objects, and may even be connected to cloud resources (630) and the passable world model. The mapping database (632) may be consulted to determine the location coordinates of the sensors (604, 608). In some embodiments, the mapping database (632) may reside at the belt pack (620). In the embodiment shown in FIG. 6, the mapping database (632) resides on the cloud resources (630), and the computing device (607) communicates wirelessly with the cloud resources (630). The determined pose information, along with the points and images collected by the AR system, may then be transmitted to the cloud resources (630) and added to the passable world model (634).
As mentioned above, conventional electromagnetic transmitters may be too bulky for AR devices. Thus, the electromagnetic field transmitter can be designed as a compact transmitter using smaller coils than conventional systems. However, given that the strength of the electromagnetic field decreases as a cubic function of the distance from the field emitter, a shorter radius (e.g., about 3-3.5 feet) between the electromagnetic sensor 604 and the electromagnetic field emitter 602 may reduce power consumption as compared to conventional systems, such as the system detailed in fig. 4.
In one or more embodiments, this aspect may be used to extend the life of the battery 610 that powers the controller 606 and the electromagnetic field transmitter 602. Alternatively, in other embodiments, this aspect may be used to reduce the size of the coils that generate the magnetic field at the electromagnetic field transmitter 602 (although, to obtain the same magnetic field strength, the power may need to be increased). In either case, this allows a compact electromagnetic field transmitter unit 602 to be used at the handheld controller 606.
Several other changes may be made when an electromagnetic tracking system is used for an AR device. Although the pose reporting rate of such a system is reasonably good, AR systems may require an even more efficient pose reporting rate. To this end, IMU-based pose tracking may be used in the sensors. Advantageously, the IMUs should remain as stable as possible to improve the efficiency of the pose detection process. The IMUs may be designed to remain stable for periods of up to 50 to 100 milliseconds. It should be understood that some embodiments may utilize an outside pose estimator module (since IMUs may drift over time) that enables pose updates to be reported at a rate of 10 to 20 Hz. By keeping the IMUs stable at a reasonable rate, the rate of pose updates can be significantly reduced to 10 to 20 Hz (compared to the higher frequencies used in conventional systems).
Another way to save power at the AR system is to run the electromagnetic tracking system at a 10% duty cycle (e.g., only pinging for ground truth once every 100 milliseconds). This means that the electromagnetic tracking system wakes up for 10 milliseconds out of every 100 milliseconds to produce a pose estimate. This translates directly into power savings, which in turn may affect the size, battery life, and cost of the AR device.
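A minimal sketch of this duty-cycling idea follows, assuming the 10 ms-on / 100 ms-window numbers above: the electromagnetic system supplies a ground-truth pose while it is awake, and IMU integration (which may drift) fills the gaps between pings. The 1-D "pose" and callables are illustrative stand-ins, not part of the patent.

```python
# Illustrative 10% duty cycle: EM ground truth while awake, IMU propagation otherwise.
import itertools

def pose_stream(em_ground_truth, imu_delta, window_ms=100, active_ms=10, step_ms=10):
    """Yield (t_ms, pose) over time."""
    pose = None
    t = 0
    while True:
        if t % window_ms < active_ms:      # EM transmitter awake: take ground truth
            pose = em_ground_truth(t)
        elif pose is not None:             # EM asleep: propagate with the IMU (may drift)
            pose = pose + imu_delta(t)
        yield t, pose
        t += step_ms

# Toy 1-D "pose": ground truth is 0.0; the IMU drifts by +0.01 per step between pings.
for t, p in itertools.islice(pose_stream(lambda t: 0.0, lambda t: 0.01), 12):
    print(t, p)
```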
In one or more embodiments, this reduction in duty cycle may be exploited strategically by providing two handheld controllers (not shown) instead of only one. For example, the user may be playing a game that requires two totems, or, in a multi-user game, two users may each have their own totem/handheld controller. When two controllers (e.g., symmetric controllers for each hand) are used instead of one, the controllers may operate at offset duty cycles, as sketched below. The same concept may also be applied to controllers used by two different users playing a multiplayer game.
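The following sketch offsets the duty cycles of two handheld emitters so that only one pings at a time; the 10 ms-on / 100 ms-window values follow the example above and the half-window offset is an assumption for illustration.

```python
# Offset duty cycles for two handheld emitters (illustrative numbers).

def is_active(t_ms: int, offset_ms: int, window_ms: int = 100, active_ms: int = 10) -> bool:
    return (t_ms - offset_ms) % window_ms < active_ms

for t in range(0, 200, 10):
    a = is_active(t, offset_ms=0)    # controller A pings in the first 10 ms of each window
    b = is_active(t, offset_ms=50)   # controller B is offset by half a window
    assert not (a and b)             # the two emitters never transmit simultaneously
    print(t, a, b)
```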
Referring now to FIG. 7, an exemplary flow chart describing an electromagnetic tracking system in the context of an AR device is described. At 702, the handheld controller emits a magnetic field. At 704, electromagnetic sensors (placed on a headset, a belt pack, etc.) detect the magnetic field. At 706, a position and orientation of the headset/belt pack is determined based on the behavior of the coils/IMUs at the sensors. At 708, the pose information is transmitted to a computing device (e.g., located at the belt pack or the head-mounted device). At 710, optionally, a mapping database (e.g., a passable world model) may be consulted to associate real world coordinates with virtual world coordinates. At 712, virtual content may be delivered to the user at the AR headset. It should be understood that the above-described flow chart is for illustration purposes only and should not be construed as limiting.
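An end-to-end sketch of this flow is shown below. The function names, the `pose.key()` helper, and the data shapes are assumptions introduced for illustration only; they are not APIs from the patent.

```python
# Hypothetical sketch of the FIG. 7 flow (steps 702-712).

def tracking_frame(emit_field, read_sensors, solve_pose, mapping_db, render):
    emit_field()                                   # 702: handheld emits a known magnetic field
    measurements = read_sensors()                  # 704: head-worn sensors measure the flux
    pose = solve_pose(measurements)                # 706/708: pose computed, sent to compute pack
    world_coords = mapping_db.get(pose.key()) if mapping_db else None  # 710: optional map lookup
    render(pose, world_coords)                     # 712: virtual content displayed at the headset
    return pose
```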
Advantageously, pose tracking (e.g., head position and orientation, and totem and other controller positions and orientations) is enabled using an electromagnetic tracking system similar to the one outlined in FIG. 6. This allows the AR system to project virtual content with higher accuracy and very low latency compared to optical tracking techniques.
Referring now to FIG. 8, a system configuration is illustrated that features a number of sensing components. The head-worn wearable component (58) is shown operatively coupled (68) to a local processing and data module (70), such as a belt pack, here via a physical multi-wire lead that also features the control and quick release module (86) described below with reference to FIGS. 9A-9F. The local processing and data module (70) is operatively coupled (100) to the handheld component (606), here by a wireless connection such as low-power Bluetooth; the handheld component (606) may also be operatively coupled (94) directly to the head-worn wearable component (58), for example by a wireless connection such as low-power Bluetooth. Generally, where IMU data is communicated to coordinate pose detection of the various components, a high-frequency connection is desirable, for example in the range of hundreds or thousands of cycles per second or higher; tens of cycles per second may be adequate for electromagnetic position sensing, such as by the sensor (604) and transmitter (602) pairing. Also shown is a global coordinate system (10), representative of fixed objects in the real world around the user, such as a wall (8). Cloud resources (46) may also be operatively coupled (42, 40, 88, 90) to the local processing and data module (70), to the head-worn wearable component (58), and to resources that may be coupled to the wall (8) or other items fixed relative to the global coordinate system (10), respectively. The resources coupled to the wall (8), or having known positions and/or orientations relative to the global coordinate system (10), may include a WiFi transceiver (114), an electromagnetic transmitter (602) and/or receiver (604), a beacon or reflector (112) configured to emit or reflect a given type of radiation, such as an infrared LED beacon, a cellular network transceiver (110), a radar emitter or detector (108), a lidar emitter or detector (106), a GPS transceiver (118), a poster or marker having a known detectable pattern (122), and a camera (124). The head-worn wearable component (58) features similar components as illustrated, in addition to light emitters (130) configured to assist the camera (124) detectors, such as infrared emitters (130) for an infrared camera (124); also featured on the head-worn wearable component (58) are one or more strain gauges (116), which may be fixedly coupled to the frame or mechanical platform of the head-worn wearable component (58) and configured to determine deflection of such platform between components such as electromagnetic receiver sensors (604) or display elements (62), where it may be valuable to know whether the platform has flexed, such as at a thinner portion of the platform (for example the portion above the nose on the eyeglasses-like platform shown in FIG. 8). The head-worn wearable component (58) also features a processor (128) and one or more IMUs (102). Each of the components is preferably operatively coupled to the processor (128). The handheld component (606) and the local processing and data module (70) are illustrated as featuring similar components. As shown in FIG. 8, with so many sensing and connectivity devices, such a system is likely to be heavy, power hungry, large, and relatively expensive. However, for illustrative purposes, such a system may be utilized to provide a very high level of connectivity, system component integration, and position/orientation tracking.
For example, with such a configuration, the various primary movable components (58, 70, 606) may be located in terms of position relative to a global coordinate system using WiFi, GPS, or cellular signal triangulation; beacons, electromagnetic tracking (as described above), radar, and lidar systems may provide even further position and/or orientation information and feedback. The markers and cameras may also be used to provide further information about relative and absolute position and orientation. For example, various camera components (124), such as those shown coupled to the head-mounted wearable component (58), may be used to capture data that may be used in a simultaneous localization and mapping protocol or "SLAM" to determine where the component (58) is located and how the component (58) is oriented relative to other components.
Referring now to FIGS. 9A through 9F, various aspects of the control and quick release module (86) are illustrated. Referring to FIG. 9A, two housing components are coupled together using a magnetic coupling configuration that may be enhanced with mechanical locking. Buttons (136) for operating the associated system may be included. FIG. 9B illustrates a partial cutaway view showing the buttons (136) and the underlying top printed circuit board (138). Referring to FIG. 9C, with the buttons (136) and underlying top printed circuit board (138) removed, a female contact pin array (140) is visible. Referring to FIG. 9D, with the opposite portion of the housing (134) removed, the lower printed circuit board (142) is visible. With the lower printed circuit board (142) removed, as shown in FIG. 9E, a male contact pin array (144) is visible. Referring to the cross-sectional view of FIG. 9F, at least one of the male pins or the female pins is configured to be spring-loaded such that it may be depressed along its longitudinal axis; these pins may be referred to as "spring pins" and generally comprise a highly conductive material such as copper or gold. When assembled, the illustrated configuration mates the 46 male pins with their female counterparts, and the entire assembly may be quick-release decoupled in half by manually pulling it apart and overcoming the load of the magnetic interface (146), which may be created using north and south magnets oriented around the perimeter of the pin arrays (140, 144). In one embodiment, a load of approximately 2 kg created by compressing the 46 spring pins is countered by a closure holding force of approximately 4 kg. The pins in the arrays may be separated by approximately 1.3 mm, and the pins may be operatively coupled to various types of conductors, such as twisted pairs or other combinations, to support interfaces such as USB 3.0, HDMI 2.0, I2S signals, GPIO, and MIPI configurations, with high-current analog lines and grounds configured for up to approximately 4 amps / 5 volts in one embodiment.
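The small worked calculation below uses only the figures quoted in the text (46 pins, roughly 2 kg of total spring load, roughly 4 kg of closure force) to show the implied per-pin load and the holding margin of the magnetic interface.

```python
# Worked arithmetic for the quick-release interface described above.
total_pin_load_kg = 2.0   # compression load of the 46 spring pins (from the text)
closure_force_kg = 4.0    # magnetic closure holding force (from the text)
num_pins = 46

per_pin_load_g = total_pin_load_kg * 1000 / num_pins
margin_kg = closure_force_kg - total_pin_load_kg

print(f"per-pin spring load ~= {per_pin_load_g:.0f} g")   # roughly 43 g per pin
print(f"closure margin      ~= {margin_kg:.1f} kg")       # roughly 2 kg holding margin
```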
Referring to FIG. 10, to minimize the weight and volume of the various components and to arrive at a relatively slim head-mounted component (e.g., such as the component (58) of FIG. 10), it is helpful to have a minimized component/feature set. Thus, various permutations and combinations of the components illustrated in FIG. 8 may be used.
Referring to FIG. 11A, an electromagnetic sensing coil assembly (604; e.g., 3 individual coils coupled to a housing) is shown coupled to the headset (58); such a configuration adds additional geometry to the overall assembly, which may be undesirable. Referring to FIG. 11B, rather than housing the coils in a box or single housing as in the configuration of FIG. 11A, the individual coils may be integrated into various structures of the headset (58), as shown in FIG. 11B. For example, the x-axis coil (148) may be placed in one portion of the headset (58) (e.g., the center of the frame). Similarly, the y-axis coil (150) may be placed in another portion of the headset (58) (e.g., either bottom side of the frame). Similarly, the z-axis coil (152) may be placed in yet another portion of the headset (58) (e.g., either top side of the frame).
FIGS. 12A-12E illustrate various configurations featuring a ferrite core coupled to an electromagnetic sensor to improve field sensitivity. Referring to FIG. 12A, the ferrite core may be a solid cube (1202). Although the solid cube (1202) may be most effective in improving field sensitivity, it may also be the heaviest of the configurations shown in FIGS. 12A through 12E. Referring to FIG. 12B, a plurality of ferrite disks (1204) may be coupled to the electromagnetic sensor. Similarly, referring to FIG. 12C, a solid cube with a single-axis air core (1206) may be coupled to the electromagnetic sensor. As shown in FIG. 12C, an open space (i.e., an air core) may be formed in the solid cube along one axis. This may reduce the weight of the cube while still providing the necessary field sensitivity. In yet another embodiment, referring to FIG. 12D, a solid cube with a three-axis air core (1208) may be coupled to the electromagnetic sensor. In this configuration, the solid cube is hollowed out along all three axes, thereby significantly reducing the weight of the cube. Referring to FIG. 12E, ferrite rods (1210) with a plastic housing may also be coupled to the electromagnetic sensor. It should be appreciated that the embodiments of FIGS. 12B through 12E are lighter in weight than the solid-core configuration of FIG. 12A and, as noted above, may be used to save mass.
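The following sketch compares the mass of three of these core options under assumed dimensions and an assumed ferrite density (none of these numbers come from the patent); it simply illustrates how the air-core bores of FIGS. 12C and 12D trade a little core material for a meaningful weight reduction.

```python
# Illustrative mass comparison (assumed dimensions/density) for FIGS. 12A, 12C, 12D.
FERRITE_DENSITY = 4.8    # g/cm^3, a typical ferrite density (assumption)
SIDE = 1.0               # cm, assumed cube side
BORE = 0.4               # cm, assumed square bore width

solid = SIDE ** 3
one_axis = SIDE ** 3 - BORE * BORE * SIDE                        # one square bore through the cube
three_axis = SIDE ** 3 - 3 * BORE * BORE * SIDE + 2 * BORE ** 3  # inclusion-exclusion for 3 crossing bores

for name, vol in (("solid cube", solid), ("1-axis air core", one_axis), ("3-axis air core", three_axis)):
    print(f"{name:16s}: {vol * FERRITE_DENSITY:.2f} g")
```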
Referring to FIGS. 13A through 13C, time division multiplexing ("TDM") may also be used to save mass. For example, referring to FIG. 13A, a conventional local data processing configuration is shown for a 3-coil electromagnetic receiver sensor, in which the analog current from each of the X, Y, and Z coils (1302, 1304, 1306) enters a separate preamplifier (1308), band-pass filter (1310), and PA (1312), undergoes analog-to-digital conversion (1314), and ultimately reaches a digital signal processor (1316). Referring to the transmitter configuration of FIG. 13B and the receiver configuration of FIG. 13C, the hardware may be shared using time division multiplexing so that each coil sensor chain does not require its own amplifiers and the like. This may be accomplished through a TDM switch (1320), as shown in FIG. 13B, which facilitates processing of signals to and from multiple transmitters and receivers using the same set of hardware components (amplifiers, etc.). In addition to removing the sensor housings and using multiplexing to save hardware overhead, signal-to-noise ratio may be increased by having more than one set of electromagnetic sensors, each set being relatively small compared to a single larger coil set; furthermore, the low-side frequency limits, which generally require multiple sensing coils in close proximity, may be improved to facilitate bandwidth requirement improvements. There is also a tradeoff with multiplexing, because multiplexing generally spreads out the reception of radio frequency signals in time, which results in generally dirtier signals; thus, multiplexed systems may require larger coil diameters. For example, where a multiplexed system might require a cubic coil sensor box with 9 mm sides, a non-multiplexed system might only require a cubic coil box with 7 mm sides for similar performance; thus, there are tradeoffs in minimizing geometry and mass.
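The structural idea of FIGS. 13B-13C can be sketched as follows: one shared analog chain (amplifier, filter, ADC) is switched across the X/Y/Z coils in successive time slots rather than being duplicated per coil. The functions and dummy samplers are illustrative assumptions, not the patent's signal chain.

```python
# Sketch of TDM hardware sharing: one chain, three coils read in time slots.

def shared_chain(sample):
    """Stand-in for preamp -> band-pass filter -> ADC -> DSP on one sample."""
    return sample  # real processing omitted; the sharing structure is the point

def tdm_read(coil_samplers):
    """Read each coil in its own time slot through the single shared chain."""
    readings = {}
    for slot, (name, sampler) in enumerate(coil_samplers.items()):
        # During time slot `slot`, the TDM switch routes coil `name` to the shared hardware.
        readings[name] = shared_chain(sampler())
    return readings

coils = {"x": lambda: 0.91, "y": lambda: 0.12, "z": lambda: 0.07}  # dummy samplers
print(tdm_read(coils))
```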
In another embodiment, where a particular system component, such as a head-mounted component (58), has two or more electromagnetic coil sensor sets, the system may be configured to selectively utilize the sensor and transmitter pairs that are closest to each other to optimize the performance of the system.
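A minimal sketch of that selection step follows; the coordinates are assumed inputs, and in practice the "distance" might itself be estimated from field strength rather than known positions.

```python
# Sketch: choose the sensor/emitter pairing with the smallest separation.
import math

def closest_pair(sensor_positions, emitter_positions):
    """Return (sensor_index, emitter_index) of the pair with minimum separation."""
    pairs = ((s, e) for s in range(len(sensor_positions)) for e in range(len(emitter_positions)))
    return min(pairs, key=lambda p: math.dist(sensor_positions[p[0]], emitter_positions[p[1]]))

sensors = [(0.0, 0.1, 0.0), (0.0, -0.1, 0.0)]   # e.g., left/right coil sets on the headset
emitters = [(0.3, -0.2, -0.4)]                  # e.g., one handheld emitter
print(closest_pair(sensors, emitters))
```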
Referring to FIG. 14, in one embodiment, after the user powers up his or her wearable computing system (160), the head-mounted assembly may capture a combination of IMU and camera data (the camera data being used, for example, for SLAM analysis, such as at a belt pack processor where there may be more raw processing horsepower) to determine and update head pose (i.e., position and orientation) relative to a real-world global coordinate system (162). The user may also activate a handheld component to, for example, play an augmented reality game (164), and the handheld component may include an electromagnetic transmitter (166) operatively coupled to one or both of the belt pack and the head-mounted component. One or more electromagnetic field coil receiver sets (e.g., 3 differently-oriented individual coils per set) coupled to the head-mounted component capture magnetic flux from the transmitter, which may be used to determine a difference in position or orientation (or "delta") between the head-mounted component and the handheld component (168). The combination of the head-mounted component assisting in determining pose relative to the global coordinate system and the handheld component assisting in determining the relative position and orientation of the handheld component relative to the head-mounted component allows the system to generally determine where each component is relative to the global coordinate system, so that the user's head pose and handheld pose may be tracked, preferably with relatively low latency, for rendering augmented reality image features and interactions using movements and rotations of the handheld component (170).
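One way to read this combination is as a pose composition: a global head pose (from SLAM/IMU) composed with a head-to-handheld "delta" (from electromagnetic tracking) yields an approximate global handheld pose. The sketch below uses 4x4 homogeneous matrices with made-up numbers; it is an illustration of the composition step, not the patent's solver.

```python
# Sketch of pose composition: world_T_hand = world_T_head @ head_T_hand.
import numpy as np

def make_pose(yaw_deg: float, translation) -> np.ndarray:
    """Build a 4x4 pose with a rotation about the vertical axis plus a translation."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    pose = np.eye(4)
    pose[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    pose[:3, 3] = translation
    return pose

world_T_head = make_pose(30.0, [1.0, 2.0, 1.6])    # head pose in the global frame (SLAM/IMU)
head_T_hand = make_pose(-10.0, [0.2, -0.1, -0.4])  # handheld relative to the head (EM "delta")
world_T_hand = world_T_head @ head_T_hand          # handheld pose in the global frame
print(world_T_hand[:3, 3])                         # global position of the handheld
```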
Referring to fig. 15, an embodiment somewhat similar to that of fig. 14 is illustrated, except that the system has more sensing devices and configurations available to help determine the pose of the head-mounted (172) and hand-held (176, 178) components, so the user's head pose and hand-held pose can be tracked, preferably with relatively low latency, to render augmented reality image features and interactions using the movements and rotations of the hand-held components (180).
Specifically, after the user powers up (160) his or her wearable computing system, the headset captures a combination of IMU and camera data for SLAM analysis in order to determine and update head gestures relative to a real world global coordinate system. The system may be further configured to detect the presence of other localization resources in the environment, such as Wi-Fi, cellular, beacon, radar, lidar, GPS, markers, and/or other cameras (172) that may be associated with aspects of the global coordinate system or one or more movable components.
The user may also activate the handheld component to, for example, play an augmented reality game (174), and the handheld component may include an electromagnetic transmitter (176) operably coupled to one or both of the belt pack and the head-mounted component. Other positioning resources may also be used in a similar manner. One or more sets of electromagnetic field coil receivers (e.g., 3 differently oriented individual coils per set) coupled with the headset may be used to capture magnetic flux from the electromagnetic transmitter. This captured magnetic flux may be used to determine a difference (or "delta") in position or orientation between the head-mounted component and the hand-held component (178).
Accordingly, the head pose and hand-held pose of the user may be tracked with relatively low latency in order to render AR content and/or interactions with the AR system using movement or rotation of the hand-held component (180).
Referring to FIGS. 16A and 16B, aspects of a configuration similar to that of FIG. 8 are illustrated. The configuration of FIG. 16A differs from that of FIG. 8 in that, in addition to a lidar-type (106) depth sensor, the configuration of FIG. 16A features, for illustrative purposes, a generic depth camera or depth sensor (154), which may be, for example, a stereo triangulation style depth sensor (such as a passive stereo depth sensor, a texture projection stereo depth sensor, or a structured light stereo depth sensor) or a time-of-flight style depth sensor (such as a lidar depth sensor or a modulated emission depth sensor); further, the configuration of FIG. 16A has an additional forward-facing "world" camera (124, which may be a grayscale camera with a sensor capable of 720p range resolution) as well as a relatively high-resolution "picture camera" (156, which may be a full-color camera with, for example, a sensor capable of 2 megapixel or higher resolution). FIG. 16B illustrates a partial orthogonal view of the configuration of FIG. 16A for illustrative purposes, and is described further below.
Referring back to FIG. 16A and the stereo and time-of-flight depth sensors mentioned above, each of these depth sensor types may be used with the wearable computing solution disclosed herein, although each has various advantages and disadvantages. For example, many depth sensors are challenged by black surfaces and by shiny or reflective surfaces. Passive stereo depth sensing is a relatively simple way of obtaining triangulation for calculating depth with a depth camera or sensor, but it may be challenged if a wide field of view ("FOV") is required and may require relatively significant computing resources; furthermore, this sensor type may have challenges with edge detection, which may be important for the particular use case at hand. Passive stereo may also have challenges with texture-less walls, low-light conditions, and repeated patterns. Passive stereo depth sensors are available from manufacturers such as Intel (RTM) and Aquifi (RTM). Stereo with texture projection (also known as "active stereo") is similar to passive stereo, but a texture projector broadcasts a projected pattern onto the environment, and the more texture that is broadcast, the greater the accuracy available in triangulating for depth calculation. Active stereo may also require relatively high computing resources, present challenges when a wide FOV is required, and be somewhat suboptimal at detecting edges, but it does address some of the challenges of passive stereo in that it is effective with texture-less walls, performs well in low light, and generally does not have problems with repeated patterns. Active stereo depth sensors are available from manufacturers such as Intel (RTM) and Aquifi (RTM). Stereo with structured light (such as the systems developed by Primesense, Inc. (RTM) and available under the trade name Kinect (RTM), as well as the systems available from Mantis Vision, Inc. (RTM)) generally uses a single camera/projector pairing, and the projector is specialized in that it is configured to broadcast a pattern of dots that is known a priori. In essence, the system knows the pattern being broadcast and knows that the variable to be determined is depth. Such a configuration may be relatively efficient in terms of computing load, and may be challenged in wide-FOV requirement scenarios as well as scenarios with ambient light and patterns broadcast from other nearby devices, but can be quite efficient and effective in many scenarios. With modulated time-of-flight type depth sensors (such as those available from PMD Technologies A.G. (RTM) and SoftKinetic Inc. (RTM)), the emitter may be configured to send out a wave of amplitude-modulated light, such as a sine wave; a camera component, which may be positioned nearby or even overlapping in some configurations, receives a return signal on each of its pixels, and a depth map may be determined/calculated. Such a configuration may be relatively compact in geometry and high in accuracy with low computing load, but may be challenged in terms of image resolution (such as at edges of objects) and multipath errors (such as where the sensor is aimed at a reflective or shiny corner and the detector ends up receiving more than one return path, so that there is some depth detection aliasing). Direct time-of-flight sensors (which also may be referred to as the aforementioned lidar) are available from suppliers such as LuminAR (RTM) and Advanced Scientific Concepts, Inc. (RTM).
With these direct time-of-flight configurations, a pulse of light (e.g., a picosecond-, nanosecond-, or femtosecond-long pulse) is typically sent out to bathe the world oriented around it with light; each pixel of a camera sensor then waits for that pulse to return and, knowing the speed of light, the distance at each pixel may be calculated. Such a configuration may have many of the advantages of the modulated time-of-flight sensor configuration (no baseline, relatively wide FOV, high accuracy, relatively low computing load, etc.) as well as relatively high frame rates, for example up to tens of thousands of hertz. They may also be relatively expensive, have relatively low resolution, be sensitive to bright light, and be susceptible to multipath errors; they may also be relatively large and heavy.
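The per-pixel distance calculation follows directly from the round-trip time and the speed of light, d = c·t/2. The short worked example below uses a few assumed round-trip times to show the scale involved.

```python
# Worked example of the direct time-of-flight relation: d = c * t / 2.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

for t_ns in (1.0, 10.0, 33.0):  # assumed round-trip times in nanoseconds
    print(f"round trip {t_ns:5.1f} ns -> {tof_distance(t_ns * 1e-9):6.3f} m")
```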
Referring to FIG. 16B, a partial top view is shown for illustrative purposes featuring a user's eye (12) and a camera (14, such as an infrared camera) having fields of view (28, 30), with a light or radiation source (16, e.g., infrared) directed at the eye (12) to facilitate eye tracking, observation, and/or image capture. The three outward-facing world-capture cameras (124) are shown with their FOVs (18, 20, 22), as is the depth camera (154) with its FOV (24) and the picture camera (156) with its FOV (26). The depth information obtained from the depth camera (154) may be bolstered by using the overlapping FOVs and data from the other forward-facing cameras. For example, the system may end up with a sub-VGA image from the depth sensor (154), 720p images from the world cameras (124), and occasionally a 2 megapixel color image from the picture camera (156). Such a configuration has five cameras sharing a common FOV: three with heterogeneous visible-spectrum images, one with color, and one with relatively low-resolution depth. The system may be configured to segment the grayscale and color images, fuse those images and make a relatively high-resolution image from them, obtain some stereo correspondences, use the depth sensor to provide hypotheses about stereo depth, and use the stereo correspondences to obtain a more refined depth map that is significantly better than one obtained from the depth sensor alone. Such processes may run on local mobile processing hardware, or may run using cloud computing resources, possibly along with data from others in the area (such as two people sitting across a table from each other nearby), and ultimately result in a reasonably refined mapping. In another embodiment, all of the above sensors may be combined into one integrated sensor to accomplish such functionality.
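A toy sketch of the fusion idea follows: a coarse depth hypothesis from the depth sensor constrains the disparity search for stereo correspondence, and the best match within that narrow window yields a refined depth per pixel. The focal length, baseline, search window, 1-pixel matching cost, and synthetic images are all assumptions chosen to keep the example tiny, not values or methods from the patent.

```python
# Sketch: refine a coarse depth-sensor map with a narrow stereo disparity search.
import numpy as np

def refine_depth(coarse_depth, left, right, focal_px=600.0, baseline_m=0.06, search=3):
    """For each pixel, search disparities near the coarse hypothesis and keep the best match."""
    h, w = coarse_depth.shape
    refined = coarse_depth.copy()
    for y in range(h):
        for x in range(w):
            d0 = int(round(focal_px * baseline_m / max(coarse_depth[y, x], 1e-3)))
            best_cost, best_d = np.inf, d0
            for d in range(max(1, d0 - search), d0 + search + 1):
                if x - d < 0:
                    continue
                cost = abs(float(left[y, x]) - float(right[y, x - d]))  # 1-pixel cost, for brevity
                if cost < best_cost:
                    best_cost, best_d = cost, d
            refined[y, x] = focal_px * baseline_m / best_d
    return refined

# Tiny synthetic example: constant-depth scene with a horizontally shifted right image.
left = np.tile(np.arange(32, dtype=np.float32), (8, 1))
right = np.roll(left, -4, axis=1)                 # true disparity = 4 pixels
coarse = np.full((8, 32), 600.0 * 0.06 / 5.0)     # depth-sensor hypothesis: disparity ~5
print(refine_depth(coarse, left, right)[4, 16])   # refined depth for disparity 4 (= 9.0 m)
```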
Various exemplary embodiments of the present invention are described herein. Reference is made to these examples in a non-limiting sense. These examples are provided to illustrate the broader application aspects of the present invention. Various changes may be made to the invention described and equivalents may be substituted without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process action or steps, to the objective(s), spirit or scope of the present invention. Furthermore, as will be understood by those of skill in the art, each of the various modifications described and illustrated herein has discrete components and features which may be readily separated from or combined with any of the features of the other several embodiments without departing from the scope or spirit of the present invention. All such modifications are intended to be within the scope of the claims associated with this disclosure.
The invention includes methods that may be performed using the subject devices. The method may include the act of providing such a suitable device. Such provisioning may be performed by the end user. In other words, the act of "providing" merely requires the end user to obtain, access, approach, locate, set, activate, turn on, or otherwise provide the necessary means in the method. The methods described herein may be performed in any order of the described events that is logically possible, as well as in the order of the events that are described.
Exemplary aspects of the invention and details regarding material selection and fabrication have been set forth above. Additional details regarding the present invention can be found in conjunction with the above-referenced patents and publications and as generally known or understood by those skilled in the art. Aspects regarding the underlying method according to the invention may also hold with respect to additional actions that are commonly or logically utilized.
In addition, while the invention has been described with reference to several examples that optionally incorporate various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described, and equivalents may be substituted (whether recited herein or not included for the sake of brevity), without departing from the true spirit and scope of the invention. Further, where a range of values is provided, it is understood that every intervening value between the upper and lower limit of that range, and any other stated or intervening value in that stated range, is encompassed within the invention.
Additionally, it is contemplated that any optional feature of the described variations of the invention may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that a plurality of the same item is present. More specifically, as used herein and in the associated claims, the singular forms "a," "an," "said," and "the" include plural referents unless the context clearly dictates otherwise. In other words, use of the articles allows for "at least one" of the subject item in the description above as well as in the claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as "solely," "only," and the like in connection with the recitation of claim elements, or the use of a "negative" limitation.
The term "comprising" in the claims associated with this disclosure should be allowed to include any additional elements without using such exclusive terminology, regardless of whether a given number of elements or added features are recited in such claims, may be considered to change the nature of the elements recited in the claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity.
The breadth of the present invention is not limited by the examples and/or subject specification provided, but is only limited by the scope of the claim language associated with this disclosure.

Claims (11)

1. An Augmented Reality (AR) display system comprising:
a hand-held controller comprising an electromagnetic field transmitter for transmitting a known magnetic field in a known coordinate system;
an electromagnetic sensor for measuring a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field;
a depth sensor for measuring a distance between the depth sensor and the electromagnetic field emitter in the known coordinate system;
an inertial measurement unit component configured to facilitate determination of a position and/or orientation of the electromagnetic field transmitter relative to other components;
a controller for determining pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor; and
a display system for displaying virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter,
wherein the electromagnetic field emitter and the electromagnetic sensor are both movable.
2. The AR display system of claim 1, further comprising a world capture camera and a picture camera,
wherein the depth sensor comprises a depth camera having a first field of view (FOV),
wherein the world capture camera has a second FOV that at least partially overlaps the first FOV,
wherein the picture camera has a third FOV that at least partially overlaps the first and second FOVs, and
wherein the depth camera, the world capture camera, and the picture camera are configured to capture respective first, second, and third images.
3. The AR display system of claim 2, wherein the controller is programmed to segment the second image and the third image.
4. The AR display system of claim 3, wherein the controller is programmed to fuse the second image and the third image after segmenting the second image and the third image to produce a fused image.
5. The AR display system of claim 1, further comprising an additional positioning resource to provide additional information, wherein the pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system is determined based at least in part on the parameter related to magnetic flux measured by the electromagnetic sensor, the distance measured by the depth sensor, and the additional information provided by the additional positioning resource.
6. The AR display system of claim 1, wherein the electromagnetic field emitter is coupled to a movable component of the AR display system.
7. The AR display system of claim 1, wherein the electromagnetic sensor is disposed at a headset, a belt pack, or the hand-held controller, and
wherein the depth sensor is disposed in the headset, the belt pack, or the hand-held controller.
8. The AR display system of claim 7, wherein the hand-held controller comprises a totem, a haptic device, or a gaming tool.
9. An augmented reality display system comprising:
a handheld component comprising an electromagnetic field emitter that emits a magnetic field;
a head-mounted component having a display system that displays virtual content to a user, the head-mounted component coupled to an electromagnetic sensor that measures a parameter related to a magnetic flux generated from the magnetic field at the electromagnetic sensor, wherein a head pose of the head-mounted component is known in a known coordinate system;
a depth sensor that measures a distance between the depth sensor and the electromagnetic field emitter in the known coordinate system;
an inertial measurement unit component configured to facilitate determination of a position and/or orientation of the electromagnetic field transmitter relative to other components; and
a controller communicatively coupled to the handheld component, the headset component, and the depth sensor, the controller receiving the parameter related to magnetic flux at the electromagnetic sensor from the headset component and the distance from the depth sensor,
wherein the controller determines a hand pose of the hand-held component based at least in part on the parameter related to magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor,
wherein the system modifies the virtual content displayed to the user based at least in part on the hand gesture, and
wherein the electromagnetic field emitter and the electromagnetic sensor are both movable.
10. The augmented reality display system of claim 9, wherein the electromagnetic sensor is disposed in the head-mounted component, a belt pack, or the hand-held component, and
wherein the depth sensor is disposed in the head-mounted component, the belt pack, or the hand-held component.
11. The augmented reality display system of claim 10, wherein the hand-held component comprises a totem, a haptic device, or a game tool.
CN201780010073.0A 2016-02-05 2017-02-06 System and method for augmented reality Active CN108700939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210650785.1A CN114995647A (en) 2016-02-05 2017-02-06 System and method for augmented reality

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201662292185P 2016-02-05 2016-02-05
US62/292,185 2016-02-05
US201662298993P 2016-02-23 2016-02-23
US62/298,993 2016-02-23
US15/062,104 US20160259404A1 (en) 2015-03-05 2016-03-05 Systems and methods for augmented reality
US15/062,104 2016-03-05
PCT/US2017/016722 WO2017136833A1 (en) 2016-02-05 2017-02-06 Systems and methods for augmented reality

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210650785.1A Division CN114995647A (en) 2016-02-05 2017-02-06 System and method for augmented reality

Publications (2)

Publication Number Publication Date
CN108700939A CN108700939A (en) 2018-10-23
CN108700939B true CN108700939B (en) 2022-07-05

Family

ID=59501080

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201780010073.0A Active CN108700939B (en) 2016-02-05 2017-02-06 System and method for augmented reality
CN202210650785.1A Pending CN114995647A (en) 2016-02-05 2017-02-06 System and method for augmented reality

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210650785.1A Pending CN114995647A (en) 2016-02-05 2017-02-06 System and method for augmented reality

Country Status (8)

Country Link
EP (1) EP3411779A4 (en)
JP (2) JP2019505926A (en)
KR (1) KR20180110051A (en)
CN (2) CN108700939B (en)
AU (1) AU2017214748B9 (en)
CA (1) CA3011377C (en)
IL (3) IL301449B1 (en)
WO (1) WO2017136833A1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3265866B1 (en) 2015-03-05 2022-12-28 Magic Leap, Inc. Systems and methods for augmented reality
US10180734B2 (en) 2015-03-05 2019-01-15 Magic Leap, Inc. Systems and methods for augmented reality
US10838207B2 (en) 2015-03-05 2020-11-17 Magic Leap, Inc. Systems and methods for augmented reality
CA3007367A1 (en) 2015-12-04 2017-06-08 Magic Leap, Inc. Relocalization systems and methods
IL294134B2 (en) 2016-08-02 2023-10-01 Magic Leap Inc Fixed-distance virtual and augmented reality systems and methods
US10812936B2 (en) 2017-01-23 2020-10-20 Magic Leap, Inc. Localization determination for mixed reality systems
KR20230149347A (en) 2017-03-17 2023-10-26 매직 립, 인코포레이티드 Mixed reality system with color virtual content warping and method of generating virtual content using same
EP3596703A1 (en) 2017-03-17 2020-01-22 Magic Leap, Inc. Mixed reality system with virtual content warping and method of generating virtual content using same
CA3054617A1 (en) 2017-03-17 2018-09-20 Magic Leap, Inc. Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
CN107807738B (en) * 2017-12-04 2023-08-15 成都思悟革科技有限公司 Head motion capturing system and method for VR display glasses
US10558260B2 (en) 2017-12-15 2020-02-11 Microsoft Technology Licensing, Llc Detecting the pose of an out-of-range controller
CN108269310A (en) * 2018-03-20 2018-07-10 公安部上海消防研究所 A kind of interactive exhibition system, method and device
WO2020023383A1 (en) 2018-07-23 2020-01-30 Magic Leap, Inc. Mixed reality system with virtual content warping and method of generating virtual content using same
EP3827584A4 (en) 2018-07-23 2021-09-08 Magic Leap, Inc. Intra-field sub code timing in field sequential displays
US11227435B2 (en) 2018-08-13 2022-01-18 Magic Leap, Inc. Cross reality system
JP7445642B2 (en) * 2018-08-13 2024-03-07 マジック リープ, インコーポレイテッド cross reality system
EP4286894A3 (en) 2018-09-05 2024-01-24 Magic Leap, Inc. Directed emitter/sensor for electromagnetic tracking in augmented reality systems
US11666203B2 (en) * 2018-10-04 2023-06-06 Biosense Webster (Israel) Ltd. Using a camera with an ENT tool
JP2022512600A (en) 2018-10-05 2022-02-07 マジック リープ, インコーポレイテッド Rendering location-specific virtual content anywhere
US11679014B2 (en) 2018-10-30 2023-06-20 Boston Scientific Scimed, Inc. Devices and methods for treatment of body lumens
US11353588B2 (en) * 2018-11-01 2022-06-07 Waymo Llc Time-of-flight sensor with structured light illuminator
US11347310B2 (en) * 2019-02-21 2022-05-31 Facebook Technologies, Llc Tracking positions of portions of a device based on detection of magnetic fields by magnetic field sensors having predetermined positions
DE102020110212A1 (en) * 2019-04-16 2020-10-22 Ascension Technology Corporation Position and orientation determination with a Helmholtz device
CN110223686A (en) * 2019-05-31 2019-09-10 联想(北京)有限公司 Audio recognition method, speech recognition equipment and electronic equipment
JP2022551735A (en) 2019-10-15 2022-12-13 マジック リープ, インコーポレイテッド Cross-reality system using wireless fingerprints
US11386627B2 (en) 2019-11-12 2022-07-12 Magic Leap, Inc. Cross reality system with localization service and shared location-based content
EP4073763A4 (en) 2019-12-09 2023-12-27 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
CN115427758A (en) 2020-02-13 2022-12-02 奇跃公司 Cross reality system with accurate shared map
WO2021163295A1 (en) 2020-02-13 2021-08-19 Magic Leap, Inc. Cross reality system with prioritization of geolocation information for localization
CN115398314A (en) 2020-02-13 2022-11-25 奇跃公司 Cross reality system for map processing using multi-resolution frame descriptors
CN111652261A (en) * 2020-02-26 2020-09-11 南开大学 Multi-modal perception fusion system
CN113539039B (en) * 2021-09-01 2023-01-31 山东柏新医疗制品有限公司 Training device and method for drug application operation of male dilatation catheter
CN114882773A (en) * 2022-05-24 2022-08-09 华北电力大学(保定) Magnetic field learning system based on Augmented Reality
KR20240047186A (en) * 2022-10-04 2024-04-12 삼성전자주식회사 Augmented reality apparatus and operating method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2142338C (en) * 1992-08-14 1999-11-30 John Stuart Bladen Position location system
CN101530325A (en) * 2008-02-29 2009-09-16 韦伯斯特生物官能公司 Location system with virtual touch screen
EP2887311A1 (en) * 2013-12-20 2015-06-24 Thomson Licensing Method and apparatus for performing depth estimation
CN205007551U (en) * 2015-08-19 2016-02-03 深圳游视虚拟现实技术有限公司 Human -computer interaction system based on virtual reality technology

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2358682A1 (en) * 1992-08-14 1994-03-03 British Telecommunications Public Limited Company Position location system
JP2001208529A (en) 2000-01-26 2001-08-03 Mixed Reality Systems Laboratory Inc Measuring apparatus, control method thereof and memory medium
US20010056574A1 (en) 2000-06-26 2001-12-27 Richards Angus Duncan VTV system
US20070155589A1 (en) * 2002-12-04 2007-07-05 Philip Feldman Method and Apparatus for Operatively Controlling a Virtual Reality Scenario with an Isometric Exercise System
JP2007134785A (en) 2005-11-08 2007-05-31 Konica Minolta Photo Imaging Inc Head mounted video display apparatus
IL195389A (en) * 2008-11-19 2013-12-31 Elbit Systems Ltd System and method for mapping a magnetic field
KR20090055803A (en) * 2007-11-29 2009-06-03 광주과학기술원 Method and apparatus for generating multi-viewpoint depth map, method for generating disparity of multi-viewpoint image
US8405680B1 (en) * 2010-04-19 2013-03-26 YDreams S.A., A Public Limited Liability Company Various methods and apparatuses for achieving augmented reality
JP6202981B2 (en) 2013-10-18 2017-09-27 任天堂株式会社 Information processing program, information processing apparatus, information processing system, and information processing method
US10203762B2 (en) * 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20150358539A1 (en) * 2014-06-06 2015-12-10 Jacob Catt Mobile Virtual Reality Camera, Method, And System


Also Published As

Publication number Publication date
AU2017214748A1 (en) 2018-08-02
JP2022002144A (en) 2022-01-06
CA3011377A1 (en) 2017-08-10
JP7297028B2 (en) 2023-06-23
CN114995647A (en) 2022-09-02
CN108700939A (en) 2018-10-23
IL301449A (en) 2023-05-01
JP2019505926A (en) 2019-02-28
EP3411779A4 (en) 2019-02-20
IL260614A (en) 2018-08-30
KR20180110051A (en) 2018-10-08
IL293782A (en) 2022-08-01
AU2017214748B9 (en) 2021-05-27
CA3011377C (en) 2024-05-14
EP3411779A1 (en) 2018-12-12
IL301449B1 (en) 2024-02-01
AU2017214748B2 (en) 2021-05-06
IL293782B1 (en) 2023-04-01
IL293782B2 (en) 2023-08-01
WO2017136833A1 (en) 2017-08-10

Similar Documents

Publication Publication Date Title
CN108700939B (en) System and method for augmented reality
US10678324B2 (en) Systems and methods for augmented reality
US11619988B2 (en) Systems and methods for augmented reality
US11531072B2 (en) Calibration of magnetic and optical sensors in a virtual reality or augmented reality display system
US11256090B2 (en) Systems and methods for augmented reality
NZ735802A (en) Traffic diversion signalling system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant