CN108700939A - System and method for augmented reality - Google Patents
- Publication number
- CN108700939A (application CN201780010073.0A)
- Authority
- CN
- China
- Prior art keywords
- display system
- camera
- depth
- depth sensor
- resource
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/23—Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
- A63F13/235—Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console using a wireless connection, e.g. infrared or piconet
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/211—Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/25—Output arrangements for video game devices
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/525—Changing parameters of virtual cameras
- A63F13/5255—Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Abstract
An augmented reality display system includes an electromagnetic field emitter to emit a known magnetic field in a known coordinate system. The system also includes an electromagnetic sensor to measure a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field. The system further includes a depth sensor to measure a distance in the known coordinate system. Moreover, the system includes a controller to determine pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system, based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor. In addition, the system includes a display system to display virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.
Description
Cross reference to related applications
This application claims the benefit of priority of U.S. Provisional Patent Application Serial No. 62/292,185, filed February 5, 2016, and U.S. Provisional Patent Application Serial No. 62/298,993, filed February 23, 2016. This application is a continuation-in-part of U.S. Patent Application Serial No. 15/062,104, filed March 5, 2016, which claims the benefit of priority of U.S. Provisional Patent Application Serial No. 62/128,993, filed March 5, 2015, and U.S. Provisional Patent Application Serial No. 62/292,185, filed February 5, 2016. This application is also related to U.S. Provisional Patent Application Serial No. 62/301,847, filed March 1, 2016. The entire contents of the above applications are incorporated herein by reference.
Technical field
The present disclosure relates to systems and methods for localizing the position and orientation of one or more objects in the context of augmented reality systems.
Background
Modern computing and display technologies have facilitated the development of systems for so-called "virtual reality" or "augmented reality" experiences, in which digitally reproduced images, or portions thereof, are presented to a user in a manner in which they seem to be, or may be perceived as, real. A virtual reality ("VR") scenario typically involves the presentation of digital or virtual image information without transparency to other actual real-world visual input; an augmented reality ("AR") scenario typically involves the presentation of digital or virtual image information as an enhancement of the visualization of the actual world around the user.
For example, referring to Figure 1, an augmented reality scene (4) is depicted wherein a user of AR technology sees a real-world park-like setting (6) featuring people, trees, buildings in the background, and a concrete platform (1120). In addition to these items, the user of the AR technology also perceives that he "sees" a robot statue (1110) standing upon the real-world platform (1120), and a flying cartoon-like avatar character (2), which appears to be a personification of a bumble bee, even though these elements (2, 1110) do not exist in the real world. As it turns out, the human visual perception system is extremely complex, and producing a comfortable, natural-feeling, rich presentation of virtual image elements amongst other virtual or real-world imagery elements is challenging.
For instance, head-worn AR displays (or helmet-mounted displays, or smart glasses) are typically at least loosely coupled to a user's head, and thus move when the user's head moves. If the user's head motions are detected by the display system, the data being displayed can be updated to take the change in head pose into account.
As an example, if a user wearing a head-worn display views a virtual representation of a three-dimensional (3D) object on the display and walks around the area where the 3D object appears, that 3D object can be re-rendered for each viewpoint, giving the user the perception that he or she is walking around an object that occupies real space. If the head-worn display is used to present multiple objects within a virtual space (for instance, a rich virtual world), measurements of head pose (i.e., the location and orientation of the user's head) can be used to re-render the scene to match the user's dynamically changing head location and orientation and provide an increased sense of immersion in the virtual space.
In AR systems, detection or calculation of head pose can facilitate the display system rendering virtual objects such that they appear to occupy space in the real world in a manner that makes sense to the user. In addition, detection of the position and/or orientation of a real object in relation to the user's head or AR system, such as a handheld device (which also may be referred to as a "totem"), haptic device, or other real physical object, can also facilitate the display system in presenting display information to the user to enable the user to interact with certain aspects of the AR system efficiently. As the user's head moves around in the real world, the virtual objects may be re-rendered as a function of head pose, such that the virtual objects appear to remain stable relative to the real world. At least for AR applications, placement of virtual objects in spatial relation to physical objects (e.g., presented to appear spatially proximate a physical object in two or three dimensions) may be a non-trivial problem.

For example, head movement may significantly complicate placement of virtual objects in a view of an ambient environment. Such is true whether the view is captured as an image of the ambient environment and then projected or displayed to the end user, or whether the end user perceives the view of the ambient environment directly. For instance, head movement will likely cause the field of view of the end user to change, which will likely require an update to where various virtual objects are displayed in the field of view of the end user. Additionally, head movements may occur within a large variety of ranges and speeds. Head movement speed may vary not only between different head movements, but within or across the range of a single head movement. For instance, head movement speed may initially increase (e.g., linearly or not) from a starting point, and may decrease as an ending point is reached, obtaining a maximum speed somewhere between the starting point and ending point of the head movement. Rapid head movements may even exceed the ability of a particular display or projection technology to render images that appear uniform and/or as smooth motion to the end user.
Head tracking accuracy and latency (i.e., the elapsed time between when the user moves his or her head and the time when the image gets updated and displayed to the user) have been challenges for VR and AR systems. Especially for display systems that fill a substantial portion of the user's visual field with virtual elements, it is critical that the accuracy of head tracking is high and that the overall system latency is very low, from the first detection of head motion to the updating of the light that is delivered by the display to the user's visual system. If the latency is high, the system can create a mismatch between the user's vestibular and visual sensory systems, and generate a user perception scenario that can lead to motion sickness or simulator sickness. If the system latency is high, the apparent location of virtual objects will appear unstable during rapid head motions.
In addition to head-worn display systems, other display systems can benefit from accurate and low-latency head pose detection. These include head-tracked display systems in which the display is not worn on the body of the user, but is, for example, mounted on a wall or other surface. The head-tracked display acts like a window onto a scene, and as a user moves his head relative to the "window", the scene is re-rendered to match the user's changing viewpoint. Other systems include head-worn projection systems, in which a head-worn display projects light onto the real world.
Additionally, in order to provide a realistic augmented reality experience, AR systems may be designed to be interactive with the user. For example, multiple users may play a ball game with a virtual ball and/or other virtual objects. One user may "catch" the virtual ball and throw it back to another user. In another embodiment, a first user may be provided with a totem (e.g., a real bat communicatively coupled to the AR system) to hit the virtual ball. In other embodiments, a virtual user interface may be presented to the AR user to allow the user to select one of many options. The user may use totems, haptic devices, wearable components, or simply touch a virtual screen to interact with the system.
Detecting the head pose and orientation of the user, and detecting the physical location of real objects in space, enable the AR system to display virtual content in an effective and enjoyable manner. However, although these capabilities are key to an AR system, they are difficult to achieve. In other words, the AR system must recognize the physical location of a real object (e.g., the user's head, a totem, a haptic device, a wearable component, the user's hand, etc.) and correlate the physical coordinates of the real object to the virtual coordinates corresponding to the one or more virtual objects being displayed to the user. This requires highly accurate sensors and sensor recognition systems that track the position and orientation of one or more objects at rapid rates. Current approaches do not perform localization at satisfactory speed or precision standards. Thus, there is a need for a better localization system in the context of AR and VR devices.
Summary
Embodiments of the present invention are directed to devices, systems, and methods for facilitating virtual reality and/or augmented reality interaction for one or more users.
In one embodiment, an augmented reality (AR) display system includes an electromagnetic field emitter to emit a known magnetic field in a known coordinate system. The system also includes an electromagnetic sensor to measure a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field. The system further includes a depth sensor to measure a distance in the known coordinate system. Moreover, the system includes a controller to determine pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system, based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor. In addition, the system includes a display system to display virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.
In one or more embodiments, the depth sensor is a passive stereo depth sensor.

In one or more embodiments, the depth sensor is an active depth sensor. The depth sensor may be a texture projection stereo depth sensor, a structured light projection stereo depth sensor, a time-of-flight depth sensor, a LIDAR depth sensor, or a modulated emission depth sensor.
In one or more embodiments, the depth sensor includes a depth camera having a first field of view (FOV). The AR display system may also include a world-capture camera, where the world-capture camera has a second FOV at least partially overlapping the first FOV. The AR display system may also include a picture camera, where the picture camera has a third FOV at least partially overlapping the first FOV and the second FOV. The depth camera, the world-capture camera, and the picture camera may have respective different first, second, and third resolutions. The first resolution of the depth camera may be sub-VGA, the second resolution of the world-capture camera may be 720p, and the third resolution of the picture camera may be 2 megapixels.
In one or more embodiments, the depth camera, the world-capture camera, and the picture camera are configured to capture respective first, second, and third images. The controller may be programmed to segment the second and third images. The controller may be programmed to, after segmenting the second and third images, fuse the second and third images to generate a fused image. Measuring the distance in the known coordinate system may include generating a hypothetical distance by analyzing the first image from the depth camera, and generating the distance by analyzing the hypothetical distance and the fused image. The depth camera, the world-capture camera, and the picture camera may form a single integrated sensor.
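To make this coarse-to-fine distance measurement concrete, the following is a minimal sketch. It is illustrative only and not the patented implementation: the segmentation is a toy threshold, the three images are assumed already rectified onto a common grid, and every function and variable name is invented for the example.

```python
import numpy as np

def segment(img):
    # Toy stand-in for a real segmentation (e.g., edge detection or
    # region growing): foreground = pixels brighter than the mean.
    return img > img.mean()

def measure_distance(depth_img, world_img, picture_img):
    """Coarse-to-fine distance estimate: a hypothetical distance from
    the (noisy, sub-VGA) depth camera is refined against a fused
    segmentation of the overlapping world-capture and picture images."""
    fused = segment(world_img) & segment(picture_img)   # fuse segmentations
    hypothesis = float(np.median(depth_img))            # coarse estimate
    if not fused.any():                                 # nothing to refine on
        return hypothesis
    return float(np.median(depth_img[fused]))           # refined estimate

# Example on synthetic data (all images on a common 48x64 grid):
rng = np.random.default_rng(0)
print(measure_distance(rng.uniform(1.0, 3.0, (48, 64)),
                       rng.uniform(0.0, 1.0, (48, 64)),
                       rng.uniform(0.0, 1.0, (48, 64))))
```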
In one or more embodiments, the AR display system also includes additional localization resources to provide additional information. The pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system may be determined based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor, the distance measured by the depth sensor, and the additional information provided by the additional localization resources.
In one or more embodiments, the additional localization resources may include a WiFi transceiver, an additional electromagnetic emitter, or an additional electromagnetic sensor. The additional localization resources may include a beacon. The beacon may emit radiation; the radiation may be infrared radiation, and the beacon may include an infrared LED. The additional localization resources may include a reflector, which may reflect radiation.

In one or more embodiments, the additional localization resources may include a cellular network transceiver, a RADAR emitter, a RADAR detector, a LIDAR emitter, a LIDAR detector, a GPS transceiver, a poster having a known detectable pattern, a marker having a known detectable pattern, an inertial measurement unit, or a strain gauge.
In one or more embodiments, the electromagnetic field emitter is coupled to a movable component of the AR display system. The movable component may be a handheld component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the electromagnetic field emitter is coupled to an object in the known coordinate system, such that the electromagnetic field emitter has a known position and a known orientation. The electromagnetic sensor may be coupled to a movable component of the AR display system. The movable component may be a handheld component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the pose information includes a position and an orientation of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system. The controller may analyze the pose information to determine a position and an orientation of the electromagnetic sensor in the known coordinate system.
In another embodiment, a method for displaying augmented reality includes emitting, using an electromagnetic field emitter, a known magnetic field in a known coordinate system. The method also includes measuring, using an electromagnetic sensor, a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field. The method further includes measuring, using a depth sensor, a distance in the known coordinate system. Moreover, the method includes determining pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to the magnetic flux measured using the electromagnetic sensor and the distance measured using the depth sensor. In addition, the method includes displaying virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.
In one or more embodiments, the depth sensor is a passive stereo depth sensor.

In one or more embodiments, the depth sensor is an active depth sensor. The depth sensor may be a texture projection stereo depth sensor, a structured light projection stereo depth sensor, a time-of-flight depth sensor, a LIDAR depth sensor, or a modulated emission depth sensor.

In one or more embodiments, the depth sensor includes a depth camera having a first field of view (FOV). The depth sensor may also include a world-capture camera, where the world-capture camera has a second FOV at least partially overlapping the first FOV. The depth sensor may also include a picture camera, where the picture camera has a third FOV at least partially overlapping the first FOV and the second FOV. The depth camera, the world-capture camera, and the picture camera may have respective different first, second, and third resolutions. The first resolution of the depth camera may be sub-VGA, the second resolution of the world-capture camera may be 720p, and the third resolution of the picture camera may be 2 megapixels.
In one or more embodiments, the method also includes capturing respective first, second, and third images using the depth camera, the world-capture camera, and the picture camera. The method may also include segmenting the second and third images. The method may further include, after segmenting the second and third images, fusing the second and third images to generate a fused image. Measuring the distance in the known coordinate system may include generating a hypothetical distance by analyzing the first image from the depth camera, and generating the distance by analyzing the hypothetical distance and the fused image. The depth camera, the world-capture camera, and the picture camera may form a single integrated sensor.
In one or more embodiments, the method also includes determining the pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system based at least in part on the parameter related to the magnetic flux measured using the electromagnetic sensor, the distance measured using the depth sensor, and additional information provided by additional localization resources.
In one or more embodiments, the additional localization resources may include a WiFi transceiver, an additional electromagnetic emitter, or an additional electromagnetic sensor. The additional localization resources may include a beacon. The method may also include the beacon emitting radiation; the radiation may be infrared radiation, and the beacon may include an infrared LED. The additional localization resources may include a reflector, and the method may also include the reflector reflecting radiation.

In one or more embodiments, the additional localization resources may include a cellular network transceiver, a RADAR emitter, a RADAR detector, a LIDAR emitter, a LIDAR detector, a GPS transceiver, a poster having a known detectable pattern, a marker having a known detectable pattern, an inertial measurement unit, or a strain gauge.
In one or more embodiments, the electromagnetic field emitter is coupled to a movable component of an AR display system. The movable component may be a handheld component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the electromagnetic field emitter is coupled to an object in the known coordinate system, such that the electromagnetic field emitter has a known position and a known orientation. The electromagnetic sensor may be coupled to a movable component of an AR display system. The movable component may be a handheld component, a totem, a head-mounted component housing the display system, a torso-worn component, or a belt pack.
In one or more embodiments, the pose information includes a position and an orientation of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system. The method may also include analyzing the pose information to determine a position and an orientation of the electromagnetic sensor in the known coordinate system.
In still another embodiment, an augmented reality display system includes a handheld component coupled to an electromagnetic field emitter, the electromagnetic field emitter emitting a magnetic field. The system also includes a head-mounted component having a display system that displays virtual content to a user. The head-mounted component is coupled to an electromagnetic sensor that measures a parameter related to a magnetic flux at the electromagnetic sensor resulting from the magnetic field, where a head pose of the head-mounted component in a known coordinate system is known. The system further includes a depth sensor that measures a distance in the known coordinate system. Moreover, the system includes a controller communicatively coupled to the handheld component, the head-mounted component, and the depth sensor. The controller receives the parameter related to the magnetic flux from the electromagnetic sensor at the head-mounted component and receives the distance from the depth sensor. The controller determines a hand pose of the handheld component based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor. The system modifies the virtual content displayed to the user based at least in part on the hand pose.
In one or more embodiments, the depth sensor is a passive stereo depth sensor.

In one or more embodiments, the depth sensor is an active depth sensor. The depth sensor may be a texture projection stereo depth sensor, a structured light projection stereo depth sensor, a time-of-flight depth sensor, a LIDAR depth sensor, or a modulated emission depth sensor.

In one or more embodiments, the depth sensor includes a depth camera having a first field of view (FOV). The AR display system may also include a world-capture camera, where the world-capture camera has a second FOV at least partially overlapping the first FOV. The AR display system may also include a picture camera, where the picture camera has a third FOV at least partially overlapping the first FOV and the second FOV. The depth camera, the world-capture camera, and the picture camera may have respective different first, second, and third resolutions. The first resolution of the depth camera may be sub-VGA, the second resolution of the world-capture camera may be 720p, and the third resolution of the picture camera may be 2 megapixels.
In one or more embodiments, the depth camera, the world-capture camera, and the picture camera are configured to capture respective first, second, and third images. The controller may be programmed to segment the second and third images, and, after segmenting them, to fuse the second and third images to generate a fused image. Measuring the distance in the known coordinate system may include generating a hypothetical distance by analyzing the first image from the depth camera, and generating the distance by analyzing the hypothetical distance and the fused image. The depth camera, the world-capture camera, and the picture camera may form a single integrated sensor.
In one or more embodiments, the AR display system also includes additional localization resources to provide additional information. The controller determines the hand pose of the handheld component based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor, the distance measured by the depth sensor, and the additional information provided by the additional localization resources.
In one or more embodiments, the additional localization resources may include a WiFi transceiver, an additional electromagnetic emitter, or an additional electromagnetic sensor. The additional localization resources may include a beacon. The beacon may emit radiation; the radiation may be infrared radiation, and the beacon may include an infrared LED. The additional localization resources may include a reflector, which may reflect radiation.

In one or more embodiments, the additional localization resources may include a cellular network transceiver, a RADAR emitter, a RADAR detector, a LIDAR emitter, a LIDAR detector, a GPS transceiver, a poster having a known detectable pattern, a marker having a known detectable pattern, an inertial measurement unit, or a strain gauge.
In one or more embodiments, the handheld component is a totem. The hand pose information may include a position and an orientation of the handheld component in the known coordinate system.
Additional and other objects, features, and advantages of the invention are described in the detailed description, figures, and claims.
Brief description of the drawings
The drawings illustrate the design and utility of various embodiments of the present invention. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. In order to better appreciate how the above-recited and other advantages and objects of various embodiments of the invention are obtained, a more detailed description of the present invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
Fig. 1 illustrates a plan view of an AR scene displayed to a user of an AR system, according to one embodiment.

Figs. 2A-2D illustrate various embodiments of wearable AR devices.

Fig. 3 illustrates an example embodiment of a wearable AR device interacting with one or more cloud servers of an AR system.

Fig. 4 illustrates an example embodiment of an electromagnetic tracking system.

Fig. 5 illustrates an example method of determining the position and orientation of a sensor, according to one example embodiment.

Fig. 6 illustrates an example embodiment of an AR system incorporating an electromagnetic tracking system.

Fig. 7 illustrates an example method of delivering virtual content to a user based on a detected head pose.

Fig. 8 illustrates a schematic view of various components of an AR system having an electromagnetic emitter and electromagnetic sensors, according to one embodiment.

Figs. 9A-9F illustrate various embodiments of a control and quick-release module.

Fig. 10 illustrates one simplified embodiment of a wearable AR device.

Figs. 11A and 11B illustrate various embodiments of placement of electromagnetic sensors on a head-mounted AR system.

Figs. 12A-12E illustrate various embodiments of a ferrite cube to be coupled to an electromagnetic sensor.

Figs. 13A-13C illustrate various embodiments of data processors for electromagnetic sensors.

Fig. 14 illustrates an example method of using an electromagnetic tracking system to detect head and hand poses.

Fig. 15 illustrates another example method of using an electromagnetic tracking system to detect head and hand poses.

Fig. 16A illustrates a schematic view of various components of an AR system having a depth sensor, an electromagnetic emitter, and electromagnetic sensors, according to another embodiment.

Fig. 16B illustrates a schematic view of various components and various fields of view of an AR system having a depth sensor, an electromagnetic emitter, and electromagnetic sensors, according to yet another embodiment.
Detailed description
Referring to Figs. 2A-2D, some general componentry options are illustrated. In the portions of the detailed description which follow the discussion of Figs. 2A-2D, various systems, subsystems, and components are presented for addressing the objectives of providing a high-quality, comfortably-perceived display system for human VR and/or AR.
As shown in Fig. 2A, an AR system user (60) is depicted wearing a head-mounted component (58) featuring a frame (64) structure coupled to a display system (62) positioned in front of the user's eyes. A speaker (66) is coupled to the frame (64) in the depicted configuration and positioned adjacent the ear canal of the user (in one embodiment, another speaker, not shown, is positioned adjacent the other ear canal of the user to provide stereo/shapeable sound control). The display (62) is operatively coupled (68), such as by a wired lead or wireless connectivity, to a local processing and data module (70), which may be mounted in a variety of configurations, such as fixedly attached to the frame (64), fixedly attached to a helmet or hat (80) as shown in the embodiment of Fig. 2B, embedded in headphones, removably attached to the torso (82) of the user (60) in a backpack-style configuration as shown in the embodiment of Fig. 2C, or removably attached to the hip (84) of the user (60) in a belt-coupling-style configuration as shown in the embodiment of Fig. 2D.
The local processing and data module (70) may comprise a power-efficient processor or controller, as well as digital memory (such as flash memory), both of which may be utilized to assist in the processing, caching, and storage of data: a) captured from sensors which may be operatively coupled to the frame (64), such as image capture devices (e.g., cameras), microphones, inertial measurement units, accelerometers, compasses, GPS units, radio devices, and/or gyros; and/or b) acquired and/or processed using the remote processing module (72) and/or remote data repository (74), possibly for passage to the display (62) after such processing or retrieval. The local processing and data module (70) may be operatively coupled (76, 78), such as via wired or wireless communication links, to the remote processing module (72) and remote data repository (74), such that these remote modules (72, 74) are operatively coupled to each other and available as resources to the local processing and data module (70).
In one embodiment, the remote processing module (72) may comprise one or more relatively powerful processors or controllers configured to analyze and process data and/or image information. In one embodiment, the remote data repository (74) may comprise a relatively large-scale digital data storage facility, which may be available through the internet or other networking configuration in a "cloud" resource configuration. In one embodiment, all data is stored and all computation is performed in the local processing and data module, allowing fully autonomous use from any remote modules.
Referring now to Fig. 3, a schematic illustrates coordination between cloud computing assets (46) and local processing assets, which may, for example, reside in a head-mounted component (58) coupled to the user's head (120) and in a local processing and data module (70) coupled to the user's belt (308; the component 70 may therefore also be termed a "belt pack" 70), as shown in Fig. 3. In one embodiment, the cloud (46) assets, such as one or more server systems (110), are operatively coupled (115), such as via wired or wireless networking (wireless being preferred for mobility, wired being preferred for certain high-bandwidth or high-data-volume transfers that may be desired), directly to (40, 42) one or both of the local computing assets, such as the processor and memory configurations coupled to the user's head (120) and belt (308) as described above. These computing assets local to the user may be operatively coupled to each other as well, via wired and/or wireless connectivity configurations (44), such as the wired coupling (68) discussed below in reference to Fig. 8. In one embodiment, to maintain a low-inertia and small-size subsystem mounted to the user's head (120), the primary transfer between the user and the cloud (46) may be via the link between the belt-mounted (308) subsystem and the cloud, with the head-mounted (120) subsystem primarily data-tethered to the belt-based (308) subsystem using wireless connectivity, such as ultra-wideband ("UWB") connectivity, as is currently employed, for example, in personal computing peripheral connectivity applications.
With efficient local and remote processing coordination, and an appropriate display device for a user, such as the user interface or user display system (62) shown in Fig. 2A, or variations thereof, aspects of one world pertinent to a user's current actual or virtual location may be transferred or "passed" to the user and updated in an efficient fashion. In other words, a map of the world may be continually updated at a storage location which may partially reside on the user's AR system and partially reside in cloud resources. The map (also referred to as a "passable world model") may be a large database comprising raster imagery, 3-D and 2-D points, parametric information, and other information about the real world. As more and more AR users continually capture information about their real environment (e.g., through cameras, sensors, IMUs, etc.), the map becomes more and more accurate and complete.
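As a rough illustration of what such a map might hold, the sketch below defines a minimal container for raster imagery, 3-D points, and parametric information. The field names and layout are assumptions made for this example, not the actual passable world model.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class TaggedImage:
    """A 2-D raster image tagged with the pose of the capturing camera."""
    pixels: np.ndarray   # raster imagery
    pose: np.ndarray     # 4x4 camera-to-world transform

@dataclass
class PassableWorldChunk:
    """One locale's slice of the map; in the described system such data
    resides partly on the user's AR device and partly in the cloud."""
    points_3d: np.ndarray = field(default_factory=lambda: np.empty((0, 3)))
    tagged_images: List[TaggedImage] = field(default_factory=list)
    parametric_info: Dict[str, float] = field(default_factory=dict)
```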
With a configuration as described above, wherein there is one world model that can reside on cloud computing resources and be distributed from there, such a world can be "passable" to one or more users in a relatively low-bandwidth form, preferable to trying to pass around real-time video data or the like. The augmented experience of a person standing near the statue (i.e., as shown in Fig. 1) may be informed by the cloud-based world model, a subset of which may be passed down to the person and the person's local display device to complete the view. A person sitting at a remote display device, which may be as simple as a personal computer sitting on a desk, can efficiently download that same section of information from the cloud and have it rendered on their display. Indeed, one person actually present in the park near the statue may take a remotely-located friend for a walk in that park, with the friend joining through virtual and augmented reality. The system will need to know where the street is, where the trees are, and where the statue is; with that information on the cloud, the joining friend can download aspects of the scenario from the cloud and then start walking along as an augmented reality local relative to the person who is actually in the park.
3-D points may be captured from the environment, and the pose (i.e., vector and/or origin position information relative to the world) of the cameras that captured those images or points may be determined, so that these points or images may be "tagged", or associated, with this pose information. Points captured by a second camera may then be utilized to determine the pose of the second camera. In other words, one can orient and/or localize a second camera based upon comparisons with tagged images from a first camera. This knowledge may then be utilized to extract textures, make maps, and create a virtual copy of the real world (because there are then two cameras around that are registered).
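The second-camera localization step described here is, in essence, a perspective-n-point problem. Below is a minimal sketch using OpenCV's general-purpose solvePnP, assuming the pose-tagged 3-D points from the first camera have already been matched to 2-D detections in the second camera's image; this is a generic formulation, not the specific method of this disclosure.

```python
import numpy as np
import cv2

def second_camera_pose(world_points, image_points, K):
    """world_points: (N, 3) points tagged in the world frame via the
    first camera; image_points: (N, 2) pixel coordinates of the same
    points seen by the second camera; K: (3, 3) camera intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(world_points.astype(np.float32),
                                  image_points.astype(np.float32),
                                  K.astype(np.float32), None)
    if not ok:
        raise RuntimeError("PnP failed; check the 2-D/3-D correspondences")
    R, _ = cv2.Rodrigues(rvec)         # rotation: world -> second camera
    center = (-R.T @ tvec).ravel()     # second camera's center, world frame
    return R, center
```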
So, at a base level, in one embodiment a person-worn system can be utilized to capture both 3-D points and the 2-D images that produced those points, and these points and images may be sent out to cloud storage and processing resources. They may also be cached locally with embedded pose information (i.e., cache the tagged images), so that the cloud may have on the ready (i.e., in available cache) tagged 2-D images (i.e., tagged with a 3-D pose), along with 3-D points. If a user is observing something dynamic, he may also send additional information up to the cloud pertinent to the motion (for example, if looking at another person's face, the user can take a texture map of the face and push that up at an optimized frequency, even though the surrounding world is otherwise basically static). More information on object recognizers and the passable world model may be found in U.S. Patent Application Serial No. 14/205,126, entitled "System and method for augmented and virtual reality", which is incorporated by reference in its entirety herein, along with the following additional disclosures, which relate to augmented and virtual reality systems such as those developed by Magic Leap, Inc. of Fort Lauderdale, Florida: U.S. Patent Application Serial No. 14/641,376; U.S. Patent Application Serial No. 14/555,585; U.S. Patent Application Serial No. 14/212,961; U.S. Patent Application Serial No. 14/690,401; U.S. Patent Application Serial No. 13/663,466; and U.S. Patent Application Serial No. 13/684,489.
To capture points that can be used to create the "passable world model", it is helpful to accurately know the user's location, pose, and orientation with respect to the world. More particularly, the user's position must be localized to a granular degree, because it may be important to know the user's head pose, as well as hand pose (if a handheld component is gripped by the user, if the user is gesturing, etc.). In one or more embodiments, GPS and other localization information may be utilized as inputs to such processing. Highly accurate localization of the user's head, totems, hand gestures, haptic devices, etc. is crucial in displaying appropriate virtual content to the user.
One approach to achieving high-precision localization may involve the use of an electromagnetic field coupled with electromagnetic sensors that are strategically placed on the user's AR headset, belt pack, and/or other ancillary devices (e.g., totems, haptic devices, gaming instruments, etc.). Electromagnetic tracking systems typically comprise at least one electromagnetic field emitter and at least one electromagnetic field sensor. The sensors may measure electromagnetic fields with a known distribution. Based on these measurements, a position and orientation of a field sensor relative to the emitter is determined.
Referring now to Fig. 4, an example system diagram of an electromagnetic tracking system is illustrated (e.g., such as those developed by organizations including the Biosense (RTM) division of Johnson & Johnson Corporation and Polhemus (RTM), Inc. of Colchester, Vermont, and manufactured by Sixense (RTM) Entertainment, Inc. of Los Gatos, California, and other tracking companies). In one or more embodiments, the electromagnetic tracking system comprises an electromagnetic field emitter 402 configured to emit a known magnetic field. As shown in Fig. 4, the electromagnetic field emitter may be coupled to a power supply (e.g., electric current, batteries, etc.) to provide power to the emitter 402.
In one or more embodiments, the electromagnetic field emitter 402 comprises several coils (e.g., at least three coils positioned perpendicular to one another to produce fields in the X, Y, and Z directions) that generate magnetic fields. This magnetic field is used to establish a coordinate space. This allows the system to map the position of the sensors in relation to the known magnetic field, and helps determine the position and/or orientation of the sensors. In one or more embodiments, the electromagnetic sensors 404a, 404b, etc. may be attached to one or more real objects. The electromagnetic sensors 404 may comprise smaller coils in which current may be induced through the emitted electromagnetic field. Generally, the "sensor" components (404) may comprise small coils or loops, such as a set of three differently-oriented (i.e., such as orthogonally oriented relative to each other) coils coupled together within a small structure such as a cube or other container, that are positioned/oriented to capture incoming magnetic flux from the magnetic field emitted by the emitter (402); by comparing the currents induced through these coils, and knowing the relative positioning and orientation of the coils relative to each other, the relative position and orientation of a sensor relative to the emitter may be calculated.
One or more parameters pertaining to the behavior of the coils and of inertial measurement unit ("IMU") components operatively coupled to the electromagnetic tracking sensors may be measured to detect the position and/or orientation of the sensor (and the object to which it is attached) relative to a coordinate system to which the electromagnetic field emitter is coupled. This coordinate system may, of course, be translated into a world coordinate system, in order to determine the location or pose of the electromagnetic field emitter in the real world. In one or more embodiments, multiple sensors may be used in relation to the electromagnetic emitter to detect the position and orientation of each of the sensors within the coordinate space.
It should be appreciated that, in some embodiments, head pose may already be known based on sensors on the head-mounted component of the AR system and on SLAM analysis performed using the sensor data and image data captured through the head-mounted AR system. However, it may be important to know the position of the user's hand (e.g., a handheld component such as a totem) relative to the known head pose. In other words, it may be important to know the hand pose relative to the head pose. Once the relationship between the head (assuming the sensor is placed on the head-mounted component) and the hand is known, the location of the hand relative to the world (e.g., world coordinates) can be easily calculated.
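That final step is a plain composition of rigid transforms. A short sketch with invented names: T_world_head comes from headset SLAM, and T_head_hand comes from the electromagnetic tracker, both as 4x4 homogeneous matrices.

```python
import numpy as np

def hand_pose_in_world(T_world_head, T_head_hand):
    """Compose the SLAM-derived head pose with the EM-tracked
    head-to-hand transform to express the hand (totem) pose in world
    coordinates."""
    return T_world_head @ T_head_hand
```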
The electromagnetic tracking system may provide positions in three directions (i.e., the X, Y and Z directions), and further provide positions in two or three orientation angles. In one or more embodiments, measurements from an IMU may be compared against measurements from the coils to determine the position and orientation of a sensor. In one or more embodiments, both electromagnetic (EM) data and IMU data, along with various other data sources such as cameras, depth sensors and other sensors, may be combined to determine position and orientation. This information may be transmitted (e.g., via wireless communication, Bluetooth, etc.) to a controller 406. In one or more embodiments, pose (or position and orientation) may be reported at a relatively high refresh rate in conventional systems. Conventionally, an electromagnetic emitter is coupled to a relatively stable and large object, such as a table, operating table, wall or ceiling, and one or more sensors are coupled to smaller objects, such as medical devices, handheld gaming components and the like. Alternatively, as described below in reference to Figure 6, various features of the electromagnetic tracking system may be employed to produce a configuration in which changes or deltas in position and/or orientation between two objects that move in space relative to a more stable world coordinate system may be tracked; in other words, Figure 6 shows a configuration in which a variation of the electromagnetic tracking system may be utilized to track position and orientation deltas between a head-mounted component and a handheld component, while head pose relative to a global coordinate system (such as the room environment local to the user) is otherwise determined, such as by simultaneous localization and mapping ("SLAM") techniques using outward-facing capturing cameras that may be coupled to the head-mounted component of the system.
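The EM/IMU combination noted above can be viewed, in its simplest form, as a complementary blend: the high-rate IMU estimate supplies smoothness while the absolute (but lower-rate, noisier) electromagnetic fix corrects drift. The one-step sketch below assumes a tunable blend factor; the value shown is an illustrative assumption, not a figure from this disclosure.

```python
def fuse(em_pos, imu_pos, alpha=0.98):
    """One step of a complementary blend: follow the high-rate IMU estimate
    for smoothness while pulling it toward the absolute electromagnetic fix
    to cancel accumulated drift. alpha is an assumed tuning value."""
    return [alpha * i + (1.0 - alpha) * e for i, e in zip(imu_pos, em_pos)]
```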
The controller 406 may control the electromagnetic field generator 402, and may also capture data from the various electromagnetic sensors 404. It should be appreciated that the various components of the system may be coupled to one another through any electro-mechanical or wireless/Bluetooth means. The controller 406 may also hold data regarding the known magnetic field and the coordinate space associated with that field. This information is then used to detect the position and orientation of the sensors relative to the coordinate space corresponding to the known electromagnetic field.
One advantage of electromagnetic tracking systems is that they produce highly accurate tracking results with minimal latency and high resolution. Additionally, an electromagnetic tracking system does not necessarily rely on optical trackers, and sensors/objects outside the user's line of sight may easily be tracked.
It should be appreciated that the strength of the electromagnetic field falls off as a cubic function of the distance r from the coil transmitter (e.g., the electromagnetic field emitter 402). Thus, an algorithm based on distance away from the electromagnetic field emitter may be needed. The controller 406 may be configured with such algorithms to determine the position and orientation of a sensor/object at varying distances away from the electromagnetic field emitter. Given the rapid decline in the strength of the electromagnetic field as one moves farther from the electromagnetic emitter, the best results, in terms of accuracy, efficiency and low latency, may be achieved at closer distances. In typical electromagnetic tracking systems, the electromagnetic field emitter is powered by electric current (e.g., a plug-in power supply) and the sensors are located within a 20-foot radius of the electromagnetic field emitter. A shorter radius between the sensors and the field emitter is more desirable in many applications, including AR applications.
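The cubic falloff is the standard dipole law; a small sketch of the on-axis magnitude (SI units, standard physics rather than anything specific to this disclosure) makes the power implication concrete:

```python
import math

def on_axis_dipole_field(m, r):
    """On-axis magnitude of a magnetic dipole of moment m (A*m^2) at range
    r (meters): B = mu0 * m / (2 * pi * r**3). Doubling the range cuts the
    received field by a factor of eight, which is why operating at a ~3 foot
    radius is so much cheaper in transmit power than at ~20 feet."""
    mu0 = 4e-7 * math.pi
    return mu0 * m / (2.0 * math.pi * r ** 3)
```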
Referring now to Figure 5, an exemplary flowchart describing the functioning of a typical electromagnetic tracking system is illustrated. At 502, a known electromagnetic field is emitted. In one or more embodiments, the magnetic field emitter may generate magnetic fields such that each coil generates a field in one direction (e.g., X, Y or Z). The magnetic fields may be generated with an arbitrary waveform. In one or more embodiments, each axis may oscillate at a slightly different frequency. At 504, a coordinate space corresponding to the electromagnetic field may be determined. For example, the controller 406 of Figure 4 may automatically determine a coordinate space around the emitter based on the electromagnetic field. At 506, a behavior of the coils at the sensors (which may be attached to a known object) may be detected. For example, a current induced at the coils may be calculated. In other embodiments, a rotation of the coils, or any other quantifiable behavior, may be tracked and measured. At 508, the detected behavior may be used to determine the position and orientation of the sensor and/or the known object. For example, the controller 406 may consult a mapping table that correlates behaviors of the coils at the sensors with various positions or orientations. Based on these calculations, the position and orientation of the sensor within the coordinate space may be determined. In some embodiments, the pose/location information may be determined at the sensor. In other embodiments, the sensor communicates the data detected at the sensor to the controller, and the controller may consult the mapping table to determine pose information relative to the known magnetic field (e.g., coordinates relative to the handheld component).
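A toy version of that mapping-table consultation, assuming a pre-recorded calibration table of coil responses at known poses; the data layout and the nearest-neighbor matching are illustrative simplifications (a real system would interpolate between calibration entries rather than snap to the nearest one):

```python
import numpy as np

def lookup_pose(measured, table):
    """Consult a calibration ('mapping') table: `table` is an assumed list
    of (coil_response_vector, pose) pairs recorded at known positions and
    orientations. The pose whose recorded response best matches the
    measured response vector is returned."""
    responses = np.array([response for response, _ in table])
    idx = int(np.argmin(np.linalg.norm(responses - measured, axis=1)))
    return table[idx][1]
```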
In the context of AR systems, it may be necessary to modify one or more components of the electromagnetic tracking system in order to accurately track moving components. As described above, tracking the user's head pose and orientation is crucial in many AR applications. Accurate determination of the user's head pose and orientation allows the AR system to display the correct virtual content to the user. For example, a virtual scene may include a monster hiding behind a real building. Depending on the pose and orientation of the user's head relative to the building, the view of the virtual monster may need to be modified in order to provide a realistic AR experience. Likewise, the position and/or orientation of a totem, haptic device or some other means of interacting with virtual content may be important in enabling the AR user to interact with the AR system. For example, in many gaming applications, the AR system must detect the position and orientation of a real object relative to virtual content. Or, when a virtual interface is displayed, the position of a totem, the user's hand, a haptic device or any other real object configured for interaction with the AR system must be known relative to the displayed virtual interface in order for the system to understand commands, etc. Conventional localization methods, including optical tracking and other methods, are typically plagued by high latency and low resolution, which makes rendering virtual content challenging in many augmented reality applications.
In one or more embodiments, the electromagnetic tracking system discussed in relation to Figures 4 and 5 may be adapted to an AR system to detect the position and orientation of one or more objects relative to an emitted electromagnetic field. Typical electromagnetic systems tend to have large and bulky electromagnetic emitters (e.g., 402 in Figure 4), which is problematic for AR devices. However, smaller electromagnetic emitters (e.g., in the millimeter range) may be used to emit a known electromagnetic field in the context of an AR system.
Referring now to Figure 6, an electromagnetic tracking system may be incorporated with an AR system as shown, with an electromagnetic field emitter 602 incorporated as part of a hand-held controller 606. In one or more embodiments, the hand-held controller may be a totem to be used in a gaming scenario. In other embodiments, the hand-held controller may be a haptic device. In still other embodiments, the electromagnetic field emitter may simply be incorporated as part of the belt pack 70. The hand-held controller 606 may include a battery 610 or other power supply that powers the electromagnetic field emitter 602. It should be appreciated that the electromagnetic field emitter 602 may also include or be coupled to an IMU component 650 configured to assist in determining the positioning and/or orientation of the electromagnetic field emitter 602 relative to other components. This may be especially important where both the field emitter 602 and the sensors (604) are mobile. Placing the electromagnetic field emitter 602 in the hand-held controller rather than the belt pack, as shown in the embodiment of Figure 6, ensures that the electromagnetic field emitter does not compete for resources at the belt pack, but rather uses its own battery source at the hand-held controller 606.
In one or more embodiments, the electromagnetic sensors (604) may be placed at one or more locations on the user's headset (58), along with other sensing devices such as one or more IMUs or additional magnetic flux capturing coils (608). For example, as shown in Figure 6, sensors (604, 608) may be placed on either side of the headset (58). Since these sensors (604, 608) are engineered to be rather small (and hence may, in some cases, be less sensitive), having multiple sensors may improve efficiency and precision.
In one or more embodiments, one or more sensors may also be placed on the belt pack (620) or on any other part of the user's body. The sensors (604, 608) may communicate wirelessly or through Bluetooth with a computing apparatus (607, e.g., a controller) that determines the pose and orientation of the sensors (604, 608) (and of the AR headset (58) to which they are attached) relative to the known magnetic field emitted by the electromagnetic field emitter (602). In one or more embodiments, the computing apparatus (607) may reside at the belt pack (620). In other embodiments, the computing apparatus (607) may reside at the headset (58) itself, or even at the hand-held controller (606). The computing apparatus (607) may receive the measurements of the sensors (604, 608), and determine the position and orientation of the sensors (604, 608) relative to the known electromagnetic field emitted by the electromagnetic field emitter (602).
In one or more embodiments, the computing apparatus (607) may in turn include a mapping database (632; e.g., a passable world model, coordinate space, etc.) to detect pose, to determine the coordinates of real objects and virtual objects, and may even connect to cloud resources (630) and the passable world model. The mapping database (632) may be consulted to determine the position coordinates of the sensors (604, 608). In some embodiments, the mapping database (632) may reside in the belt pack (620). In the embodiment shown in Figure 6, the mapping database (632) resides on a cloud resource (630), and the computing apparatus (607) communicates wirelessly with the cloud resource (630). The determined pose information, along with points and images collected by the AR system, may then be communicated to the cloud resource (630) and added to the passable world model (634).
As described above, conventional electromagnetic emitters may be too bulky for AR devices. The electromagnetic field emitter may therefore be engineered to be compact, using smaller coils than traditional systems. Given that the strength of the electromagnetic field decreases as a cubic function of the distance from the field emitter, a shorter radius between the electromagnetic sensors 604 and the electromagnetic field emitter 602 (e.g., about 3 to 3.5 feet) may reduce power consumption compared with conventional systems, such as the one detailed in Figure 4. In one or more embodiments, this aspect may be utilized to prolong the life of the battery 610 that powers the controller 606 and the electromagnetic field emitter 602. In other embodiments, this aspect may be utilized to reduce the size of the coils generating the magnetic field at the electromagnetic field emitter 602; however, in order to obtain the same magnetic field strength, the power may need to be increased. This allows for a compact electromagnetic field emitter unit 602 that may fit compactly at the hand-held controller 606.
Several other changes may be made when using the electromagnetic tracking system for AR devices. Although this pose reporting rate is rather good, AR systems may require an even more efficient pose reporting rate. To this end, IMU-based pose tracking may be used in the sensors. Crucially, the IMUs must remain as stable as possible in order to increase the efficiency of the pose detection process. The IMUs may be engineered such that they remain stable for up to 50 to 100 milliseconds. It should be appreciated that some embodiments may utilize an outside pose estimator module (since IMUs may drift over time) that enables pose updates to be reported at a rate of 10 to 20 Hz. By keeping the IMUs stable at a reasonable rate, the rate of pose updates may be dramatically decreased to 10 to 20 Hz (as compared with the higher frequencies of conventional systems).
If the electromagnetic tracking system can be run at, for example, a 10% duty cycle (e.g., only pinging for ground truth every 100 milliseconds), this would be an additional way to save power at the AR system. This would mean that the electromagnetic tracking system wakes up for 10 milliseconds out of every 100 milliseconds to generate a pose estimate. This translates directly to power consumption savings, which may, in turn, affect the size, battery life and cost of the AR device.
In one or more embodiments, this reduction in duty cycle may be strategically utilized by providing two hand-held controllers (not shown) rather than just one. For example, the user may be playing a game that requires two totems, etc. Or, in a multi-user game, two users may have their own totems/hand-held controllers to play the game. When two controllers (e.g., symmetrical controllers for each hand) are used rather than one, the controllers may operate at offset duty cycles. The same concept may also be applied to controllers utilized by two different users playing a multi-player game, as the scheduling sketch below illustrates.
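A sketch of such an offset schedule, using the 10% duty cycle discussed above; the staggering arithmetic is an assumption for illustration, not a scheme specified in this disclosure:

```python
def should_ping(t_ms, controller_index, period_ms=100, duty_ms=10):
    """Offset duty-cycle schedule for two handheld controllers: each wakes
    for `duty_ms` out of every `period_ms`, staggered by half a period so
    the two emitters are never active at the same time."""
    phase = (t_ms + controller_index * period_ms // 2) % period_ms
    return phase < duty_ms

# controller 0 pings during [0, 10) ms of each period; controller 1 during [50, 60).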
Referring now to Figure 7, an exemplary flow chart describing the electromagnetic tracking system in the context of AR devices is described. At 702, the hand-held controller emits a magnetic field. At 704, the electromagnetic sensors (placed on the headset, belt pack, etc.) detect the magnetic field. At 706, a position and orientation of the headset/belt pack is determined based on a behavior of the coils/IMUs at the sensors. At 708, the pose information is conveyed to the computing apparatus (e.g., at the belt pack or headset). At 710, optionally, a mapping database (e.g., the passable world model) may be consulted to correlate real world coordinates with virtual world coordinates. At 712, virtual content may be delivered to the user at the AR headset. It should be appreciated that the flowchart described above is for illustrative purposes only and should not be read as limiting.
Advantageously, using an electromagnetic tracking system similar to the one outlined in Figure 6 enables pose tracking (e.g., head position and orientation, and the position and orientation of totems and other controllers). This allows the AR system to project virtual content with higher accuracy and very low latency when compared with optical tracking techniques.
Referring now to Figure 8, a system configuration is illustrated featuring many sensing components. A head-mounted wearable component (58) is shown operatively coupled (68) to a local processing and data module (70), such as a belt pack, here using a physical multicore lead which also features a control and quick-release module (86) as described below in reference to Figures 9A to 9F. The local processing and data module (70) is operatively coupled (100) to a hand-held component (606), here by a wireless connection such as low-power Bluetooth; the hand-held component (606) may also be operatively coupled (94) directly to the head-mounted wearable component (58), such as by a wireless connection such as low-power Bluetooth. Generally, where IMU data is passed to coordinate pose detection of the various components, a high-frequency connection is desirable, such as in the range of hundreds or thousands of cycles per second or higher; tens of cycles per second may be adequate for electromagnetic localization sensing, such as by the pairing of the sensor (604) with the transmitter (602). Also shown is a global coordinate system (10), representative of fixed objects in the real world around the user, such as a wall (8). Cloud resources (46) may also be operatively coupled (42, 40, 88, 90), respectively, to the local processing and data module (70), to the head-mounted wearable component (58), and to resources which may be coupled to the wall (8) or otherwise fixed relative to the global coordinate system (10). The resources coupled to the wall (8), or having known positions and/or orientations relative to the global coordinate system (10), may include a WiFi transceiver (114), an electromagnetic emitter (602) and/or receiver (604), a beacon or reflector (112) configured to emit or reflect a given type of radiation (such as an infrared LED beacon), a cellular network transceiver (110), a RADAR emitter or detector (108), a LIDAR emitter or detector (106), a GPS transceiver (118),
a poster or marker having a known detectable pattern (122), and a camera (124). The head-mounted wearable component (58) features similar components, as illustrated, in addition to lighting emitters (130) configured to assist the camera (124) detectors, such as infrared emitters (130) for an infrared camera (124); also featured on the head-mounted wearable component (58) are one or more strain gauges (116), which may be fixedly coupled to the frame or mechanical platform of the head-mounted wearable component (58) and configured to determine deflection of such platform between components such as electromagnetic receiver sensors (604) or display elements (62), wherein it may be valuable to understand whether bending of the platform has occurred, such as at a thinned portion of the platform (for example, the portion above the nose on the eyeglasses-like platform depicted in Figure 8). The head-mounted wearable component (58) also features a processor (128) and one or more IMUs (102). Each of the components preferably is operatively coupled to the processor (128). The hand-held component (606) and the local processing and data module (70) are illustrated featuring similar components. As shown in Figure 8, with so many sensing and connectivity means, such a system is likely to be heavy, power-hungry, large and relatively expensive. However, for illustrative purposes, such a system may be utilized to provide a very high level of connectivity, system component integration and position/orientation tracking. For example, with such a configuration, the various main mobile components (58, 70, 606) may be localized, in terms of position relative to the global coordinate system, using WiFi, GPS or cellular signal triangulation; beacons, electromagnetic tracking (as described above), RADAR and LIDAR systems may provide yet further location and/or orientation information and feedback. Markers and cameras also may be utilized to provide further information regarding relative and absolute position and orientation. For example, the various camera components (124), such as those shown coupled to the head-mounted wearable component (58), may be utilized to capture data for simultaneous localization and mapping protocols, or "SLAM", to determine where the component (58) is and how it is oriented relative to other components.
Referring now to Figures 9A to 9F, various aspects of the control and quick-release module (86) are depicted. Referring to Figure 9A, two outer housing components are coupled together using a magnetic coupling configuration which may be enhanced with mechanical latching. Buttons (136) for operating the associated system may be included. Figure 9B illustrates a partial cutaway view in which the buttons (136) and the underlying top printed circuit board (138) are shown. Referring to Figure 9C, with the buttons (136) and underlying top printed circuit board (138) removed, a female contact pin array (140) is visible. Referring to Figure 9D, with an opposite portion of the housing (134) removed, the lower printed circuit board (142) is visible. With the lower printed circuit board (142) removed, as shown in Figure 9E, a male contact pin array (144) is visible. Referring to the cross-sectional view of Figure 9F, at least one of the male pins or female pins is configured to be spring-loaded such that it may be depressed along its longitudinal axis; these pins may be termed "pogo pins" and generally comprise a highly conductive material, such as copper or gold. When assembled, the illustrated configuration mates the male pins with the female pins, and the entire assembly may be quick-release decoupled in half by manually pulling it apart and overcoming the load of a magnetic interface (146), which may be developed using north and south magnets oriented around the perimeters of the pin arrays (140, 144). In one embodiment, an approximately 2 kg load from compressing the 46 pogo pins is countered with a closure maintenance force of about 4 kg. The pins in the arrays may be separated by about 1.3 mm, and the pins may be operatively coupled to conductive leads of various types, such as twisted pairs or other combinations to support USB 3.0, HDMI 2.0, I2S signals, GPIO and MIPI configurations, and, in one embodiment, high-current analog lines and grounds configured for up to about 4 amps / 5 volts.
Referring to Figure 10, it is helpful to have a minimized component/feature set to minimize the weight and bulk of the various components, and to arrive at a relatively slim head-mounted component (e.g., such as the component (58) of Figure 10). Thus, various permutations and combinations of the components shown in Figure 8 may be utilized.
Referring to Figure 11A, an electromagnetic sensing coil assembly (604, i.e., three individual coils coupled to a housing) is shown coupled to a head-mounted component (58); such a configuration adds additional geometry to the overall assembly, which may be undesirable. Referring to Figure 11B, rather than housing the coils in a box or single housing as in the configuration of Figure 11A, the individual coils may be integrated into the various structures of the head-mounted component (58), as shown in Figure 11B. For example, an X-axis coil (148) may be placed in one portion of the head-mounted component (58) (e.g., the center of the frame). Similarly, a Y-axis coil (150) may be placed in another portion of the head-mounted component (58) (e.g., either bottom side of the frame). Similarly, a Z-axis coil (152) may be placed in yet another portion of the head-mounted component (58) (e.g., either top side of the frame).
Figures 12A to 12E illustrate various configurations featuring a ferrite core coupled to an electromagnetic sensor to increase field sensitivity. Referring to Figure 12A, the ferrite core may be a solid cube (1202). Although a solid cube (1202) may be most effective at increasing field sensitivity, it may also be the heaviest of the configurations depicted in Figures 12A to 12E. Referring to Figure 12B, a plurality of ferrite disks (1204) may be coupled to the electromagnetic sensor. Similarly, referring to Figure 12C, a solid cube with a one-axis air core (1206) may be coupled to the electromagnetic sensor. As shown in Figure 12C, an open space (i.e., an air core) may be formed within the solid cube along one axis. This may decrease the weight of the cube while still providing the necessary field sensitivity. In yet another embodiment, referring to Figure 12D, a solid cube with a three-axis air core (1208) may be coupled to the electromagnetic sensor. In this configuration, the solid cube is hollowed out along all three axes, thereby decreasing the weight of the cube considerably. Referring to Figure 12E, ferrite rods with plastic housings (1210) may also be coupled to the electromagnetic sensor. It should be appreciated that the embodiments of Figures 12B to 12E are lighter in weight than the solid core configuration of Figure 12A and, as described above, may be utilized to save mass.
Referring to Figures 13A to 13C, time division multiplexing ("TDM") may also be utilized to save mass. For example, referring to Figure 13A, a conventional local data processing configuration for a 3-coil electromagnetic receiver sensor is shown, wherein analog currents come in from each of the X, Y and Z coils (1302, 1304, 1306), each passing into a pre-amplifier (1308), into a band-pass filter (1310), an amplifier (1312), through an analog-to-digital converter (1314), and ultimately to a digital signal processor (1316). Referring to the transmitter configuration of Figure 13B and the receiver configuration of Figure 13C, time division multiplexing may be utilized to share hardware, such that each coil sensor chain does not require its own amplifiers, etc. This may be achieved through a TDM switch 1320, as shown in Figure 13B, which facilitates processing of signals to and from multiple transmitters and receivers using the same set of hardware components (amplifiers, etc.). In addition to removing the sensor housings and multiplexing to save on hardware overhead, signal-to-noise ratios may be increased by having more than one set of electromagnetic sensors, each set being relatively small relative to a single larger coil set; also, the low-side frequency limits, which generally are needed to have multiple sensing coils in close proximity, may be improved to facilitate bandwidth requirement improvements. There is a tradeoff with multiplexing, in that multiplexing generally spreads out the reception of radio-frequency signals in time, which results in generally dirtier signals; thus, multiplexed systems may require larger coil diameters. For example, where a multiplexed system may require a 9 mm-side cubic coil sensor box, a non-multiplexed system may require only a 7 mm-side cubic coil box for similar performance; thus, there are tradeoffs in minimizing geometry and mass.
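In schematic form, the TDM sharing amounts to routing each coil through one shared analog chain in turn; `select` and `sample` below stand in for the TDM switch and the shared pre-amp/filter/ADC chain, and are assumed interfaces for illustration, not APIs from this disclosure:

```python
def tdm_sample(coils, select, sample):
    """Time-division multiplexing sketch: a single shared analog chain
    (pre-amp -> band-pass filter -> ADC) is switched across the coils one
    time slot at a time, instead of duplicating that chain per coil."""
    readings = []
    for coil in coils:
        select(coil)               # route this coil into the shared chain
        readings.append(sample())  # digitize during this coil's time slot
    return readings
```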
In another embodiment, wherein a particular system component, such as a head-mounted component (58), features two or more electromagnetic coil sensor sets, the system may be configured to selectively utilize the sensor and emitter pairing that are closest to each other to optimize the performance of the system.
Referring to Figure 14, in one embodiment, after a user powers up his or her wearable computing system (160), the head-mounted component assembly may capture a combination of IMU and camera data (the camera data being used, for example, for SLAM analysis, such as at the belt pack processor where there may be more raw processing horsepower) to determine and update head pose (i.e., position and orientation) relative to a real-world global coordinate system (162). The user may also activate a handheld component, for example, to play an augmented reality game (164), and the handheld component may comprise an electromagnetic transmitter operatively coupled to one or both of the belt pack and the head-mounted component (166). One or more electromagnetic field coil receiver sets (i.e., sets of three differently-oriented individual coils) coupled to the head-mounted component capture magnetic flux from the transmitter, which may be utilized to determine a positional or orientational difference (or "delta") between the head-mounted component and the handheld component (168). The combination of the head-mounted component assisting in determining pose relative to the global coordinate system, and the handheld component assisting in determining the relative position and orientation of the handheld component relative to the head-mounted component, allows the system to generally determine where each component is relative to the global coordinate system. Thus the user's head pose and handheld pose may be tracked, preferably at relatively low latency, for presentation of augmented reality image features and interaction using movements and rotations of the handheld component (170).
Referring to Figure 15, an embodiment is illustrated that is somewhat similar to that of Figure 14, except that the system has many more sensing devices and configurations available to assist in determining the poses of the head-mounted component (172) and the hand-held component (176, 178), such that the user's head pose and handheld pose may be tracked, preferably at relatively low latency, for presentation of augmented reality image features and interaction using movements and rotations of the handheld component (180).
Specifically, after the user powers up his or her wearable computing system (160), the head-mounted component captures a combination of IMU and camera data for SLAM analysis to determine and update head pose relative to a real-world global coordinate system. The system may be further configured to detect the presence of other localization resources in the environment, such as Wi-Fi, cellular, beacons, RADAR, LIDAR, GPS, markers and/or other cameras which may be tied to various aspects of the global coordinate system, or to one or more movable components (172).
The user may also activate a handheld component, for example, to play an augmented reality game (174), and the handheld component may comprise an electromagnetic transmitter operatively coupled to one or both of the belt pack and/or the head-mounted component (176). Other localization resources may also be utilized similarly. One or more electromagnetic field coil receiver sets (e.g., sets of three differently-oriented individual coils) coupled to the head-mounted component may be utilized to capture magnetic flux from the electromagnetic transmitter. This captured magnetic flux may be utilized to determine a positional or orientational difference (or "delta") between the head-mounted component and the handheld component (178).
Thus, the user's head pose and handheld pose may be tracked at relatively low latency for presentation of AR content and/or interaction with the AR system using movements or rotations of the handheld component (180).
Referring to Figures 16A and 16B, aspects of a configuration similar to that of Figure 8 are shown. The configuration of Figure 16A differs from that of Figure 8 in that, in addition to a LIDAR (106) type of depth sensor, the configuration of Figure 16A features, for illustrative purposes, a generic depth camera or depth sensor (154), which may, for example, be either a stereo triangulation style depth sensor (such as a passive stereo depth sensor, a texture projection stereo depth sensor, or a structured light stereo depth sensor) or a time-of-flight style depth sensor (such as a LIDAR depth sensor or a modulated emission depth sensor). Further, the configuration of Figure 16A has an additional forward-facing "world" camera (124, which may be a grayscale camera, featuring a sensor capable of 720p range resolution) as well as a relatively high-resolution "picture camera" (156, which may be a full-color camera, for example, featuring a sensor capable of 2 megapixel or higher resolution). Figure 16B shows a partial orthogonal view of the configuration of Figure 16A for illustrative purposes, as described further below in reference to Figure 16B.
Referring back to Figure 16A and the aforementioned stereo versus time-of-flight style depth sensors, each of these depth sensor types may be employed with a wearable computing solution as disclosed herein, although each has various advantages and disadvantages. For example, many depth sensors have challenges with black surfaces and shiny or reflective surfaces. Passive stereo depth sensing is a relatively simple way of getting triangulation for calculating depth with a depth camera or sensor, but it may be challenged if a wide field of view ("FOV") is required, and it may require relatively significant computing resources; further, such a sensor type may have challenges with edge detection, which may be important for the particular use case at hand. Passive stereo may have challenges with textureless walls, low-light situations, and repeated patterns. Passive stereo depth sensors are available from manufacturers such as Intel (RTM) and Aquifi (RTM).

Stereo with texture projection (also known as "active stereo") is similar to passive stereo, but a texture projector broadcasts a projection pattern onto the environment, and the more texture that is broadcast, the more accuracy is available in the triangulation for depth calculation. Active stereo may also require relatively high compute resources, present challenges when a wide FOV is required, and be somewhat suboptimal at detecting edges, but it does address some of the challenges of passive stereo in that it is effective with textureless walls, is good in low light, and generally does not have problems with repeating patterns. Active stereo depth sensors are available from manufacturers such as Intel (RTM) and Aquifi (RTM).

Stereo with structured light, such as the systems developed by Primesense, Inc. (RTM) and available under the tradename Kinect (RTM), as well as the systems available from Mantis Vision, Inc. (RTM), generally utilizes a single camera/projector pairing, where the projector is specialized in that it is configured to broadcast a pattern of dots known a priori. In essence, the system knows the pattern being broadcast, and it knows that the variable to be determined is depth. Such configurations may be relatively efficient on compute load, and may be challenged in wide-FOV scenarios as well as in scenarios with ambient light and patterns broadcast from other nearby devices, but can be quite effective and efficient in many scenarios.

With modulated time-of-flight type depth sensors, such as those available from PMD Technologies A.G. (RTM) and SoftKinetic Inc. (RTM), an emitter may be configured to send out a wave of amplitude-modulated light, such as a sine wave; a camera component, which may be positioned nearby or even overlapping in some configurations, receives a returning signal on each of its pixels, and depth maps may be determined/calculated. Such configurations may be relatively compact in geometry, high in accuracy and low in compute load, but may be challenged in terms of image resolution (such as at the edges of objects) and multi-path errors (such as where the sensor is aimed at a reflective or shiny corner and the detector ends up receiving more than one return path, such that there is some depth detection aliasing).

Direct time-of-flight sensors, also referred to as the aforementioned LIDAR, are available from suppliers such as LuminAR (RTM) and Advanced Scientific Concepts, Inc. (RTM). With these time-of-flight configurations, a pulse of light (such as a picosecond, nanosecond or femtosecond long pulse of light) generally is sent out to bathe the world oriented around it with this light ping; then each pixel on a camera sensor waits for the pulse to return, and, knowing the speed of light, the distance at each pixel may be calculated. Such configurations may have many of the advantages of the modulated time-of-flight sensor configurations (no baseline, relatively wide FOV, high accuracy, relatively low compute load) and also relatively high frame rates, such as into the tens of thousands of Hertz. They may also be relatively expensive, have relatively low resolution, be sensitive to bright light, and be susceptible to multi-path errors; they may also be relatively large and heavy.
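Common to all of the stereo-style variants above is the same triangulation geometry, which a one-line conversion captures (this is the standard stereo relation, not anything specific to this disclosure):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Classic stereo triangulation: depth = f * B / d, with focal length f
    in pixels, baseline B in meters, and disparity d in pixels. Wide fields
    of view and fine depth resolution both push toward more pixels and more
    matching compute, which is the cost noted above for passive and active
    stereo."""
    return focal_px * baseline_m / disparity_px
```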
Referring to Figure 16B, a partial top view is shown for illustrative purposes, featuring a user's eyes (12) as well as cameras (14, such as infrared cameras) with fields of view (28, 30) and light or radiation sources (16, such as infrared) directed toward the eyes (12) to facilitate eye tracking, observation and/or image capture. The three outward-facing, world-capturing cameras (124) are shown with their FOVs (18, 20, 22), as is the depth camera (154) with its FOV (24), and the picture camera (156) with its FOV (26). The depth information garnered from the depth camera (154) may be bolstered by using the overlapping FOVs and data from the other forward-facing cameras. For example, the system may end up with something like a sub-VGA image from the depth sensor (154), a 720p image from the world cameras (124), and, occasionally, a 2 megapixel color image from the picture camera (156). Such a configuration has five cameras sharing a common FOV, three of them with heterogeneous visible spectrum images, one with color, and another with relatively low-resolution depth. The system may be configured to perform a segmentation in the grayscale and color images, fuse those images to make a relatively high-resolution image from them, obtain some stereo correspondences, use the depth sensor to provide hypotheses about stereo depth, and use the stereo correspondences to obtain a more refined depth map, which may be significantly better than the depth map available from the depth sensor alone. Such processes may be run on local mobile processing hardware, or may be run using cloud computing resources, perhaps along with data from others in the area (such as two people sitting across a table from each other nearby), ultimately arriving at quite a refined mapping. In another embodiment, all of the sensors may be combined into one integrated sensor to accomplish such functionality.
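A minimal sketch of such depth fusion, assuming a coarse map from the depth sensor and a sparse-but-confident high-resolution estimate from stereo correspondence; the hard switching between sources is an illustrative simplification (real pipelines weight sources by per-pixel confidence):

```python
import numpy as np

def fuse_depth(depth_lowres, stereo_depth_hires, valid_mask):
    """Upsample the coarse depth-sensor map to the high-resolution grid
    (nearest-neighbor here, for brevity) and, wherever the stereo
    correspondence produced a confident estimate (valid_mask True), prefer
    it; elsewhere keep the upsampled prior."""
    h, w = stereo_depth_hires.shape
    ys = np.arange(h) * depth_lowres.shape[0] // h
    xs = np.arange(w) * depth_lowres.shape[1] // w
    prior = depth_lowres[np.ix_(ys, xs)]  # naive upsample to hi-res grid
    return np.where(valid_mask, stereo_depth_hires, prior)
```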
Various exemplary embodiments of the invention are described herein. Reference is made to these examples in a non-limiting sense; they are provided to illustrate more broadly applicable aspects of the invention. Various changes may be made to the invention described, and equivalents may be substituted, without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit or scope of the present invention. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from, or combined with, the features of any of the other several embodiments without departing from the scope or spirit of the present invention. All such modifications are intended to be within the scope of the claims associated with this disclosure.
The invention includes methods that may be performed using the subject devices. The methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user. In other words, the "providing" act merely requires the end user to obtain, access, approach, position, set up, activate, power up or otherwise act to provide the requisite device in the subject method. Methods recited herein may be carried out in any order of the recited events that is logically possible, as well as in the recited order of events.
Exemplary aspects of the invention, together with details regarding material selection and manufacture, have been set forth above. As for other details of the present invention, these may be appreciated in connection with the above-referenced patents and publications, as well as generally known or appreciated by those with skill in the art. The same may hold true with respect to method-based aspects of the invention in terms of additional acts as commonly or logically employed.
In addition, though the invention has been described in reference to several examples optionally incorporating various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described, and equivalents (whether recited herein or not included for the sake of some brevity) may be substituted, without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value between the upper and lower limit of that range, and any other stated or intervening value in that stated range, is encompassed within the invention.
Also, it is contemplated that any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in the claims associated hereto, the singular forms "a," "an," "said," and "the" include plural referents unless specifically stated otherwise. In other words, use of the articles allows for "at least one" of the subject item in the description above as well as in the claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for the use of exclusive terminology such as "solely," "only" and the like in connection with the recitation of claim elements, or for the use of a "negative" limitation.
Without the use of such exclusive terminology, the term "comprising" in claims associated with this disclosure shall allow for the inclusion of any additional element, irrespective of whether a given number of elements is enumerated in such claims, or whether the addition of a feature could be regarded as transforming the nature of an element set forth in such claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity. The breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of the claim language associated with this disclosure.
Claims (143)
1. An augmented reality (AR) display system, comprising:
an electromagnetic field emitter configured to emit a known magnetic field in a known coordinate system;
an electromagnetic sensor configured to measure a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field;
a depth sensor configured to measure a distance in the known coordinate system;
a controller configured to determine pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system, based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor; and
a display system configured to display virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.
2. The AR display system of claim 1, wherein the depth sensor is a passive stereo depth sensor.
3. The AR display system of claim 1, wherein the depth sensor is an active depth sensor.
4. The AR display system of claim 3, wherein the depth sensor is a texture projection stereo depth sensor.
5. The AR display system of claim 3, wherein the depth sensor is a structured light projection stereo depth sensor.
6. The AR display system of claim 3, wherein the depth sensor is a time-of-flight depth sensor.
7. The AR display system of claim 3, wherein the depth sensor is a LIDAR depth sensor.
8. The AR display system of claim 3, wherein the depth sensor is a modulated emission depth sensor.
9. The AR display system of claim 1, wherein the depth sensor includes a depth camera having a first field of view (FOV).
10. The AR display system of claim 9, further comprising a world capturing camera, wherein the world capturing camera has a second FOV at least partially overlapping the first FOV.
11. The AR display system of claim 10, further comprising a picture camera, wherein the picture camera has a third FOV at least partially overlapping the first FOV and the second FOV.
12. The AR display system of claim 11, wherein the depth camera, the world capturing camera and the picture camera have respective different first, second and third resolutions.
13. The AR display system of claim 12, wherein the first resolution of the depth camera is sub-VGA, the second resolution of the world capturing camera is 720p, and the third resolution of the picture camera is 2 megapixels.
14. The AR display system of claim 11, wherein the depth camera, the world capturing camera and the picture camera are configured to capture respective first, second and third images.
15. The AR display system of claim 14, wherein the controller is programmed to segment the second image and the third image.
16. The AR display system of claim 15, wherein the controller is programmed to, after segmenting the second image and the third image, fuse the second image and the third image to generate a fused image.
17. The AR display system of claim 16, wherein measuring the distance in the known coordinate system comprises:
generating a hypothetical distance by analyzing the first image from the depth camera; and
generating the distance by analyzing the hypothetical distance and the fused image.
18. The AR display system of claim 11, wherein the depth camera, the world capturing camera and the picture camera form a single integrated sensor.
19. The AR display system of claim 1, further comprising an additional localization resource configured to provide additional information, wherein the pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system is determined based at least in part on the parameter related to the magnetic flux measured by the electromagnetic sensor, the distance measured by the depth sensor, and the additional information provided by the additional localization resource.
20. The AR display system of claim 19, wherein the additional localization resource includes a WiFi transceiver.
21. The AR display system of claim 19, wherein the additional localization resource includes an additional electromagnetic emitter.
22. The AR display system of claim 19, wherein the additional localization resource includes an additional electromagnetic sensor.
23. The AR display system of claim 19, wherein the additional localization resource includes a beacon.
24. The AR display system of claim 23, wherein the beacon emits radiation.
25. The AR display system of claim 24, wherein the radiation is infrared radiation, and wherein the beacon includes an infrared LED.
26. The AR display system of claim 19, wherein the additional localization resource includes a reflector.
27. The AR display system of claim 26, wherein the reflector reflects radiation.
28. The AR display system of claim 19, wherein the additional localization resource includes a cellular network transceiver.
29. The AR display system of claim 19, wherein the additional localization resource includes a RADAR emitter.
30. The AR display system of claim 19, wherein the additional localization resource includes a RADAR detector.
31. The AR display system of claim 19, wherein the additional localization resource includes a LIDAR emitter.
32. The AR display system of claim 19, wherein the additional localization resource includes a LIDAR detector.
33. The AR display system of claim 19, wherein the additional localization resource includes a GPS transceiver.
34. The AR display system of claim 19, wherein the additional localization resource includes a poster having a known detectable pattern.
35. The AR display system of claim 19, wherein the additional localization resource includes a marker having a known detectable pattern.
36. The AR display system of claim 19, wherein the additional localization resource includes an inertial measurement unit.
37. The AR display system of claim 19, wherein the additional localization resource includes a strain gauge.
38. The AR display system of claim 1, wherein the electromagnetic field emitter is coupled to a movable component of the AR display system.
39. The AR display system of claim 38, wherein the movable component is a hand-held component.
40. The AR display system of claim 39, wherein the movable component is a totem.
41. The AR display system of claim 38, wherein the movable component is a head-mounted component that houses the display system.
42. The AR display system of claim 38, wherein the movable component is a torso-wearable component.
43. The AR display system of claim 42, wherein the torso-wearable component is a belt pack.
44. The AR display system of claim 1, wherein the electromagnetic field emitter is coupled to an object in the known coordinate system, such that the electromagnetic field emitter has a known position and a known orientation.
45. The AR display system of claim 44, wherein the electromagnetic sensor is coupled to a movable component of the AR display system.
46. The AR display system of claim 45, wherein the movable component is a hand-held component.
47. The AR display system of claim 46, wherein the movable component is a totem.
48. The AR display system of claim 45, wherein the movable component is a head-mounted component that houses the display system.
49. The AR display system of claim 45, wherein the movable component is a torso-wearable component.
50. The AR display system of claim 49, wherein the torso-wearable component is a belt pack.
51. The AR display system of claim 1, wherein the pose information includes a position and an orientation of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system.
52. The AR display system of claim 1, wherein the controller analyzes the pose information to determine a position and an orientation of the electromagnetic sensor in the known coordinate system.
53. A method of displaying augmented reality, the method comprising:
emitting, using an electromagnetic field emitter, a known magnetic field in a known coordinate system;
measuring, using an electromagnetic sensor, a parameter related to a magnetic flux at the electromagnetic sensor resulting from the known magnetic field;
measuring, using a depth sensor, a distance in the known coordinate system;
determining pose information of the electromagnetic sensor relative to the electromagnetic field emitter in the known coordinate system, based at least in part on the parameter related to the magnetic flux measured using the electromagnetic sensor and the distance measured using the depth sensor; and
displaying virtual content to a user based at least in part on the pose information of the electromagnetic sensor relative to the electromagnetic field emitter.
54. The method of claim 53, wherein the depth sensor is a passive stereo depth sensor.
55. The method of claim 53, wherein the depth sensor is an active depth sensor.
56. The method of claim 55, wherein the depth sensor is a texture projection stereo depth sensor.
57. The method of claim 55, wherein the depth sensor is a structured light projection stereo depth sensor.
58. The method of claim 55, wherein the depth sensor is a time-of-flight depth sensor.
59. The method of claim 55, wherein the depth sensor is a LIDAR depth sensor.
60. The method of claim 55, wherein the depth sensor is a modulated emission depth sensor.
61. The method of claim 53, wherein the depth sensor includes a depth camera having a first field of view (FOV).
62. The method of claim 61, wherein the depth sensor further includes a world capturing camera, wherein the world capturing camera has a second FOV at least partially overlapping the first FOV.
63. The method of claim 62, wherein the depth sensor further includes a picture camera, wherein the picture camera has a third FOV at least partially overlapping the first FOV and the second FOV.
64. The method of claim 63, wherein the depth camera, the world capturing camera and the picture camera have respective different first, second and third resolutions.
65. The method of claim 64, wherein the first resolution of the depth camera is sub-VGA, the second resolution of the world capturing camera is 720p, and the third resolution of the picture camera is 2 megapixels.
66. The method of claim 63, further comprising capturing respective first, second and third images using the depth camera, the world capturing camera and the picture camera.
67. The method of claim 66, further comprising segmenting the second image and the third image.
68. The method of claim 67, further comprising, after segmenting the second image and the third image, fusing the second image and the third image to generate a fused image.
69. The method of claim 68, wherein measuring the distance in the known coordinate system comprises:
generating a hypothetical distance by analyzing the first image from the depth camera; and
generating the distance by analyzing the hypothetical distance and the fused image.
70. The method of claim 63, wherein the depth camera, the world capturing camera and the picture camera form a single integrated sensor.
71. method according to claim 53, further comprises:It is based at least partially on and is surveyed using the electromagnetic sensor
The distance of amount measured with the relevant parameter of magnetic flux, using the depth transducer and by adding locating resource
The additional information of offer determines institute of the electromagnetic sensor relative to the electromagnetic field transmitter described in the known coordinate system
State pose information.
72. The method of claim 71, wherein the additional localization resource includes a WiFi transceiver.
73. The method of claim 71, wherein the additional localization resource includes an additional electromagnetic transmitter.
74. The method of claim 71, wherein the additional localization resource includes an additional electromagnetic sensor.
75. The method of claim 71, wherein the additional localization resource includes a beacon.
76. The method of claim 75, further comprising the beacon emitting radiation.
77. The method of claim 76, wherein the radiation is infrared radiation, and wherein the beacon includes an infrared LED.
78. The method of claim 71, wherein the additional localization resource includes a reflector.
79. The method of claim 78, further comprising the reflector reflecting radiation.
80. The method of claim 71, wherein the additional localization resource includes a cellular network transceiver.
81. The method of claim 71, wherein the additional localization resource includes a radar transmitter.
82. The method of claim 71, wherein the additional localization resource includes a radar detector.
83. The method of claim 71, wherein the additional localization resource includes a LIDAR transmitter.
84. The method of claim 71, wherein the additional localization resource includes a LIDAR detector.
85. The method of claim 71, wherein the additional localization resource includes a GPS transceiver.
86. The method of claim 71, wherein the additional localization resource includes a poster having a known detectable pattern.
87. The method of claim 71, wherein the additional localization resource includes a marker having a known detectable pattern.
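Claims 86-87 recite posters and markers bearing a known detectable pattern. ArUco fiducials are one widely used instance of such a pattern; the hypothetical sketch below synthesizes a marker, detects it in a frame, and prints the corner pixels that a localization pipeline could feed (with the known marker size) into a pose solve. This requires opencv-contrib-python, and the detector API shown is the OpenCV >= 4.7 form — it is an illustration, not the patent's detection method.

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Synthesize a test frame: marker id 7 pasted onto a light background.
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
frame = np.full((480, 640), 220, dtype=np.uint8)
frame[140:340, 220:420] = marker

corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        # Each quad holds the four detected corner pixels of one marker.
        print(f"marker {marker_id}: corners\n{np.squeeze(quad)}")
```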
88. The method of claim 71, wherein the additional localization resource includes an inertial measurement unit.
89. The method of claim 71, wherein the additional localization resource includes a strain gauge.
90. The method of claim 53, wherein the electromagnetic field transmitter is coupled to a movable component of the AR display system.
91. The method of claim 90, wherein the movable component is a hand-held component.
92. The method of claim 91, wherein the movable component is a totem.
93. The method of claim 90, wherein the movable component is a head-mounted component that houses the display system.
94. The method of claim 90, wherein the movable component is a torso-wearable component.
95. The method of claim 94, wherein the torso-wearable component is a belt pack.
96. The method of claim 53, wherein the electromagnetic field transmitter is coupled to an object in the known coordinate system such that the electromagnetic field transmitter has a known position and a known orientation.
97. The method of claim 96, wherein the electromagnetic sensor is coupled to a movable component of the AR display system.
98. The method of claim 97, wherein the movable component is a hand-held component.
99. The method of claim 98, wherein the movable component is a totem.
100. The method of claim 97, wherein the movable component is a head-mounted component that houses the display system.
101. The method of claim 97, wherein the movable component is a torso-wearable component.
102. The method of claim 101, wherein the torso-wearable component is a belt pack.
103. The method of claim 53, wherein the pose information includes a position and an orientation of the electromagnetic sensor relative to the electromagnetic field transmitter in the known coordinate system.
104. The method of claim 53, further comprising analyzing the pose information to determine a position and an orientation of the electromagnetic sensor in the known coordinate system.
105. An augmented reality display system, comprising:
a hand-held component coupled to an electromagnetic field transmitter, the electromagnetic field transmitter emitting a magnetic field;
a head-mounted component having a display system that displays virtual content to a user, the head-mounted component being coupled to an electromagnetic sensor that measures a parameter related to magnetic flux at the electromagnetic sensor resulting from the magnetic field, wherein a head pose of the head-mounted component in a known coordinate system is known;
a depth sensor that measures a distance in the known coordinate system; and
a controller communicatively coupled to the hand-held component, the head-mounted component, and the depth sensor, the controller receiving the parameter related to magnetic flux at the electromagnetic sensor from the head-mounted component and receiving the distance from the depth sensor,
wherein the controller determines a hand pose of the hand-held component based at least in part on the parameter related to magnetic flux measured by the electromagnetic sensor and the distance measured by the depth sensor, and
wherein the system modifies the virtual content displayed to the user based at least in part on the hand pose.
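Claim 105's controller can be pictured as a small update loop: read the flux parameter from the head-mounted sensor, read the depth distance, resolve the hand-held component's pose, and hand that pose to the renderer. The sketch below is an assumed structure, not the patent's implementation; the flux-to-range calibration, the 0.7/0.3 blend weights, and the simplified "hand in front of head" geometry are all illustrative placeholders.

```python
import numpy as np

class Controller:
    """Minimal sketch of the claim-105 control flow (assumed structure)."""

    def __init__(self, flux_to_range):
        self.flux_to_range = flux_to_range  # calibration: flux reading -> distance (m)
        self.hand_pose = {"position": np.zeros(3),
                          "orientation": np.array([0.0, 0.0, 0.0, 1.0])}  # quaternion

    def update(self, flux_reading, depth_distance, head_position):
        em_range = self.flux_to_range(flux_reading)
        # Blend the EM-derived and depth-sensor ranges (weights are arbitrary here).
        r = 0.7 * depth_distance + 0.3 * em_range
        # Place the hand r metres in front of the head (direction simplified).
        self.hand_pose["position"] = head_position + np.array([0.0, 0.0, -r])
        return self.hand_pose

# Dipole flux falls off roughly as 1/r^3, so a toy inverse calibration is r = f^(-1/3).
controller = Controller(flux_to_range=lambda f: (1.0 / max(f, 1e-9)) ** (1.0 / 3.0))
pose = controller.update(flux_reading=0.8, depth_distance=1.05,
                         head_position=np.array([0.0, 1.6, 0.0]))
print(pose)  # the renderer would modify the displayed virtual content from this pose
```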
106. The AR display system of claim 105, wherein the depth sensor is a passive stereo depth sensor.
107. The AR display system of claim 105, wherein the depth sensor is an active depth sensor.
108. The AR display system of claim 107, wherein the depth sensor is a texture projection stereo depth sensor.
109. The AR display system of claim 107, wherein the depth sensor is a structured light projection stereo depth sensor.
110. The AR display system of claim 107, wherein the depth sensor is a time-of-flight depth sensor.
111. The AR display system of claim 107, wherein the depth sensor is a LIDAR depth sensor.
112. The AR display system of claim 107, wherein the depth sensor is a modulated emission depth sensor.
113. The AR display system of claim 105, wherein the depth sensor includes a depth camera having a first field of view (FOV).
114. The AR display system of claim 113, further comprising a world capture camera, wherein the world capture camera has a second FOV that at least partially overlaps the first FOV.
115. The AR display system of claim 114, further comprising a picture camera, wherein the picture camera has a third FOV that at least partially overlaps the first FOV and the second FOV.
116. The AR display system of claim 115, wherein the depth camera, the world capture camera, and the picture camera have respective different first, second, and third resolutions.
117. The AR display system of claim 116, wherein the first resolution of the depth camera is sub-VGA, the second resolution of the world capture camera is 720p, and the third resolution of the picture camera is 2 megapixels.
118. The AR display system of claim 115, wherein the depth camera, the world capture camera, and the picture camera are configured to capture a first image, a second image, and a third image, respectively.
119. The AR display system of claim 118, wherein the controller is programmed to segment the second image and the third image.
120. The AR display system of claim 119, wherein the controller is programmed to fuse the second image and the third image to generate a fused image after segmenting the second image and the third image.
121. The AR display system of claim 120, wherein measuring the distance in the known coordinate system includes:
generating a hypothetical distance by analyzing the first image from the depth camera; and
generating the distance by analyzing the hypothetical distance and the fused image.
122. The AR display system of claim 115, wherein the depth camera, the world capture camera, and the picture camera form a single integrated sensor.
123. The AR display system of claim 105, further comprising an additional localization resource to provide additional information, wherein the controller determines the hand pose of the hand-held component based at least in part on the parameter related to magnetic flux measured by the electromagnetic sensor, the distance measured by the depth sensor, and the additional information provided by the additional localization resource.
124. The AR display system of claim 123, wherein the additional localization resource includes a WiFi transceiver.
125. The AR display system of claim 123, wherein the additional localization resource includes an additional electromagnetic transmitter.
126. The AR display system of claim 123, wherein the additional localization resource includes an additional electromagnetic sensor.
127. The AR display system of claim 123, wherein the additional localization resource includes a beacon.
128. The AR display system of claim 127, wherein the beacon emits radiation.
129. The AR display system of claim 128, wherein the radiation is infrared radiation, and wherein the beacon includes an infrared LED.
130. The AR display system of claim 123, wherein the additional localization resource includes a reflector.
131. The AR display system of claim 130, wherein the reflector reflects radiation.
132. The AR display system of claim 123, wherein the additional localization resource includes a cellular network transceiver.
133. The AR display system of claim 123, wherein the additional localization resource includes a radar transmitter.
134. The AR display system of claim 123, wherein the additional localization resource includes a radar detector.
135. The AR display system of claim 123, wherein the additional localization resource includes a LIDAR transmitter.
136. The AR display system of claim 123, wherein the additional localization resource includes a LIDAR detector.
137. The AR display system of claim 123, wherein the additional localization resource includes a GPS transceiver.
138. The AR display system of claim 123, wherein the additional localization resource includes a poster having a known detectable pattern.
139. The AR display system of claim 123, wherein the additional localization resource includes a marker having a known detectable pattern.
140. The AR display system of claim 123, wherein the additional localization resource includes an inertial measurement unit.
141. The AR display system of claim 123, wherein the additional localization resource includes a strain gauge.
142. The AR display system of claim 105, wherein the hand-held component is a totem.
143. The AR display system of claim 105, wherein the hand pose includes a position and an orientation of the hand-held component in the known coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210650785.1A CN114995647A (en) | 2016-02-05 | 2017-02-06 | System and method for augmented reality |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662292185P | 2016-02-05 | 2016-02-05 | |
US62/292,185 | 2016-02-05 | ||
US201662298993P | 2016-02-23 | 2016-02-23 | |
US62/298,993 | 2016-02-23 | ||
US15/062,104 US20160259404A1 (en) | 2015-03-05 | 2016-03-05 | Systems and methods for augmented reality |
US15/062,104 | 2016-03-05 | ||
PCT/US2017/016722 WO2017136833A1 (en) | 2016-02-05 | 2017-02-06 | Systems and methods for augmented reality |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210650785.1A Division CN114995647A (en) | 2016-02-05 | 2017-02-06 | System and method for augmented reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108700939A true CN108700939A (en) | 2018-10-23 |
CN108700939B CN108700939B (en) | 2022-07-05 |
Family
ID=59501080
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780010073.0A Active CN108700939B (en) | 2016-02-05 | 2017-02-06 | System and method for augmented reality |
CN202210650785.1A Pending CN114995647A (en) | 2016-02-05 | 2017-02-06 | System and method for augmented reality |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210650785.1A Pending CN114995647A (en) | 2016-02-05 | 2017-02-06 | System and method for augmented reality |
Country Status (8)
Country | Link |
---|---|
EP (1) | EP3411779A4 (en) |
JP (2) | JP2019505926A (en) |
KR (1) | KR20180110051A (en) |
CN (2) | CN108700939B (en) |
AU (1) | AU2017214748B9 (en) |
CA (1) | CA3011377A1 (en) |
IL (3) | IL301449B1 (en) |
WO (1) | WO2017136833A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110223686A (en) * | 2019-05-31 | 2019-09-10 | 联想(北京)有限公司 | Audio recognition method, speech recognition equipment and electronic equipment |
CN111596756A (en) * | 2019-02-21 | 2020-08-28 | 脸谱科技有限责任公司 | Tracking a position of a portion of a device based on detection of a magnetic field |
CN111652261A (en) * | 2020-02-26 | 2020-09-11 | 南开大学 | Multi-modal perception fusion system |
CN111830444A (en) * | 2019-04-16 | 2020-10-27 | 阿森松技术公司 | Determining position and orientation from a Helmholtz apparatus |
CN113194818A (en) * | 2018-10-30 | 2021-07-30 | 波士顿科学国际有限公司 | Apparatus and method for body cavity treatment |
CN113539039A (en) * | 2021-09-01 | 2021-10-22 | 山东柏新医疗制品有限公司 | Training device and method for drug application operation of male dilatation catheter |
CN114882773A (en) * | 2022-05-24 | 2022-08-09 | 华北电力大学(保定) | Magnetic field learning system based on Augmented Reality |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016141373A1 (en) | 2015-03-05 | 2016-09-09 | Magic Leap, Inc. | Systems and methods for augmented reality |
US10838207B2 (en) | 2015-03-05 | 2020-11-17 | Magic Leap, Inc. | Systems and methods for augmented reality |
US10180734B2 (en) | 2015-03-05 | 2019-01-15 | Magic Leap, Inc. | Systems and methods for augmented reality |
KR20180090355A (en) | 2015-12-04 | 2018-08-10 | 매직 립, 인코포레이티드 | Recirculation systems and methods |
EP3494549A4 (en) | 2016-08-02 | 2019-08-14 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods |
US10812936B2 (en) | 2017-01-23 | 2020-10-20 | Magic Leap, Inc. | Localization determination for mixed reality systems |
JP7055815B2 (en) | 2017-03-17 | 2022-04-18 | マジック リープ, インコーポレイテッド | A mixed reality system that involves warping virtual content and how to use it to generate virtual content |
KR102366781B1 (en) | 2017-03-17 | 2022-02-22 | 매직 립, 인코포레이티드 | Mixed reality system with color virtual content warping and method for creating virtual content using same |
CA3054617A1 (en) | 2017-03-17 | 2018-09-20 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same |
CN107807738B (en) * | 2017-12-04 | 2023-08-15 | 成都思悟革科技有限公司 | Head motion capturing system and method for VR display glasses |
US10558260B2 (en) | 2017-12-15 | 2020-02-11 | Microsoft Technology Licensing, Llc | Detecting the pose of an out-of-range controller |
CN108269310A (en) * | 2018-03-20 | 2018-07-10 | 公安部上海消防研究所 | A kind of interactive exhibition system, method and device |
EP3827299A4 (en) | 2018-07-23 | 2021-10-27 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
CN117711284A (en) | 2018-07-23 | 2024-03-15 | 奇跃公司 | In-field subcode timing in a field sequential display |
JP7445642B2 (en) * | 2018-08-13 | 2024-03-07 | マジック リープ, インコーポレイテッド | cross reality system |
EP4286894A3 (en) | 2018-09-05 | 2024-01-24 | Magic Leap, Inc. | Directed emitter/sensor for electromagnetic tracking in augmented reality systems |
CN113196209A (en) | 2018-10-05 | 2021-07-30 | 奇跃公司 | Rendering location-specific virtual content at any location |
US11353588B2 (en) | 2018-11-01 | 2022-06-07 | Waymo Llc | Time-of-flight sensor with structured light illuminator |
EP4046401A4 (en) | 2019-10-15 | 2023-11-01 | Magic Leap, Inc. | Cross reality system with wireless fingerprints |
EP4059007A4 (en) | 2019-11-12 | 2023-11-01 | Magic Leap, Inc. | Cross reality system with localization service and shared location-based content |
US11562542B2 (en) | 2019-12-09 | 2023-01-24 | Magic Leap, Inc. | Cross reality system with simplified programming of virtual content |
WO2021163306A1 (en) | 2020-02-13 | 2021-08-19 | Magic Leap, Inc. | Cross reality system with accurate shared maps |
US11830149B2 (en) | 2020-02-13 | 2023-11-28 | Magic Leap, Inc. | Cross reality system with prioritization of geolocation information for localization |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2142338C (en) * | 1992-08-14 | 1999-11-30 | John Stuart Bladen | Position location system |
CN101530325A (en) * | 2008-02-29 | 2009-09-16 | 韦伯斯特生物官能公司 | Location system with virtual touch screen |
US20110238399A1 (en) * | 2008-11-19 | 2011-09-29 | Elbit Systems Ltd. | System and a method for mapping a magnetic field |
EP2887311A1 (en) * | 2013-12-20 | 2015-06-24 | Thomson Licensing | Method and apparatus for performing depth estimation |
US20160026253A1 (en) * | 2014-03-11 | 2016-01-28 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
CN205007551U (en) * | 2015-08-19 | 2016-02-03 | 深圳游视虚拟现实技术有限公司 | Human -computer interaction system based on virtual reality technology |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2358682A1 (en) * | 1992-08-14 | 1994-03-03 | British Telecommunications Public Limited Company | Position location system |
JP2001208529A (en) * | 2000-01-26 | 2001-08-03 | Mixed Reality Systems Laboratory Inc | Measuring apparatus, control method thereof and memory medium |
US20010056574A1 (en) * | 2000-06-26 | 2001-12-27 | Richards Angus Duncan | VTV system |
US20070155589A1 (en) * | 2002-12-04 | 2007-07-05 | Philip Feldman | Method and Apparatus for Operatively Controlling a Virtual Reality Scenario with an Isometric Exercise System |
JP2007134785A (en) * | 2005-11-08 | 2007-05-31 | Konica Minolta Photo Imaging Inc | Head mounted video display apparatus |
KR20090055803A (en) * | 2007-11-29 | 2009-06-03 | 광주과학기술원 | Method and apparatus for generating multi-viewpoint depth map, method for generating disparity of multi-viewpoint image |
US8405680B1 (en) * | 2010-04-19 | 2013-03-26 | YDreams S.A., A Public Limited Liability Company | Various methods and apparatuses for achieving augmented reality |
JP6202981B2 (en) * | 2013-10-18 | 2017-09-27 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and information processing method |
US20150358539A1 (en) * | 2014-06-06 | 2015-12-10 | Jacob Catt | Mobile Virtual Reality Camera, Method, And System |
2017
- 2017-02-06 EP EP17748352.6A patent/EP3411779A4/en not_active Withdrawn
- 2017-02-06 WO PCT/US2017/016722 patent/WO2017136833A1/en active Application Filing
- 2017-02-06 CN CN201780010073.0A patent/CN108700939B/en active Active
- 2017-02-06 IL IL301449A patent/IL301449B1/en unknown
- 2017-02-06 AU AU2017214748A patent/AU2017214748B9/en active Active
- 2017-02-06 JP JP2018540434A patent/JP2019505926A/en not_active Withdrawn
- 2017-02-06 CA CA3011377A patent/CA3011377A1/en active Pending
- 2017-02-06 CN CN202210650785.1A patent/CN114995647A/en active Pending
- 2017-02-06 KR KR1020187025638A patent/KR20180110051A/en not_active Application Discontinuation
2018
- 2018-07-16 IL IL260614A patent/IL260614A/en unknown
2021
- 2021-10-13 JP JP2021168082A patent/JP7297028B2/en active Active
2022
- 2022-06-09 IL IL293782A patent/IL293782B2/en unknown
Also Published As
Publication number | Publication date |
---|---|
CN114995647A (en) | 2022-09-02 |
IL301449B1 (en) | 2024-02-01 |
CN108700939B (en) | 2022-07-05 |
AU2017214748B9 (en) | 2021-05-27 |
JP2022002144A (en) | 2022-01-06 |
IL293782A (en) | 2022-08-01 |
IL260614A (en) | 2018-08-30 |
CA3011377A1 (en) | 2017-08-10 |
WO2017136833A1 (en) | 2017-08-10 |
EP3411779A1 (en) | 2018-12-12 |
JP7297028B2 (en) | 2023-06-23 |
IL293782B1 (en) | 2023-04-01 |
IL293782B2 (en) | 2023-08-01 |
AU2017214748A1 (en) | 2018-08-02 |
EP3411779A4 (en) | 2019-02-20 |
AU2017214748B2 (en) | 2021-05-06 |
IL301449A (en) | 2023-05-01 |
JP2019505926A (en) | 2019-02-28 |
KR20180110051A (en) | 2018-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10678324B2 (en) | Systems and methods for augmented reality | |
US11429183B2 (en) | Systems and methods for augmented reality | |
CN108700939A (en) | System and method for augmented reality | |
US11403827B2 (en) | Method and system for resolving hemisphere ambiguity using a position vector | |
US11256090B2 (en) | Systems and methods for augmented reality | |
NZ735802A (en) | Traffic diversion signalling system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||