US20230360242A1 - System and method for concurrent odometry and mapping - Google Patents
- Publication number
- US20230360242A1 (Application No. US 18/224,414)
- Authority
- United States
- Prior art keywords
- electronic device
- feature descriptors
- environment
- module
- pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/55—Depth or shape recovery from multiple images
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/038—Control and interface arrangements for pointing devices, e.g. drivers or device-embedded control circuitry
- G06T17/05—Geographic models
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
- G01S7/4817—Constructional features, e.g. arrangements of optical elements, relating to scanning
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30244—Camera pose
- H04N13/156—Mixing image signals
- H04N13/214—Image signal generators using stereoscopic image cameras using a single 2D image sensor using spectral multiplexing
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
Definitions
- the present disclosure relates generally to imagery capture and processing and more particularly to machine vision using captured imagery.
- Machine vision and display techniques such as simultaneous localization and mapping (SLAM), visual inertial odometry (VIO), area learning applications, augmented reality (AR), and virtual reality (VR), often rely on the identification of objects within the local environment of a device through the analysis of imagery of the local environment captured by the device.
- the device navigates an environment while simultaneously constructing a map of the environment or augmenting an existing map or maps of the environment.
- conventional techniques for tracking motion while building a map of the environment typically take a relatively significant amount of time and resources and accumulate errors, thereby limiting the utility and effectiveness of the machine vision techniques.
- FIG. 1 is a diagram illustrating an electronic device configured to estimate a pose of an electronic device in a local environment using pose information generated based on non-image sensor data and image sensor data while building a map of the local environment in accordance with at least one embodiment of the present disclosure.
- FIG. 2 is a diagram illustrating a concurrent odometry and mapping module of the electronic device of FIG. 1 configured to estimate a current pose of the electronic device and update a map of the environment to localize the estimated current pose in accordance with at least one embodiment of the present disclosure.
- FIG. 3 is a diagram illustrating a motion tracking module of the concurrent odometry and mapping module of FIG. 2 configured to generate feature descriptors of spatial features of objects in the environment for updating a map of the environment and estimating a pose of the electronic device in accordance with at least one embodiment of the present disclosure.
- FIG. 4 is a diagram illustrating a mapping module of the concurrent odometry and mapping module of FIG. 2 configured to generate and add to a three-dimensional representation of the environment of the electronic device based on generated feature descriptors and a plurality of stored maps in accordance with at least one embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating a localization module of the concurrent odometry and mapping module of FIG. 2 configured to generate a localized pose of the electronic device in accordance with at least one embodiment of the present disclosure.
- FIG. 6 is a flow diagram illustrating an operation of an electronic device to track motion and update a three-dimensional representation of the environment in accordance with at least one embodiment of the present disclosure.
- FIGS. 1 - 6 illustrate various techniques for tracking motion of an electronic device in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion.
- a front-end motion tracking module receives sensor data from visual, inertial, and depth sensors and tracks motion (i.e., estimates poses over time) of the electronic device that can be used by an application programming interface (API).
- the front-end motion tracking module estimates poses over time based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment and estimates the three-dimensional positions (referred to as 3D point positions) of the spatial features.
- the front-end motion tracking module also provides the captured feature descriptors and estimated device pose to a back-end mapping module.
- the back-end mapping module is configured to store a plurality of maps based on stored feature descriptors, and to periodically receive additional feature descriptors and estimated device poses from the front-end motion tracking module as they are generated while the electronic device moves through the environment.
- the back-end mapping module builds a three-dimensional visual representation (map) of the environment based on the stored plurality of maps and the received feature descriptors.
- the back-end mapping module provides the three-dimensional visual representation of the environment to a localization module, which compares generated feature descriptors to stored feature descriptors from the stored plurality of maps, and identifies correspondences between stored and observed feature descriptors.
- the localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose.
- the localized pose corrects drift in the estimated pose generated by the front-end motion tracking module, and is periodically sent to the front-end motion tracking module for output to the API.
- by separately tracking motion based on visual and inertial sensor data, building a three-dimensional representation of the environment based on a plurality of stored maps as well as periodically updated generated feature descriptors, and correcting drift in the tracked motion by performing a loop closure between the generated feature descriptors and the three-dimensional representation, the electronic device can perform highly accurate motion tracking and map-building of an environment even with constrained resources, allowing the electronic device to record a representation of the environment and therefore recognize re-visits to the same environment over multiple sessions.
- the front-end motion tracking module maintains only a limited history of tracked motion (e.g., tracked motion data for only a single prior session, or a single prior time period) and treats any previously-generated feature point position estimates as fixed, thus limiting the computational burden of calculating an estimated current pose and 3D point positions and thus enabling the front-end tracking module to update the estimated current pose at a relatively high rate.
- the back-end mapping module maintains a more extensive history of the 3D point positions in the environment and poses of the electronic device, thus enabling the back-end mapping module to build a more accurate three-dimensional representation of the environment based on the stored maps and the observed feature descriptors received from the front-end motion tracking module.
- because the back-end mapping module carries a heavier computational burden to build the three-dimensional representation of the environment based on a plurality of stored maps and to update the three-dimensional representation based on periodic inputs of additional generated feature descriptors from the front-end motion tracking module, the back-end mapping module updates the three-dimensional representation of the environment at a relatively slow rate.
- the localization module optimizes the three-dimensional representation and estimated current pose by solving a co-optimization algorithm that treats previously-generated 3D point positions as variable. The localization module thus corrects drift in the estimated current pose to generate a localized pose, and sends the localized pose to the front-end motion tracking module for output to the API.
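- To make the division of labor concrete, the sketch below wires a high-rate front-end tracker to a low-rate back-end mapper through a queue, with the localized pose fed back for output. This is a minimal illustration only: the threading model, rates, queue payloads, and function names are assumptions, not details taken from the disclosure.

```python
import queue
import threading
import time

descriptor_queue = queue.Queue()   # front end -> back end: descriptors + poses
state_lock = threading.Lock()
latest_localized_pose = None       # drift-corrected pose fed back to the front end

def track_motion(prev_pose):
    # Stand-in for the high-rate VIO update from visual/inertial/depth data.
    return prev_pose + 0.001

def front_end(stop_event):
    pose = 0.0
    while not stop_event.is_set():
        pose = track_motion(pose)                      # locally accurate estimate
        descriptor_queue.put({"pose": pose, "descriptors": []})
        with state_lock:
            corrected = latest_localized_pose
        output_pose = corrected if corrected is not None else pose  # to the API
        time.sleep(0.01)                               # high-rate loop (~100 Hz)

def back_end(stop_event):
    # A real back end would run far less often than the tracker, since its
    # map-building optimization is much more expensive.
    global latest_localized_pose
    stored_maps = []                                   # compressed session history
    while not stop_event.is_set():
        try:
            batch = descriptor_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        stored_maps.append(batch)                      # fold into the 3D map
        with state_lock:
            latest_localized_pose = batch["pose"]      # localization would refine this

stop = threading.Event()
threads = [threading.Thread(target=front_end, args=(stop,)),
           threading.Thread(target=back_end, args=(stop,))]
for t in threads:
    t.start()
time.sleep(0.5)
stop.set()
for t in threads:
    t.join()
```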
- FIG. 1 illustrates an electronic device 100 configured to support location-based functionality, such as SLAM, VR, or AR, using image and non-visual sensor data in accordance with at least one embodiment of the present disclosure.
- the electronic device 100 can include a user-portable mobile device, such as a tablet computer, computing-enabled cellular phone (e.g., a “smartphone”), a head-mounted display (HMD), a notebook computer, a personal digital assistant (PDA), a gaming system remote, a television remote, and the like.
- the electronic device 100 can include another type of mobile device, such as an automobile, robot, remote-controlled drone or other airborne device, and the like.
- the electronic device 100 is generally described herein in the example context of a mobile device, such as a tablet computer or a smartphone; however, the electronic device 100 is not limited to these example implementations.
- the electronic device 100 includes a housing 102 having a surface 104 opposite another surface 106 .
- the surfaces 104 and 106 are substantially parallel and the housing 102 further includes four side surfaces (top, bottom, left, and right) between the surface 104 and surface 106 .
- the housing 102 may be implemented in many other form factors, and the surfaces 104 and 106 may have a non-parallel orientation.
- the electronic device 100 includes a display 108 disposed at the surface 106 for presenting visual information to a user 110 .
- the surface 106 is referred to herein as the “forward-facing” surface and the surface 104 is referred to herein as the “user-facing” surface as a reflection of this example orientation of the electronic device 100 relative to the user 110 , although the orientation of these surfaces is not limited by these relational designations.
- the electronic device 100 includes a plurality of sensors to obtain information regarding a local environment 112 of the electronic device 100 .
- the electronic device 100 obtains visual information (imagery) for the local environment 112 via imaging sensors 114 and 116 and a depth sensor 120 disposed at the forward-facing surface 106 and an imaging sensor 118 disposed at the user-facing surface 104 .
- the imaging sensor 114 is implemented as a wide-angle imaging sensor having a fish-eye lens or other wide-angle lens to provide a wider-angle view of the local environment 112 facing the surface 106 .
- the imaging sensor 116 is implemented as a narrow-angle imaging sensor having a typical angle of view lens to provide a narrower angle view of the local environment 112 facing the surface 106 .
- the imaging sensor 114 and the imaging sensor 116 are also referred to herein as the “wide-angle imaging sensor 114 ” and the “narrow-angle imaging sensor 116 ,” respectively.
- the wide-angle imaging sensor 114 and the narrow-angle imaging sensor 116 can be positioned and oriented on the forward-facing surface 106 such that their fields of view overlap starting at a specified distance from the electronic device 100 , thereby enabling depth sensing of objects in the local environment 112 that are positioned in the region of overlapping fields of view via image analysis.
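- For features in the overlapping fields of view, depth can be recovered by standard stereo triangulation. A minimal sketch, assuming an already-rectified image pair; the focal length, baseline, and pixel coordinates are invented values:

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth from horizontal disparity in a rectified stereo pair: Z = f * B / d."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity

# A feature at x=640 px in one image and x=600 px in the other, with a
# 60 mm baseline and a 500 px focal length, sits 0.75 m away:
print(stereo_depth(500.0, 0.06, 640.0, 600.0))  # 0.75
```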
- the imaging sensor 118 can be used to capture image data for the local environment 112 facing the surface 104 . Further, in some embodiments, the imaging sensor 118 is configured for tracking the movements of the head 122 or for facial recognition, and thus providing head tracking information that may be used to adjust a view perspective of imagery presented via the display 108 .
- the depth sensor 120 uses a modulated light projector 119 to project modulated light patterns from the forward-facing surface 106 into the local environment, and uses one or both of imaging sensors 114 and 116 to capture reflections of the modulated light patterns as they reflect back from objects in the local environment 112 .
- modulated light patterns can be either spatially-modulated light patterns or temporally-modulated light patterns.
- the captured reflections of the modulated light patterns are referred to herein as “depth imagery.”
- the depth sensor 120 then may calculate the depths of the objects, that is, the distances of the objects from the electronic device 100 , based on the analysis of the depth imagery.
- the resulting depth data obtained from the depth sensor 120 may be used to calibrate or otherwise augment depth information obtained from image analysis (e.g., stereoscopic analysis) of the image data captured by the imaging sensors 114 and 116 .
- the depth data from the depth sensor 120 may be used in place of depth information obtained from image analysis.
- multiview analysis typically is more suited for bright lighting conditions and when the objects are relatively distant, whereas modulated light-based depth sensing is better suited for lower light conditions or when the observed objects are relatively close (e.g., within 4-5 meters).
- in bright conditions, or when the observed objects are relatively distant, the electronic device 100 may elect to use multiview-based reconstruction to determine object depths.
- in lower light conditions, or when the observed objects are relatively close, the electronic device 100 may switch to using modulated light-based depth sensing via the depth sensor 120.
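- the selection policy sketched in the preceding lines reduces to a simple predicate. In the sketch below, the lux and range thresholds are invented for illustration; the disclosure does not specify numeric cutoffs:

```python
def choose_depth_source(ambient_lux: float, expected_range_m: float) -> str:
    # Modulated light is preferred in low light or at close range;
    # multiview stereo is preferred in bright, distant scenes.
    if ambient_lux < 100.0 or expected_range_m < 5.0:
        return "modulated_light"
    return "multiview_stereo"

print(choose_depth_source(ambient_lux=20000.0, expected_range_m=10.0))  # multiview_stereo
print(choose_depth_source(ambient_lux=50.0, expected_range_m=2.0))      # modulated_light
```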
- the electronic device 100 also may rely on non-visual pose information for pose detection.
- This non-visual pose information can be obtained by the electronic device 100 via one or more non-visual sensors (not shown in FIG. 1 ), such as an IMU including one or more gyroscopes, magnetometers, and accelerometers.
- the IMU can be employed to generate pose information along multiple axes of motion, including translational axes, expressed as X, Y, and Z axes of a frame of reference for the electronic device 100 , and rotational axes, expressed as roll, pitch, and yaw axes of the frame of reference for the electronic device 100 .
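- as one illustration of how IMU samples yield pose information along these axes, the sketch below dead-reckons a rotation, velocity, and position from gyroscope and accelerometer readings. It is a bare-bones integrator under invented sample values; bias estimation and noise handling, which any practical VIO front end needs, are omitted:

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate(R, p, v, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One integration step: rotate by the gyro increment, then integrate
    the world-frame acceleration into velocity and position."""
    a_world = R @ accel + g
    p_next = p + v * dt + 0.5 * a_world * dt * dt
    v_next = v + a_world * dt
    R_next = R @ expm_so3(gyro * dt)
    return R_next, p_next, v_next

R, p, v = np.eye(3), np.zeros(3), np.zeros(3)
gyro = np.array([0.0, 0.0, 0.1])     # rad/s: slow yaw
accel = np.array([0.2, 0.0, 9.81])   # m/s^2 in the body frame
for _ in range(100):                 # one second of samples at 100 Hz
    R, p, v = propagate(R, p, v, gyro, accel, dt=0.01)
print(p)  # net translation after dead reckoning
```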
- the non-visual sensors can also include ambient light sensors and location sensors, such as GPS sensors, or other sensors that can be used to identify a location of the electronic device 100 , such as one or more wireless radios, cellular radios, and the like.
- the electronic device 100 includes a concurrent odometry and mapping module 150 to track motion of the electronic device 100 based on the image sensor data 134 , 136 and the non-image sensor data 142 and to build a three-dimensional representation of the local environment 112 .
- the concurrent odometry and mapping module 150 periodically updates the three-dimensional representation of the local environment 112 with feature descriptors generated based on the image sensor data and the non-visual sensor data.
- the concurrent odometry and mapping module 150 uses the updated three-dimensional representation of the local environment 112 to correct drift and other pose errors in the tracked motion.
- the electronic device 100 uses the image sensor data and the non-visual sensor data to track motion (estimate a pose) of the electronic device 100 .
- the electronic device 100 determines an initial estimated pose based on geolocation data, other non-visual sensor data, visual sensor data as described further below, or a combination thereof.
- the non-visual sensors generate, at a relatively high rate, non-visual pose information reflecting the changes in the device pose.
- the visual sensors capture images that also reflect device pose changes. Based on this non-visual and visual pose information, the electronic device 100 updates the initial estimated pose to reflect a current estimated pose, or tracked motion, of the device.
- the electronic device 100 generates visual pose information based on the detection of spatial features in image data captured by one or more of the imaging sensors 114 , 116 , and 118 .
- the local environment 112 includes a hallway of an office building that includes three corners 124 , 126 , and 128 , a baseboard 130 , and an electrical outlet 132 .
- the user 110 has positioned and oriented the electronic device 100 so that the forward-facing imaging sensors 114 and 116 capture wide angle imaging sensor image data 134 and narrow angle imaging sensor image data 136 , respectively, that includes these spatial features of the hallway.
- the depth sensor 120 also captures depth data 138 that reflects the relative distances of these spatial features relative to the current pose of the electronic device 100 .
- the user-facing imaging sensor 118 captures image data representing head tracking data 140 for the current pose of the head 122 of the user 110 .
- Non-visual sensor data 142 such as readings from the IMU, also is collected by the electronic device 100 in its current pose.
- the electronic device 100 can determine an estimate of its relative pose, or tracked motion, without explicit absolute localization information from an external source.
- the electronic device 100 can perform analysis of the wide-angle imaging sensor image data 134 and the narrow-angle imaging sensor image data 136 to determine the distances between the electronic device 100 and the corners 124 , 126 , 128 .
- the depth data 138 obtained from the depth sensor 120 can be used to determine the distances of the spatial features. From these distances the electronic device 100 can triangulate or otherwise infer its relative position in the office represented by the local environment 112 .
- the electronic device 100 can identify spatial features present in one set of captured images of the image data 134 and 136 , determine the initial distances to these spatial features, and then track the changes in position and distances of these spatial features in subsequent captured imagery to determine the change in pose of the electronic device 100 in a free frame of reference.
- certain non-visual sensor data such as gyroscopic data or accelerometer data, can be used to correlate spatial features observed in one image with spatial features observed in a subsequent image.
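- a sketch of the visual half of this process, using OpenCV as a stand-in (the disclosure does not name a library): detect spatial features in two frames, match them, and recover the frame-to-frame rotation and translation direction from the essential matrix. As noted above, gyroscope data could be used to pre-align one frame's features and make the matching more robust:

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate the change in pose between two frames from tracked features.
    K is the 3x3 camera intrinsic matrix; translation is recovered only up
    to scale, which depth or inertial data would resolve."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```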
- the electronic device 100 uses the image data and the non-visual data to generate feature descriptors for the spatial features identified in the captured imagery.
- Each of the generated feature descriptors describes the orientation, gravity direction, scale, and other aspects of one or more of the identified spatial features.
- the generated feature descriptors are compared to a set of stored descriptors (referred to for purposes of description as “known feature descriptors”) of a plurality of stored maps of the local environment 112 that each identifies previously identified spatial features and their corresponding poses.
- each of the known feature descriptors is a descriptor that has previously been generated, and its pose definitively established, by either the electronic device 100 or another electronic device.
- the estimated device poses, 3D point positions, and known feature descriptors can be stored at the electronic device 100 , at a remote server (which can combine data from multiple electronic devices) or other storage device, or a combination thereof. Accordingly, the comparison of the generated feature descriptors can be performed at the electronic device 100 , at the remote server or other device, or a combination thereof.
- a generated feature descriptor is compared to a known feature descriptor by comparing each aspect of the generated feature descriptor (e.g., the orientation of the corresponding feature, the scale of the corresponding feature, and the like) to the corresponding aspect of the known feature descriptor and determining an error value indicating the variance between the compared features.
- the electronic device 100 can identify an error value for the orientation aspect of the feature descriptors by calculating the difference between the vectors A and B.
- the error values can be combined according to a specified statistical technique, such as a least squares technique, to identify a combined error value for each known feature descriptor being compared, and the matching known feature descriptor is identified as the known feature descriptor having the smallest combined error value.
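- the comparison just described might look like the following sketch, where each descriptor is a dictionary of per-aspect vectors; the aspect names and values are invented for illustration:

```python
import numpy as np

def combined_error(generated, known, aspects=("orientation", "scale")):
    """Sum of squared per-aspect differences: a least-squares combination."""
    total = 0.0
    for aspect in aspects:
        diff = np.asarray(generated[aspect]) - np.asarray(known[aspect])
        total += float(diff @ diff)
    return total

def best_match(generated, known_descriptors):
    """The matching known descriptor is the one with the smallest combined error."""
    return min(known_descriptors, key=lambda k: combined_error(generated, k))

observed = {"orientation": [0.0, 0.98, 0.20], "scale": [1.1]}
catalog = [
    {"id": 7,  "orientation": [0.0, 1.0, 0.0], "scale": [1.0]},
    {"id": 12, "orientation": [0.9, 0.1, 0.0], "scale": [2.0]},
]
print(best_match(observed, catalog)["id"])  # 7
```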
- Each of the known feature descriptors includes one or more fields identifying the point position of the corresponding spatial feature and camera poses from which the corresponding spatial feature was seen.
- a known feature descriptor can include pose information indicating the location of the spatial feature within a specified coordinate system (e.g., a geographic coordinate system representing Earth) within a specified resolution (e.g., 1 cm), the orientation of the point of view of the spatial feature, the distance of the point of view from the feature and the like.
- the observed feature descriptors are compared to the feature descriptors stored in the map to identify multiple matched known feature descriptors.
- the matched known feature descriptors are then stored together with non-visual pose data as localization data that can be used both to correct drift in the tracked motion (or estimated pose) of the electronic device 100 and to augment the plurality of stored maps of a local environment for the electronic device 100 .
- the matching process will identify multiple known feature descriptors that match corresponding observed feature descriptors, thus indicating that there are multiple features in the local environment of the electronic device 100 that have previously been identified.
- the corresponding positions of the matching known feature descriptors may vary, indicating that the electronic device 100 is not in a particular one of the poses indicated by the matching known feature descriptors.
- the electronic device 100 may refine its estimated pose by interpolating its pose between the poses indicated by the matching known feature descriptors using conventional interpolation techniques.
- the electronic device 100 may snap its estimated pose to the pose indicated by the known feature descriptors.
- the concurrent odometry and mapping module 150 generates estimated poses (i.e., tracks motion) of the electronic device 100 at a relatively high rate based on the image sensor data 134 , 136 and the non-image sensor data 142 for output to an API.
- the concurrent odometry and mapping module 150 also generates feature descriptors based on the image sensor data and the non-visual sensor data.
- the concurrent odometry and mapping module 150 stores a plurality of maps containing known feature descriptors, from which it builds a three-dimensional representation of the local environment 112 .
- the concurrent odometry and mapping module 150 uses the known feature descriptors to map the local environment.
- the concurrent odometry and mapping module 150 can use the known feature descriptors to generate a map file that indicates the position of each feature included in the known feature descriptors in a frame of reference for the electronic device 100 .
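- the disclosure does not define a map file format; purely as an illustration, such a file might serialize each known feature's position in the device frame of reference like this:

```python
import json

# Hypothetical schema: one entry per known feature descriptor, carrying
# its identifier, 3D position, and (abbreviated) descriptor payload.
known_features = [
    {"id": 1, "position": [0.31, -0.08, 2.45], "descriptor": [17, 203, 64]},
    {"id": 2, "position": [1.02, 0.44, 3.10], "descriptor": [9, 88, 141]},
]

with open("map_file.json", "w") as f:
    json.dump({"frame": "device", "features": known_features}, f, indent=2)
```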
- the concurrent odometry and mapping module 150 generates new feature descriptors based on the image sensor data and the non-visual sensor data, it periodically augments the three-dimensional representation of the local environment 112 by matching the generated feature descriptors to the known feature descriptors.
- the concurrent odometry and mapping module 150 uses the three-dimensional representation of the environment 112 to periodically correct drift in the tracked motion.
- the concurrent odometry and mapping module 150 generates a locally accurate estimated pose for output to the API at a relatively high frequency, and periodically corrects global drift in the estimated pose to generate a localized pose using the three-dimensional representation of the local environment 112 .
- the estimated and localized poses can be used to support any of a variety of location-based services.
- the estimated and localized poses can be used to generate a virtual reality environment, or portion thereof, representing the local environment of the electronic device 100 .
- FIG. 2 illustrates the components of a concurrent odometry and mapping module 250 of the electronic device 100 of FIG. 1 .
- the concurrent odometry and mapping module 250 includes a front-end motion tracking module 210 , a back-end mapping module 220 , and a localization module 230 .
- the concurrent odometry and mapping module 250 is configured to output localized and estimated poses to an API module 240 .
- the concurrent odometry and mapping module 250 is configured to track motion to estimate a current pose of the electronic device and update a map of the environment to localize the estimated current pose.
- the concurrent odometry and mapping module 250 is configured to track motion (estimate a pose) at a first, relatively higher rate, and to update a map of the environment to be used to localize the estimated pose at a second, relatively lower rate.
- the front-end motion tracking module 210 is configured to receive visual sensor data 136 from the imaging cameras 114 and 116 , depth data 138 from the depth sensor 120 , and inertial sensor data 142 from the non-image sensors (not shown) of FIG. 1 .
- the front-end motion tracking module 210 generates estimated poses 214 from the received sensor data, and generates feature descriptors 215 of spatial features of objects in the local environment 112 .
- the front-end motion tracking module 210 stores a limited history of tracked motion (e.g., a single prior session, or a single prior time period).
- the front-end motion tracking module 210 estimates a current pose of the electronic device 100 by generating linearization points based on the generated feature descriptors and solving a non-linear estimation of the spatial features based on the linearization points and previously-generated linearization points based on stored limited history of tracked motion.
- the front-end motion tracking module treats any previously-generated estimates of 3D point positions as a set of fixed values. Because the previously-generated linearization points are treated as non-variable, the computational burden of solving the non-linear estimation of the spatial features is lower than it would be if the previously-generated linearization points were treated as variable. However, any errors in the previously-generated linearization points may not be rectified by the solution of the non-linear estimation. Accordingly, the estimated current pose may differ from the actual current position and orientation of the electronic device 100 .
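- the effect of holding earlier estimates fixed can be seen in a toy nonlinear least-squares problem: with previously-estimated point positions treated as constants, only the current pose is a free variable, so each Gauss-Newton step solves a very small linear system. Everything below (a 2D world, range-only measurements, the numbers themselves) is invented for illustration:

```python
import numpy as np

# Previously-estimated point positions, treated as FIXED constants by
# the front end (here, three 2D landmarks for simplicity).
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
ranges = np.array([2.2361, 2.2361, 2.8284])  # measured distances to each

pose = np.array([1.0, 0.0])  # initial guess for the device position
for _ in range(10):          # Gauss-Newton over the pose alone
    diffs = pose - landmarks               # shape (3, 2)
    dists = np.linalg.norm(diffs, axis=1)
    residuals = dists - ranges
    J = diffs / dists[:, None]             # d(residual) / d(pose)
    pose = pose - np.linalg.solve(J.T @ J, J.T @ residuals)

print(pose)  # converges near the true position [2, 1]
```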
- the front-end motion tracking module 210 updates the estimated pose 214 at a relatively high rate, based on a continuous or high-frequency receipt of sensor data. Based on the sensor data, the front-end motion tracking module 210 is configured to generate an estimated pose 214 that is locally accurate, but subject to global drift.
- the front-end motion tracking module 210 provides the estimated pose 214 to an API module 240 , which is configured to use the estimated pose 214 to generate a virtual reality environment, or portion thereof, representing the local environment of the electronic device 100 .
- the front-end motion tracking module 210 provides the generated feature descriptors 215 to the mapping module 220 .
- the front-end motion tracking module 210 periodically queries the localization module 230 to check for a localized pose 235 .
- the localization module 230 When the localization module 230 has generated a localized pose 235 , the localization module 230 provides the localized pose 235 to the motion tracking module 210 , which provides the localized pose 235 to the API 240 . In some embodiments, the localization module 230 updates the localized pose 235 at a relatively low rate, due to the computational demands of generating the localized pose 235 .
- the mapping module 220 is configured to store a plurality of maps (not shown) including known feature descriptors and to receive generated feature descriptors 215 from the motion tracking module 210 .
- the stored plurality of maps form a compressed history of the environment and tracked motion of the electronic device 100 .
- the mapping module 220 is configured to augment the stored plurality of maps with newly generated tracked motion.
- the mapping module 220 receives generated feature descriptors from the motion tracking module 210 periodically, for example, every five seconds.
- the mapping module 220 receives generated feature descriptors 215 from the front-end motion tracking module 210 after a threshold amount of sensor data has been received by the front-end motion tracking module 210 .
- the mapping module 220 builds a three-dimensional representation of the local environment 112 of the electronic device 100 based on the known feature descriptors of the stored plurality of maps and the generated feature descriptors received from the motion tracking module 210 .
- the mapping module 220 matches the one or more spatial features to spatial features of the plurality of stored maps to generate the three-dimensional representation 225 of the environment of the electronic device 100 .
- the mapping module 220 searches each generated feature descriptor 215 to determine any matching known feature descriptors of the stored plurality of maps.
- the mapping module 220 incorporates the generated feature descriptors received from the motion tracking module 210 by generating estimates of 3D point positions based on the generated feature descriptors and solving a non-linear estimation of the three-dimensional representation based on the device poses, the 3D point positions based on the stored feature descriptors, and the data from the inertial sensors. In some embodiments, the previously-generated linearization points are considered variable for purposes of solving the non-linear estimation of the three-dimensional representation.
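- continuing the toy problem from the front-end sketch above, the back-end version frees the point positions as well, stacking them into the parameter vector next to the pose. A prior term pulling landmarks toward their stored map values stands in for the richer constraints a real mapper has; its weight is an invented value:

```python
import numpy as np

stored = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # stored map points
ranges = np.array([2.2361, 2.2361, 2.8284])
w = 1.0  # prior weight: confidence in the stored map (illustrative)

theta = np.concatenate([[1.0, 0.0], stored.ravel()])  # pose AND landmarks vary
for _ in range(20):
    pose, lms = theta[:2], theta[2:].reshape(3, 2)
    diffs = pose - lms
    dists = np.linalg.norm(diffs, axis=1)
    residuals = np.concatenate([dists - ranges, w * (lms - stored).ravel()])

    J = np.zeros((9, 8))                       # 3 range + 6 prior residuals
    J[:3, :2] = diffs / dists[:, None]         # d(range) / d(pose)
    for i in range(3):                         # d(range) / d(landmark_i)
        J[i, 2 + 2 * i: 4 + 2 * i] = -diffs[i] / dists[i]
    J[3:, 2:] = w * np.eye(6)                  # d(prior) / d(landmarks)

    theta = theta - np.linalg.solve(J.T @ J, J.T @ residuals)

print(theta[:2])  # refined pose; landmark estimates were free to move too
```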
- the mapping module 220 provides the three-dimensional representation 225 of the local environment 112 to the localization module 230 .
- the localization module 230 is configured to use the matched descriptors to align the estimated pose 214 with the stored plurality of maps, such as by applying a loop-closure algorithm.
- the localization module 230 can use matched feature descriptors to estimate a transformation for one or more of the stored plurality of maps, whereby the localization module 230 transforms geometric data associated with the generated feature descriptors of the estimated pose 214 having matching descriptors to be aligned with geometric data associated with a stored map having a corresponding matching descriptor.
- When the localization module 230 finds a sufficient number of matching feature descriptors from the generated feature descriptors 215 and a stored map to confirm that the generated feature descriptors 215 and the stored map contain descriptions of common visual landmarks, the localization module 230 performs a transformation between the generated feature descriptors 215 and the matching known feature descriptors, aligning the geometric data of the matching feature descriptors. Thereafter, the localization module 230 can apply a co-optimization algorithm to refine the alignment of the pose and scene of the estimated pose 214 of the electronic device 100 to generate a localized pose 235 .
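- one standard way to estimate such an aligning transformation, assuming each matched descriptor pair contributes a pair of corresponding 3D points, is the Kabsch (orthogonal Procrustes) solution sketched below; the synthetic point sets are generated only to exercise the function:

```python
import numpy as np

def align_rigid(src, dst):
    """Best-fit rotation R and translation t with R @ src_i + t ≈ dst_i,
    via SVD of the cross-covariance of the centered point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - src_mean).T @ (dst - dst_mean))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Landmarks from a stored map vs. the same landmarks seen in the live session:
rng = np.random.default_rng(0)
map_pts = rng.normal(size=(10, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
live_pts = map_pts @ R_true.T + np.array([0.5, -0.2, 0.1])

R, t = align_rigid(map_pts, live_pts)  # recovers the rotation and offset
```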
- FIG. 3 illustrates the components of a front-end motion tracking module 310 of FIGS. 1 and 2 .
- the motion tracking module 310 includes a feature identification module 312 and an environment mapper 320 .
- Each of these modules represents hardware, software, or a combination thereof, configured to execute the operations as described herein.
- the feature identification module 312 is configured to receive imagery 305 , representing images captured by the imaging sensors 114 , 116 , 118 , and the non-visual sensor data 142 . Based on this received data, the feature identification module 312 identifies features in the imagery 305 by generating feature descriptors 315 and comparing the feature descriptors to known feature descriptors from the stored limited history of tracked motion as described above with respect to FIG. 2 .
- the feature identification module 312 provides the generated feature descriptors 315 to the mapping module 220 .
- the feature identification module 312 additionally stores the feature descriptors 315 , together with any associated non-visual data, as localization data 317 .
- the localization data 317 can be used by the electronic device 100 to estimate one or more poses of the electronic device 100 as it is moved through different locations and orientations in its local environment. These estimated poses can be used in conjunction with previously generated and stored map information for the local environment to support or enhance location based services of the electronic device 100 .
- the environment mapper 320 is configured to generate or modify a locally accurate estimated pose 214 of the electronic device 100 based on the localization data 317 .
- the environment mapper 320 analyzes the feature descriptors in the localization data 317 to identify the location of the features in a frame of reference for the electronic device 100 .
- each feature descriptor can include location data indicating a relative position of the corresponding feature from the electronic device 100 .
- the environment mapper 320 generates linearization points based on the localization data 317 and solves a non-linear estimation, such as least squares, of the environment based on the linearization points and previously-generated linearization points based on the stored feature descriptors from the stored limited history of tracked motion.
- the environment mapper 320 estimates the evolution of the device pose over time as well as the positions of 3D points in the environment 112 . To find the values that best fit the sensor data, the environment mapper 320 solves a non-linear optimization problem. In some embodiments, the environment mapper 320 solves the non-linear optimization problem by linearizing the problem and applying standard techniques for solving linear systems of equations. In some embodiments, the environment mapper 320 treats the previously-generated linearization points as fixed for purposes of solving the non-linear estimation of the environment. The environment mapper 320 can reconcile the relative positions of the different features to identify the location of each feature in the frame of reference, and store these locations in a locally accurate estimated pose 214 . The front-end motion tracking module 310 provides and updates the estimated pose 214 at a relatively high rate to an API module 240 of the electronic device 100 to, for example, generate a virtual reality display of the local environment.
- the environment mapper 320 is also configured to periodically query the localization module 230 for an updated localized pose 235 .
- the localization module 230 updates the localized pose 235 at a relatively low rate.
- the localization module 230 provides the updated localized pose 235 to the environment mapper 320 .
- the environment mapper 320 provides the updated localized pose 235 to the API module 240 .
- FIG. 4 is a diagram illustrating a back-end mapping module 420 of the concurrent odometry and mapping module 250 of FIG. 2 configured to generate and add to a three-dimensional representation of the environment of the electronic device 100 based on generated feature descriptors 315 and a plurality of stored maps 417 in accordance with at least one embodiment of the present disclosure.
- the back-end mapping module 420 includes a storage module 415 and a feature descriptor matching module 425 .
- the storage module 415 is configured to store a plurality of maps 417 of the environment of the electronic device 100 .
- the plurality of maps 417 may include maps that were previously generated by the electronic device 100 during prior mapping sessions.
- the plurality of maps 417 may also include VR or AR maps that contain features not found in the physical environment of the electronic device 100 .
- the plurality of maps 417 include stored (known) feature descriptors 422 of spatial features of objects in the environment that can collectively be used to generate a three-dimensional representation 225 of the environment.
- the feature descriptor matching module 425 is configured to receive generated feature descriptors 315 from the motion tracking module 210 .
- the feature descriptor matching module 425 compares the generated feature descriptors 315 to the known feature descriptors 422 .
- the feature descriptor matching module 425 builds a three-dimensional representation 225 of the local environment 112 of the electronic device 100 based on the known feature descriptors 422 of the stored plurality of maps 417 and the generated feature descriptors 315 received from the front-end motion tracking module 210 .
- the feature descriptor matching module 425 incorporates the generated feature descriptors 315 received from the motion tracking module 210 by generating linearization points based on the generated feature descriptors and solving a non-linear estimation of the three-dimensional representation based on the linearization points and previously-generated linearization points based on the known feature descriptors 422 .
- the previously-generated linearization points are considered variable for purposes of solving the non-linear estimation of the three-dimensional representation.
- the feature descriptor matching module 425 provides the three-dimensional representation 225 of the environment to the localization module 230 and, in some embodiments, updates the three-dimensional representation 225 at a relatively low rate.
- the mapping module 420 receives generated feature descriptors 315 from the motion tracking module 210 periodically. In some embodiments, the mapping module 420 receives generated feature descriptors 315 from the front-end motion tracking module 210 at regular intervals of time (e.g., every five seconds). In some embodiments, the mapping module 420 receives generated feature descriptors 315 from the front-end motion tracking module 210 at the conclusion of a mapping session or after a threshold amount of sensor data has been received by the front-end motion tracking module 210 .
- FIG. 5 is a diagram illustrating a localization module 530 of the concurrent odometry and mapping module 250 of FIG. 2 configured to generate a localized pose 235 of the electronic device 100 in accordance with at least one embodiment of the present disclosure.
- the localization module 530 includes a feature descriptor discrepancy detector 515 and a loop closure module 525 .
- the feature descriptor discrepancy detector 515 is configured to receive a three-dimensional representation 225 of the environment from the back-end mapping module 220 of the concurrent odometry and mapping module 250 .
- the feature descriptor discrepancy detector 515 analyzes the matched feature descriptors of the three-dimensional representation 225 and identifies discrepancies between matched feature descriptors.
- the feature descriptor discrepancy detector 515 transforms geometric data associated with the generated feature descriptors of the estimated pose 214 having matching descriptors to be aligned with geometric data associated with a stored map having a corresponding matching descriptor.
- When the localization module 230 finds a sufficient number of matching feature descriptors from the generated feature descriptors 215 and a stored map to confirm that the generated feature descriptors 215 and the stored map contain descriptions of common visual landmarks, the localization module 230 computes a transformation between the generated feature descriptors 215 and the matching known feature descriptors, aligning the geometric data of the matching feature descriptors.
- the loop closure module 525 is configured to find a matching pose of the device given the 3D position points in the environment and their observations in the current image by solving a co-optimization algorithm to refine the alignment of the matching feature descriptors.
- the co-optimization problem may be solved by a Gauss-Newton or Levenberg-Marquardt algorithm, or another known algorithm for optimizing transformations to generate a localized pose 235 of the electronic device 100 .
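- the practical difference between the two solvers is the damping term added to the Gauss-Newton normal equations. A generic sketch of one Levenberg-Marquardt step, applicable to any residual/Jacobian pair (the toy problem at the bottom is invented):

```python
import numpy as np

def lm_step(residual_fn, jacobian_fn, x, lam):
    """One Levenberg-Marquardt iteration: solve the damped normal equations
    (J^T J + lam * I) dx = -J^T r, and adapt the damping according to
    whether the step actually reduced the squared-residual cost."""
    r = residual_fn(x)
    J = jacobian_fn(x)
    dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
    if np.sum(residual_fn(x + dx) ** 2) < np.sum(r ** 2):
        return x + dx, lam * 0.5   # good step: behave more like Gauss-Newton
    return x, lam * 2.0            # bad step: damp harder and retry

# Toy use: drive r(x) = [x0^2 - 2, x1 - 1] to zero.
res = lambda x: np.array([x[0] ** 2 - 2.0, x[1] - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 1.0]])
x, lam = np.array([1.0, 0.0]), 1e-3
for _ in range(25):
    x, lam = lm_step(res, jac, x, lam)
print(x)  # approaches [sqrt(2), 1]
```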
- the loop closure module 525 treats known feature descriptors as variable.
- the loop closure module 525 thus generates a localized pose 235 that corrects drift in the estimated pose 214 , and sends the localized pose 235 to the front-end motion tracking module 210 .
- the localized pose 235 can be fed to an application executing at the electronic device 100 to enable augmented reality or other location-based functionality by allowing the electronic device 100 to more efficiently and accurately recognize a local environment 112 that it has previously traversed.
- FIG. 6 is a flow diagram illustrating method 600 of an electronic device to track motion and update a three-dimensional representation of the environment in accordance with at least one embodiment of the present disclosure.
- the method 600 initiates at block 602 where the electronic device 100 captures imagery and non-visual data as it is moved by a user through different poses in a local environment.
- the front-end motion tracking module 210 identifies features of the local environment based on the imagery 305 and non-image sensor data 142 , generates feature descriptors 215 for the identified features for use by the back-end mapping module 220 , and stores the feature descriptors as localization data 317 .
- the motion tracking module 210 uses the localization data 317 to estimate a current pose 214 of the electronic device 100 in the local environment 112 .
- the estimated pose 214 can be used to support location-based functionality for the electronic device 100 .
- the estimated pose 214 can be used to orient a user of the electronic device 100 in a virtual reality or augmented reality application executed at the electronic device 100 .
- the back-end mapping module 220 compares the generated feature descriptors 215 to known feature descriptors of a plurality of stored maps.
- the back-end mapping module 220 builds and/or updates a three-dimensional representation 225 of the environment of the electronic device which it provides to the localization module 230 .
- the localization module 230 identifies discrepancies between matching feature descriptors and performs a loop closure to align the estimated pose 214 with the three-dimensional representation 225 .
- the localization module localizes the current pose of the electronic device, and the concurrent odometry and mapping module 250 provides the localized pose to an API module 240 .
- certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software.
- the software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium.
- the software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above.
- the non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like.
- the executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
- a computer readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system.
- Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media.
- the computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).
Abstract
Description
- The present application is related to and claims priority to the following co-pending application, the entirety of which is incorporated by reference herein: U.S. Provisional Patent Application Ser. No. 62/337,987 (Attorney Docket No. 1500-G16013), entitled “Methods and Systems for VR/AR Functionality in a Portable Device.”
- The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.
-
FIG. 1 is a diagram illustrating an electronic device configured to estimate a pose of an electronic device in a local environment using pose information generated based on non-image sensor data and image sensor data while building a map of the local environment in accordance with at least one embodiment of the present disclosure. -
FIG. 2 is a diagram illustrating a concurrent odometry and mapping module of the electronic device ofFIG. 1 configured to estimate a current pose of the electronic device and update a map of the environment to localize the estimated current pose in accordance with at least one embodiment of the present disclosure. -
FIG. 3 is a diagram illustrating a motion tracking module of the concurrent odometry and mapping module ofFIG. 2 configured to generate feature descriptors of spatial features of objects in the environment for updating a map of the environment and estimating a pose of the electronic device in accordance with at least one embodiment of the present disclosure. -
FIG. 4 is a diagram illustrating a mapping module of the concurrent odometry and mapping module ofFIG. 2 configured to generate and add to a three-dimensional representation of the environment of the electronic device based on generated feature descriptors and a plurality of stored maps in accordance with at least one embodiment of the present disclosure. -
FIG. 5 is a diagram illustrating a localization module of the concurrent odometry and mapping module of FIG. 2 configured to generate a localized pose of the electronic device in accordance with at least one embodiment of the present disclosure. -
FIG. 6 is a flow diagram illustrating an operation of an electronic device to track motion and update a three-dimensional representation of the environment in accordance with at least one embodiment of the present disclosure. - The following description is intended to convey a thorough understanding of the present disclosure by providing a number of specific embodiments and details involving the determination of a relative position or relative orientation of an electronic device based on image-based identification of objects in a local environment of the electronic device. It is understood, however, that the present disclosure is not limited to these specific embodiments and details, which are examples only, and the scope of the disclosure is accordingly intended to be limited only by the following claims and equivalents thereof. It is further understood that one possessing ordinary skill in the art, in light of known systems and methods, would appreciate the use of the disclosure for its intended purposes and benefits in any number of alternative embodiments, depending upon specific design and other needs.
-
FIGS. 1-6 illustrate various techniques for tracking motion of an electronic device in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A front-end motion tracking module receives sensor data from visual, inertial, and depth sensors and tracks motion (i.e., estimates poses over time) of the electronic device that can be used by an application programming interface (API). The front-end motion tracking module estimates poses over time based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment and estimates the three-dimensional positions (referred to as 3D point positions) of the spatial features. The front-end motion tracking module also provides the captured feature descriptors and estimated device pose to a back-end mapping module. The back-end mapping module is configured to store a plurality of maps based on stored feature descriptors, and to periodically receive additional feature descriptors and estimated device poses from the front-end motion tracking module as they are generated while the electronic device moves through the environment. The back-end mapping module builds a three-dimensional visual representation (map) of the environment based on the stored plurality of maps and the received feature descriptors. The back-end mapping module provides the three-dimensional visual representation of the environment to a localization module, which compares generated feature descriptors to stored feature descriptors from the stored plurality of maps, and identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose. The localized pose corrects drift in the estimated pose generated by the front-end motion tracking module, and is periodically sent to the front-end motion tracking module for output to the API. - By separately tracking motion based on visual and inertial sensor data and building a three-dimensional representation of the environment based on a plurality of stored maps as well as periodically updated generated feature descriptors, and correcting drift in the tracked motion by performing a loop closure between the generated feature descriptors and the three-dimensional representation, the electronic device can perform highly accurate motion tracking and map-building of an environment even with constrained resources, allowing the electronic device to record a representation of the environment and therefore recognize re-visits to the same environment over multiple sessions. To illustrate, in at least one embodiment the front-end motion tracking module maintains only a limited history of tracked motion (e.g., tracked motion data for only a single prior session, or a single prior time period) and treats any previously-generated feature point position estimates as fixed, thus limiting the computational burden of calculating an estimated current pose and 3D point positions and thus enabling the front-end tracking module to update the estimated current pose at a relatively high rate. 
The back-end mapping module maintains a more extensive history of the 3D point positions in the environment and poses of the electronic device, thus enabling the back-end mapping module to build a more accurate three-dimensional representation of the environment based on the stored maps and the observed feature descriptors received from the front-end motion tracking module. Because the back-end mapping module carries a heavier computational burden to build the three-dimensional representation of the environment based on a plurality of stored maps and to update the three-dimensional representation based on periodic inputs of additional generated feature descriptors from the front-end motion tracking module, the back-end mapping module updates the three-dimensional representation of the environment at a relatively slow rate. In addition, the localization module optimizes the three-dimensional representation and estimated current pose by solving a co-optimization algorithm that treats previously-generated 3D point positions as variable. The localization module thus corrects drift in the estimated current pose to generate a localized pose, and sends the localized pose to the front-end motion tracking module for output to the API.
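- To make the two-rate division of labor concrete, the following is a minimal illustrative sketch (not taken from the disclosed embodiments) of a high-rate front end that propagates an estimated pose and queues feature descriptors, alongside a low-rate back end that consumes them and feeds drift-corrected poses back. The one-dimensional "pose", the placeholder correction, and all rates are illustrative assumptions.

```python
import queue
import threading
import time

def front_end(sensor_stream, to_mapper, corrections, api_poses):
    """High-rate front end: propagates the estimated pose from each sensor
    sample, forwards descriptors to the back end, and folds in any
    localized pose the back end has produced."""
    pose = 0.0                                # toy one-dimensional "pose"
    for sample in sensor_stream:
        pose += sample                        # stand-in for visual-inertial update
        to_mapper.put(("descriptors", pose))  # hand generated data to the back end
        try:
            pose = corrections.get_nowait()   # apply a localized pose, if available
        except queue.Empty:
            pass
        api_poses.append(pose)                # estimated pose exposed to the API

def back_end(to_mapper, corrections):
    """Low-rate back end: consumes descriptors, performs the heavier map and
    loop-closure work (a placeholder here), and emits corrected poses."""
    while True:
        _, estimated = to_mapper.get()
        time.sleep(0.05)                      # placeholder for mapping work
        corrections.put(estimated * 0.99)     # placeholder drift correction

to_mapper, corrections, poses = queue.Queue(), queue.Queue(), []
threading.Thread(target=back_end, args=(to_mapper, corrections), daemon=True).start()
front_end([0.1] * 20, to_mapper, corrections, poses)
print("final estimated pose:", poses[-1])
```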
-
FIG. 1 illustrates an electronic device 100 configured to support location-based functionality, such as SLAM, VR, or AR, using image and non-visual sensor data in accordance with at least one embodiment of the present disclosure. The electronic device 100 can include a user-portable mobile device, such as a tablet computer, computing-enabled cellular phone (e.g., a “smartphone”), a head-mounted display (HMD), a notebook computer, a personal digital assistant (PDA), a gaming system remote, a television remote, and the like. In other embodiments, the electronic device 100 can include another type of mobile device, such as an automobile, robot, remote-controlled drone or other airborne device, and the like. For ease of illustration, the electronic device 100 is generally described herein in the example context of a mobile device, such as a tablet computer or a smartphone; however, the electronic device 100 is not limited to these example implementations.
- In the depicted example, the electronic device 100 includes a housing 102 having a surface 104 opposite another surface 106. In the example thin rectangular block form-factor depicted, the surfaces 104 and 106 are substantially parallel, and the housing 102 further includes four side surfaces (top, bottom, left, and right) between the surface 104 and surface 106. The housing 102 may be implemented in many other form factors, and the surfaces 104 and 106 may have a non-parallel orientation. In the illustrated implementation, the electronic device 100 includes a display 108 disposed at the surface 106 for presenting visual information to a user 110. Accordingly, for ease of reference, the surface 106 is referred to herein as the “forward-facing” surface and the surface 104 is referred to herein as the “user-facing” surface as a reflection of this example orientation of the electronic device 100 relative to the user 110, although the orientation of these surfaces is not limited by these relational designations.
- The electronic device 100 includes a plurality of sensors to obtain information regarding a local environment 112 of the electronic device 100. The electronic device 100 obtains visual information (imagery) for the local environment 112 via imaging sensors 114 and 116 and a depth sensor 120 disposed at the forward-facing surface 106 and an imaging sensor 118 disposed at the user-facing surface 104. In one embodiment, the imaging sensor 114 is implemented as a wide-angle imaging sensor having a fish-eye lens or other wide-angle lens to provide a wider-angle view of the local environment 112 facing the surface 106. The imaging sensor 116 is implemented as a narrow-angle imaging sensor having a typical angle of view lens to provide a narrower angle view of the local environment 112 facing the surface 106. Accordingly, the imaging sensor 114 and the imaging sensor 116 are also referred to herein as the “wide-angle imaging sensor 114” and the “narrow-angle imaging sensor 116,” respectively. As described in greater detail below, the wide-angle imaging sensor 114 and the narrow-angle imaging sensor 116 can be positioned and oriented on the forward-facing surface 106 such that their fields of view overlap starting at a specified distance from the electronic device 100, thereby enabling depth sensing of objects in the local environment 112 that are positioned in the region of overlapping fields of view via image analysis. The imaging sensor 118 can be used to capture image data for the local environment 112 facing the surface 104. Further, in some embodiments, the imaging sensor 118 is configured for tracking the movements of the head 122 or for facial recognition, and thus providing head tracking information that may be used to adjust a view perspective of imagery presented via the display 108.
- The depth sensor 120, in one embodiment, uses a modulated light projector 119 to project modulated light patterns from the forward-facing surface 106 into the local environment, and uses one or both of the imaging sensors 114 and 116 to capture reflections of the modulated light patterns as they reflect back from objects in the local environment 112. These modulated light patterns can be either spatially-modulated light patterns or temporally-modulated light patterns. The captured reflections of the modulated light patterns are referred to herein as “depth imagery.” The depth sensor 120 then may calculate the depths of the objects, that is, the distances of the objects from the electronic device 100, based on the analysis of the depth imagery. The resulting depth data obtained from the depth sensor 120 may be used to calibrate or otherwise augment depth information obtained from image analysis (e.g., stereoscopic analysis) of the image data captured by the imaging sensors 114 and 116. Alternatively, the depth data from the depth sensor 120 may be used in place of depth information obtained from image analysis. To illustrate, multiview analysis typically is more suited for bright lighting conditions and when the objects are relatively distant, whereas modulated light-based depth sensing is better suited for lower light conditions or when the observed objects are relatively close (e.g., within 4-5 meters). Thus, when the electronic device 100 senses that it is outdoors or otherwise in relatively good lighting conditions, the electronic device 100 may elect to use multiview-based reconstruction to determine object depths. Conversely, when the electronic device 100 senses that it is indoors or otherwise in relatively poor lighting conditions, the electronic device 100 may switch to using modulated light-based depth sensing via the depth sensor 120.
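- The lighting-based choice between multiview reconstruction and modulated light-based depth sensing described above can be sketched as a simple selection rule; the lux threshold below is an illustrative assumption, while the 4-5 meter range comes from the description:

```python
def choose_depth_source(ambient_lux, median_scene_depth_m):
    """Pick a depth source per the trade-off above: multiview stereo for
    bright scenes with distant objects, modulated light otherwise."""
    BRIGHT_LUX = 1000.0      # assumed indoor/outdoor lighting boundary
    CLOSE_RANGE_M = 4.5      # "within 4-5 meters" per the description
    if ambient_lux >= BRIGHT_LUX and median_scene_depth_m > CLOSE_RANGE_M:
        return "multiview_stereo"
    return "modulated_light"

# A dim indoor hallway with nearby walls selects the modulated-light sensor.
assert choose_depth_source(ambient_lux=120.0, median_scene_depth_m=2.5) == "modulated_light"
```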
- The electronic device 100 also may rely on non-visual pose information for pose detection. This non-visual pose information can be obtained by the electronic device 100 via one or more non-visual sensors (not shown in FIG. 1 ), such as an IMU including one or more gyroscopes, magnetometers, and accelerometers. In at least one embodiment, the IMU can be employed to generate pose information along multiple axes of motion, including translational axes, expressed as X, Y, and Z axes of a frame of reference for the electronic device 100, and rotational axes, expressed as roll, pitch, and yaw axes of the frame of reference for the electronic device 100. The non-visual sensors can also include ambient light sensors and location sensors, such as GPS sensors, or other sensors that can be used to identify a location of the electronic device 100, such as one or more wireless radios, cellular radios, and the like.
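- As a minimal sketch of how such IMU readings can propagate a pose along these translational and rotational axes between visual updates (standard strapdown integration, not a quotation of the disclosed method):

```python
import numpy as np

def propagate_imu(pose, velocity, gyro_rps, accel_mps2, dt):
    """One strapdown integration step: propagate a 6-DoF pose (R, p) from a
    gyroscope sample (rad/s) and an accelerometer sample (m/s^2). Inputs
    are numpy arrays; the world frame is z-up with gravity along -z."""
    R, p = pose
    # First-order rotation update from the gyroscope (small-angle model).
    wx, wy, wz = gyro_rps * dt
    skew = np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])
    R = R @ (np.eye(3) + skew)
    # The accelerometer measures specific force; add gravity back in the
    # world frame before integrating velocity and position.
    g = np.array([0.0, 0.0, -9.81])
    a_world = R @ accel_mps2 + g
    p = p + velocity * dt + 0.5 * a_world * dt ** 2
    velocity = velocity + a_world * dt
    return (R, p), velocity

# A stationary device (accelerometer reading +9.81 m/s^2 along body z)
# stays put: the gravity terms cancel.
pose, vel = (np.eye(3), np.zeros(3)), np.zeros(3)
pose, vel = propagate_imu(pose, vel, np.zeros(3), np.array([0.0, 0.0, 9.81]), 0.01)
print(pose[1], vel)   # -> [0. 0. 0.] [0. 0. 0.]
```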
- To facilitate drift-free, highly accurate motion tracking that can run on a resource-constrained mobile device, the electronic device 100 includes a concurrent odometry and mapping module 150 to track motion of the electronic device 100 based on the image sensor data 134, 136 and the non-image sensor data 142 and to build a three-dimensional representation of the local environment 112. The concurrent odometry and mapping module 150 periodically updates the three-dimensional representation of the local environment 112 with feature descriptors generated based on the image sensor data and the non-visual sensor data. The concurrent odometry and mapping module 150 uses the updated three-dimensional representation of the local environment 112 to correct drift and other pose errors in the tracked motion.
- In operation, the electronic device 100 uses the image sensor data and the non-visual sensor data to track motion (estimate a pose) of the electronic device 100. In at least one embodiment, after a reset the electronic device 100 determines an initial estimated pose based on geolocation data, other non-visual sensor data, visual sensor data as described further below, or a combination thereof. As the pose of the electronic device 100 changes, the non-visual sensors generate, at a relatively high rate, non-visual pose information reflecting the changes in the device pose. Concurrently, the visual sensors capture images that also reflect device pose changes. Based on this non-visual and visual pose information, the electronic device 100 updates the initial estimated pose to reflect a current estimated pose, or tracked motion, of the device.
- The electronic device 100 generates visual pose information based on the detection of spatial features in image data captured by one or more of the imaging sensors 114, 116, and 118. To illustrate, in the depicted example of FIG. 1 the local environment 112 includes a hallway of an office building that includes three corners 124, 126, and 128, a baseboard 130, and an electrical outlet 132. The user 110 has positioned and oriented the electronic device 100 so that the forward-facing imaging sensors 114 and 116 capture wide angle imaging sensor image data 134 and narrow angle imaging sensor image data 136, respectively, that includes these spatial features of the hallway. In this example, the depth sensor 120 also captures depth data 138 that reflects the relative distances of these spatial features relative to the current pose of the electronic device 100. Further, the user-facing imaging sensor 118 captures image data representing head tracking data 140 for the current pose of the head 122 of the user 110. Non-visual sensor data 142, such as readings from the IMU, also is collected by the electronic device 100 in its current pose.
- From this input data, the electronic device 100 can determine an estimate of its relative pose, or tracked motion, without explicit absolute localization information from an external source. To illustrate, the electronic device 100 can perform analysis of the wide-angle imaging sensor image data 134 and the narrow-angle imaging sensor image data 136 to determine the distances between the electronic device 100 and the corners 124, 126, and 128. Alternatively, the depth data 138 obtained from the depth sensor 120 can be used to determine the distances of the spatial features. From these distances the electronic device 100 can triangulate or otherwise infer its relative position in the office represented by the local environment 112. As another example, the electronic device 100 can identify spatial features present in one set of captured images of the image data 134 and 136, determine the initial distances to these spatial features, and then track the changes in position and distances of these spatial features in subsequent captured imagery to determine the change in pose of the electronic device 100 in a free frame of reference. In this approach, certain non-visual sensor data, such as gyroscopic data or accelerometer data, can be used to correlate spatial features observed in one image with spatial features observed in a subsequent image.
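- That gyro-aided correlation of spatial features across successive images might look like the following sketch, in which features are represented as unit bearing vectors and the frame-to-frame rotation is integrated from gyroscope data; the angular gate is an assumed value:

```python
import numpy as np

def correlate_features(prev_bearings, curr_bearings, R_gyro, max_angle_rad=0.05):
    """Match features across frames with gyroscope help: rotate the previous
    frame's unit bearing vectors by the gyro-integrated rotation R_gyro,
    then accept the angularly nearest current bearing within the gate."""
    predicted = prev_bearings @ R_gyro.T       # rotate old bearings forward
    matches = []
    for i, b in enumerate(predicted):
        angles = np.arccos(np.clip(curr_bearings @ b, -1.0, 1.0))
        j = int(np.argmin(angles))
        if angles[j] < max_angle_rad:          # accept only close predictions
            matches.append((i, j))
    return matches

# With no rotation between frames, each bearing matches itself.
b = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(correlate_features(b, b, np.eye(3)))     # -> [(0, 0), (1, 1)]
```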
- In at least one embodiment, the electronic device 100 uses the image data and the non-visual data to generate feature descriptors for the spatial features identified in the captured imagery. Each of the generated feature descriptors describes the orientation, gravity direction, scale, and other aspects of one or more of the identified spatial features. The generated feature descriptors are compared to a set of stored descriptors (referred to for purposes of description as “known feature descriptors”) of a plurality of stored maps of the local environment 112 that each identifies previously identified spatial features and their corresponding poses. In at least one embodiment, each of the known feature descriptors is a descriptor that has previously been generated, and its pose definitively established, by either the electronic device 100 or another electronic device. The estimated device poses, 3D point positions, and known feature descriptors can be stored at the electronic device 100, at a remote server (which can combine data from multiple electronic devices) or other storage device, or a combination thereof. Accordingly, the comparison of the generated feature descriptors can be performed at the electronic device 100, at the remote server or other device, or a combination thereof.
- In at least one embodiment, a generated feature descriptor is compared to a known feature descriptor by comparing each aspect of the generated feature descriptor (e.g., the orientation of the corresponding feature, the scale of the corresponding feature, and the like) to the corresponding aspect of the known feature descriptor and determining an error value indicating the variance between the compared features.
Thus, for example, if the orientation of a feature in the generated feature descriptor is identified by a vector A, and the orientation of the feature in the known feature descriptor is identified by a vector B, the electronic device 100 can identify an error value for the orientation aspect of the feature descriptors by calculating the difference between the vectors A and B. The error values can be combined according to a specified statistical technique, such as a least squares technique, to identify a combined error value for each known feature descriptor being compared, and the matching known feature descriptor is identified as the known feature descriptor having the smallest combined error value.
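- A minimal sketch of this aspect-wise comparison, assuming descriptors represented as dictionaries of named aspect vectors (the field names are illustrative, not the disclosed format):

```python
import numpy as np

def match_descriptor(generated, known_set):
    """Compare one generated feature descriptor against known descriptors:
    per-aspect error values are combined by a least-squares (sum of squared
    differences) rule, and the known descriptor with the smallest combined
    error value is the match."""
    best_index, best_error = None, float("inf")
    for idx, known in enumerate(known_set):
        combined = 0.0
        for aspect, value in generated.items():
            diff = np.asarray(value, dtype=float) - np.asarray(known[aspect], dtype=float)
            combined += float(np.dot(diff, diff))
        if combined < best_error:
            best_index, best_error = idx, combined
    return best_index, best_error

# The second known descriptor is closest in both orientation and scale.
gen = {"orientation": [0.0, 1.0, 0.0], "scale": [1.1]}
known = [{"orientation": [1.0, 0.0, 0.0], "scale": [2.0]},
         {"orientation": [0.1, 0.9, 0.0], "scale": [1.0]}]
print(match_descriptor(gen, known))   # -> (1, ~0.03)
```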
- Each of the known feature descriptors includes one or more fields identifying the point position of the corresponding spatial feature and camera poses from which the corresponding spatial feature was seen. Thus, a known feature descriptor can include pose information indicating the location of the spatial feature within a specified coordinate system (e.g., a geographic coordinate system representing Earth) within a specified resolution (e.g., 1 cm), the orientation of the point of view of the spatial feature, the distance of the point of view from the feature, and the like. The observed feature descriptors are compared to the feature descriptors stored in the map to identify multiple matched known feature descriptors. The matched known feature descriptors are then stored together with non-visual pose data as localization data that can be used both to correct drift in the tracked motion (or estimated pose) of the electronic device 100 and to augment the plurality of stored maps of a local environment for the electronic device 100.
- In some scenarios, the matching process will identify multiple known feature descriptors that match corresponding observed feature descriptors, thus indicating that there are multiple features in the local environment of the electronic device 100 that have previously been identified.
The corresponding positions of the matching known feature descriptors may vary, indicating that the electronic device 100 is not in a particular one of the poses indicated by the matching known feature descriptors. Accordingly, the electronic device 100 may refine its estimated pose by interpolating between the poses indicated by the matching known feature descriptors and the online pose estimate, using conventional interpolation techniques. In some scenarios, if the error/difference between the matching known feature descriptors and the computed online estimate is above a threshold, the electronic device 100 may snap its estimated pose to the pose indicated by the known feature descriptors.
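- A toy sketch of this interpolate-or-snap refinement, with poses simplified to 3-vectors and a plain average standing in for the interpolation (both simplifying assumptions):

```python
import numpy as np

def refine_pose(estimated, matched_poses, snap_threshold):
    """Refine an estimated pose against the poses indicated by matching
    known feature descriptors: interpolate when the discrepancy is small,
    snap to the map-indicated pose when it exceeds the threshold."""
    estimated = np.asarray(estimated, dtype=float)
    anchor = np.mean(np.asarray(matched_poses, dtype=float), axis=0)
    error = float(np.linalg.norm(estimated - anchor))
    if error > snap_threshold:
        return anchor                      # snap to the known-descriptor pose
    return 0.5 * (estimated + anchor)      # interpolate between the two

# A small discrepancy blends the poses; a large one snaps to the map.
print(refine_pose([0.0, 0.0, 0.0], [[0.2, 0.0, 0.0]], snap_threshold=1.0))   # blended
print(refine_pose([0.0, 0.0, 0.0], [[3.0, 0.0, 0.0]], snap_threshold=1.0))   # snapped
```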
- In at least one embodiment, the concurrent odometry and mapping module 150 generates estimated poses (i.e., tracks motion) of the electronic device 100 at a relatively high rate based on the image sensor data 134, 136 and the non-image sensor data 142 for output to an API. The concurrent odometry and mapping module 150 also generates feature descriptors based on the image sensor data and the non-visual sensor data. The concurrent odometry and mapping module 150 stores a plurality of maps containing known feature descriptors, from which it builds a three-dimensional representation of the local environment 112. The concurrent odometry and mapping module 150 uses the known feature descriptors to map the local environment. For example, the concurrent odometry and mapping module 150 can use the known feature descriptors to generate a map file that indicates the position of each feature included in the known feature descriptors in a frame of reference for the electronic device 100. As the concurrent odometry and mapping module 150 generates new feature descriptors based on the image sensor data and the non-visual sensor data, it periodically augments the three-dimensional representation of the local environment 112 by matching the generated feature descriptors to the known feature descriptors. The concurrent odometry and mapping module 150 uses the three-dimensional representation of the environment 112 to periodically correct drift in the tracked motion. In this manner, the concurrent odometry and mapping module 150 generates a locally accurate estimated pose for output to the API at a relatively high frequency, and periodically corrects global drift in the estimated pose to generate a localized pose using the three-dimensional representation of the local environment 112. The estimated and localized poses can be used to support any of a variety of location-based services. For example, in one embodiment the estimated and localized poses can be used to generate a virtual reality environment, or portion thereof, representing the local environment of the electronic device 100. -
FIG. 2 illustrates the components of a concurrent odometry and mapping module 250 of the electronic device 100 of FIG. 1 . The concurrent odometry and mapping module 250 includes a front-end motion tracking module 210, a back-end mapping module 220, and a localization module 230. The concurrent odometry and mapping module 250 is configured to output localized and estimated poses to an API module 240. The concurrent odometry and mapping module 250 is configured to track motion to estimate a current pose of the electronic device and update a map of the environment to localize the estimated current pose. In some embodiments, the concurrent odometry and mapping module 250 is configured to track motion (estimate a pose) at a first, relatively higher rate, and to update a map of the environment to be used to localize the estimated pose at a second, relatively lower rate.
- The front-end motion tracking module 210 is configured to receive visual sensor data 134, 136 from the imaging cameras 114 and 116, depth data 138 from the depth sensor 120, and inertial sensor data 142 from the non-image sensors (not shown) of FIG. 1 . The front-end motion tracking module 210 generates estimated poses 214 from the received sensor data, and generates feature descriptors 215 of spatial features of objects in the local environment 112. In some embodiments, the front-end motion tracking module 210 stores a limited history of tracked motion (e.g., a single prior session, or a single prior time period). In some embodiments, the front-end motion tracking module 210 estimates a current pose of the electronic device 100 by generating linearization points based on the generated feature descriptors and solving a non-linear estimation of the spatial features based on the linearization points and previously-generated linearization points based on the stored limited history of tracked motion. In some embodiments, for purposes of solving the non-linear estimation of the spatial features, the front-end motion tracking module treats any previously-generated estimates of 3D point positions as a set of fixed values. Because the previously-generated linearization points are treated as non-variable, the computational burden of solving the non-linear estimation of the spatial features is lower than it would be if the previously-generated linearization points were treated as variable. However, any errors in the previously-generated linearization points may not be rectified by the solution of the non-linear estimation. Accordingly, the estimated current pose may differ from the actual current position and orientation of the electronic device 100.
- In some embodiments, the front-end motion tracking module 210 updates the estimated pose 214 at a relatively high rate, based on a continuous or high-frequency receipt of sensor data. Based on the sensor data, the front-end motion tracking module 210 is configured to generate an estimated pose 214 that is locally accurate, but subject to global drift. The front-end motion tracking module 210 provides the estimated pose 214 to an API module 240, which is configured to use the estimated pose 214 to generate a virtual reality environment, or portion thereof, representing the local environment of the electronic device 100. The front-end motion tracking module 210 provides the generated feature descriptors 215 to the mapping module 220. The front-end motion tracking module 210 periodically queries the localization module 230 to check for a localized pose 235. When the localization module 230 has generated a localized pose 235, the localization module 230 provides the localized pose 235 to the motion tracking module 210, which provides the localized pose 235 to the API 240. In some embodiments, the localization module 230 updates the localized pose 235 at a relatively low rate, due to the computational demands of generating the localized pose 235.
- The mapping module 220 is configured to store a plurality of maps (not shown) including known feature descriptors and to receive generated feature descriptors 215 from the motion tracking module 210. The stored plurality of maps form a compressed history of the environment and tracked motion of the electronic device 100. The mapping module 220 is configured to augment the stored plurality of maps with newly generated tracked motion. In some embodiments, the mapping module 220 receives generated feature descriptors from the motion tracking module 210 periodically, for example, every five seconds. In some embodiments, the mapping module 220 receives generated feature descriptors 215 from the front-end motion tracking module 210 after a threshold amount of sensor data has been received by the front-end motion tracking module 210. The mapping module 220 builds a three-dimensional representation of the local environment 112 of the electronic device 100 based on the known feature descriptors of the stored plurality of maps and the generated feature descriptors received from the motion tracking module 210. The mapping module 220 matches the one or more spatial features to spatial features of the plurality of stored maps to generate the three-dimensional representation 225 for the electronic device 100. In some embodiments, the mapping module 220 searches each generated feature descriptor 215 to determine any matching known feature descriptors of the stored plurality of maps.
- In some embodiments, the mapping module 220 adds the generated feature descriptors received from the motion tracking module 210 by generating estimates of 3D point positions based on the generated feature descriptors and solving a non-linear estimation of the three-dimensional representation based on the device poses and 3D point positions based on the stored feature descriptors and the data from the inertial sensors. In some embodiments, the previously-generated linearization points are considered variable for purposes of solving the non-linear estimation of the three-dimensional representation. The mapping module 220 provides the three-dimensional representation 225 of the local environment 112 to the localization module 230.
- The localization module 230 is configured to use the matched descriptors to align the estimated pose 214 with the stored plurality of maps, such as by applying a loop-closure algorithm. Thus, the localization module 230 can use matched feature descriptors to estimate a transformation for one or more of the stored plurality of maps, whereby the localization module 230 transforms geometric data associated with the generated feature descriptors of the estimated pose 214 having matching descriptors to be aligned with geometric data associated with a stored map having a corresponding matching descriptor. When the localization module 230 finds a sufficient number of matching feature descriptors from the generated feature descriptors 215 and a stored map to confirm that the generated feature descriptors 215 and the stored map contain descriptions of common visual landmarks, the localization module 230 performs a transformation between the generated feature descriptors 215 and the matching known feature descriptors, aligning the geometric data of the matching feature descriptors. Thereafter, the localization module 230 can apply a co-optimization algorithm to refine the alignment of the pose and scene of the estimated pose 214 of the electronic device 100 to generate a localized pose 235.
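- One standard way to estimate such an aligning transformation from matched geometric data is a least-squares rigid alignment (the Kabsch/Umeyama method); the following sketch is illustrative and is not asserted to be the algorithm of the disclosed embodiments. The observed-descriptor geometry plays the role of `observed_points` and the stored map's geometry the role of `map_points`:

```python
import numpy as np

def estimate_alignment(map_points, observed_points):
    """Estimate the rigid transform (R, t) aligning observed 3D points to
    their stored-map counterparts. Inputs are Nx3 arrays of corresponding
    points (N >= 3, non-degenerate); scale estimation is omitted."""
    P = np.asarray(observed_points, dtype=float)
    Q = np.asarray(map_points, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # cross-covariance of the sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # rotation, reflection excluded
    t = q_mean - R @ p_mean
    return R, t                                # map_point ≈ R @ observed + t
```
-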
FIG. 3 illustrates the components of a front-end motion tracking module 310 of FIGS. 1 and 2 . The motion tracking module 310 includes a feature identification module 312 and an environment mapper 320. Each of these modules represents hardware, software, or a combination thereof, configured to execute the operations as described herein. In particular, the feature identification module 312 is configured to receive imagery 305, representing images captured by the imaging sensors 114, 116, and 118, as well as the non-visual sensor data 142. Based on this received data, the feature identification module 312 identifies features in the imagery 305 by generating feature descriptors 315 and comparing the feature descriptors to known feature descriptors from the stored limited history of tracked motion as described above with respect to FIG. 2 . The feature identification module 312 provides the generated feature descriptors 315 to the mapping module 220. The feature identification module 312 additionally stores the feature descriptors 315, together with any associated non-visual data, as localization data 317. In at least one embodiment, the localization data 317 can be used by the electronic device 100 to estimate one or more poses of the electronic device 100 as it is moved through different locations and orientations in its local environment. These estimated poses can be used in conjunction with previously generated and stored map information for the local environment to support or enhance location-based services of the electronic device 100.
- The environment mapper 320 is configured to generate or modify a locally accurate estimated pose 214 of the electronic device 100 based on the localization data 317. In particular, the environment mapper 320 analyzes the feature descriptors in the localization data 317 to identify the location of the features in a frame of reference for the electronic device 100. For example, each feature descriptor can include location data indicating a relative position of the corresponding feature from the electronic device 100. In some embodiments, the environment mapper 320 generates linearization points based on the localization data 317 and solves a non-linear estimation, such as least squares, of the environment based on the linearization points and previously-generated linearization points based on the stored feature descriptors from the stored limited history of tracked motion. The environment mapper 320 estimates the evolution of the device pose over time as well as the positions of 3D points in the environment 112. To find values for these quantities that best match the sensor data, the environment mapper 320 solves a non-linear optimization problem. In some embodiments, the environment mapper 320 solves the non-linear optimization problem by linearizing the problem and applying standard techniques for solving linear systems of equations. In some embodiments, the environment mapper 320 treats the previously-generated linearization points as fixed for purposes of solving the non-linear estimation of the environment. The environment mapper 320 can reconcile the relative positions of the different features to identify the location of each feature in the frame of reference, and store these locations in a locally accurate estimated pose 214. The front-end motion tracking module 310 provides and updates the estimated pose 214 at a relatively high rate to an API module 240 of the electronic device 100 to, for example, generate a virtual reality display of the local environment.
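- The computational effect of holding previously-generated linearization points fixed can be illustrated with a toy Gauss-Newton solver in which landmark positions are constants and only the device position is a free variable; freeing the landmarks as well (as the back-end mapping module does) would add columns to the Jacobian and enlarge the linear system solved at each iteration. The 2D range-based setup is an illustrative assumption:

```python
import numpy as np

def solve_pose_fixed_points(landmarks, ranges, x0, iters=10):
    """Gauss-Newton solver in which previously estimated point positions
    (here, 2D landmarks) are held fixed: only the device position is a
    free variable, so each iteration solves a small linear system."""
    x = np.asarray(x0, dtype=float)
    L = np.asarray(landmarks, dtype=float)
    r_meas = np.asarray(ranges, dtype=float)
    for _ in range(iters):
        diffs = x - L                         # geometry w.r.t. the fixed points
        dists = np.linalg.norm(diffs, axis=1)
        residual = dists - r_meas
        J = diffs / dists[:, None]            # Jacobian over the pose only
        dx = np.linalg.lstsq(J, -residual, rcond=None)[0]
        x = x + dx
    return x

# Trilateration from three fixed landmarks recovers roughly (1, 2).
lm = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
true = np.array([1.0, 2.0])
rng = np.linalg.norm(true - lm, axis=1)
print(solve_pose_fixed_points(lm, rng, x0=[0.5, 0.5]))
```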
- The environment mapper 320 is also configured to periodically query the localization module 230 for an updated localized pose 235. In some embodiments, the localization module 230 updates the localized pose 235 at a relatively low rate. When an updated localized pose 235 is available, the localization module 230 provides the updated localized pose 235 to the environment mapper 320. The environment mapper 320 provides the updated localized pose 235 to the API module 240. -
FIG. 4 is a diagram illustrating a back-end mapping module 420 of the concurrent odometry and mapping module 250 of FIG. 2 configured to generate and add to a three-dimensional representation of the environment of the electronic device 100 based on generated feature descriptors 315 and a plurality of stored maps 417 in accordance with at least one embodiment of the present disclosure. The back-end mapping module 420 includes a storage module 415 and a feature descriptor matching module 425.
- The storage module 415 is configured to store a plurality of maps 417 of the environment of the electronic device 100. In some embodiments, the plurality of maps 417 may include maps that were previously generated by the electronic device 100 during prior mapping sessions. In some embodiments, the plurality of maps 417 may also include VR or AR maps that contain features not found in the physical environment of the electronic device 100. The plurality of maps 417 include stored (known) feature descriptors 422 of spatial features of objects in the environment that can collectively be used to generate a three-dimensional representation 225 of the environment.
- The feature descriptor matching module 425 is configured to receive generated feature descriptors 315 from the motion tracking module 210. The feature descriptor matching module 425 compares the generated feature descriptors 315 to the known feature descriptors 422. The feature descriptor matching module 425 builds a three-dimensional representation 225 of the local environment 112 of the electronic device 100 based on the known feature descriptors 422 of the stored plurality of maps 417 and the generated feature descriptors 315 received from the front-end motion tracking module 210.
- In some embodiments, the feature descriptor matching module 425 adds the generated feature descriptors 315 received from the motion tracking module 210 by generating linearization points based on the generated feature descriptors and solving a non-linear estimation of the three-dimensional representation based on the linearization points and previously-generated linearization points based on the known feature descriptors 422. In some embodiments, the previously-generated linearization points are considered variable for purposes of solving the non-linear estimation of the three-dimensional representation. The feature descriptor matching module 425 provides the three-dimensional representation 225 of the environment to the localization module 230 and, in some embodiments, updates the three-dimensional representation 225 at a relatively low rate.
- The mapping module 420 receives generated feature descriptors 315 from the motion tracking module 210 periodically. In some embodiments, the mapping module 420 receives generated feature descriptors 315 from the front-end motion tracking module 210 at regular intervals of time (e.g., every five seconds). In some embodiments, the mapping module 420 receives generated feature descriptors 315 from the front-end motion tracking module 210 at the conclusion of a mapping session or after a threshold amount of sensor data has been received by the front-end motion tracking module 210. -
FIG. 5 is a diagram illustrating a localization module 530 of the concurrent odometry and mapping module 250 of FIG. 2 configured to generate a localized pose 235 of the electronic device 100 in accordance with at least one embodiment of the present disclosure. The localization module 530 includes a feature descriptor discrepancy detector 515 and a loop closure module 525.
- The feature descriptor discrepancy detector 515 is configured to receive a three-dimensional representation 225 of the environment from the back-end mapping module 220 of the concurrent odometry and mapping module 250. The feature descriptor discrepancy detector 515 analyzes the matched feature descriptors of the three-dimensional representation 225 and identifies discrepancies between matched feature descriptors. The feature descriptor discrepancy detector 515 transforms geometric data associated with the generated feature descriptors of the estimated pose 214 having matching descriptors to be aligned with geometric data associated with a stored map having a corresponding matching descriptor. When the localization module 230 finds a sufficient number of matching feature descriptors from the generated feature descriptors 215 and a stored map to confirm that the generated feature descriptors 215 and the stored map contain descriptions of common visual landmarks, the localization module 230 computes a transformation between the generated feature descriptors 215 and the matching known feature descriptors, aligning the geometric data of the matching feature descriptors.
- The loop closure module 525 is configured to find a matching pose of the device given the 3D position points in the environment and their observations in the current image by solving a co-optimization algorithm to refine the alignment of the matching feature descriptors. The co-optimization problem may be solved by a Gauss-Newton or Levenberg-Marquardt algorithm, or another known algorithm for optimizing transformations, to generate a localized pose 235 of the electronic device 100. In some embodiments, the loop closure module 525 treats known feature descriptors as variable. The loop closure module 525 thus generates a localized pose 235 that corrects drift in the estimated pose 214, and sends the localized pose 235 to the front-end motion tracking module 210. The localized pose 235 can be fed to an application executing at the electronic device 100 to enable augmented reality or other location-based functionality by allowing the electronic device 100 to more efficiently and accurately recognize a local environment 112 that it has previously traversed.
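- A sketch of such a refinement, using Levenberg-Marquardt over reprojection residuals with a simplified pose parameterization, unit focal length, and numerical Jacobians (all simplifying assumptions, not the disclosed implementation):

```python
import numpy as np

def exp_so3(w):
    """Rodrigues' formula: rotation matrix from an axis-angle 3-vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def residuals(params, points_3d, obs_2d):
    """Reprojection residuals for a pose [axis-angle, translation] with a
    unit-focal pinhole camera; points are assumed to lie in front of it."""
    R, t = exp_so3(params[:3]), params[3:]
    cam = points_3d @ R.T + t
    proj = cam[:, :2] / cam[:, 2:3]
    return (proj - obs_2d).ravel()

def refine_pose_lm(points_3d, obs_2d, params0, iters=20, lam=1e-3):
    """Levenberg-Marquardt refinement of the device pose given known 3D
    points and their observations in the current image."""
    params = np.asarray(params0, dtype=float)
    for _ in range(iters):
        r = residuals(params, points_3d, obs_2d)
        J = np.empty((r.size, 6))
        eps = 1e-6
        for j in range(6):                    # finite-difference Jacobian
            dp = np.zeros(6)
            dp[j] = eps
            J[:, j] = (residuals(params + dp, points_3d, obs_2d) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(6), -J.T @ r)
        if np.linalg.norm(residuals(params + step, points_3d, obs_2d)) < np.linalg.norm(r):
            params, lam = params + step, lam * 0.5    # accept, relax damping
        else:
            lam *= 10.0                               # reject, increase damping
    return params
```
-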
FIG. 6 is a flow diagram illustrating a method 600 of an electronic device to track motion and update a three-dimensional representation of the environment in accordance with at least one embodiment of the present disclosure. The method 600 initiates at block 602 where the electronic device 100 captures imagery and non-visual data as it is moved by a user through different poses in a local environment. At block 604, the front-end motion tracking module 210 identifies features of the local environment based on the imagery 305 and non-image sensor data 142, and generates feature descriptors 215 for the identified features for the back-end mapping module 220 and localization data 317. At block 606, the motion tracking module 210 uses the localization data 317 to estimate a current pose 214 of the electronic device 100 in the local environment 112. The estimated pose 214 can be used to support location-based functionality for the electronic device 100. For example, the estimated pose 214 can be used to orient a user of the electronic device 100 in a virtual reality or augmented reality application executed at the electronic device 100.
- At block 608, the back-end mapping module 220 compares the generated feature descriptors 215 to known feature descriptors of a plurality of stored maps. At block 610, the back-end mapping module 220 builds and/or updates a three-dimensional representation 225 of the environment of the electronic device, which it provides to the localization module 230. At block 612, the localization module 230 identifies discrepancies between matching feature descriptors and performs a loop closure to align the estimated pose 214 with the three-dimensional representation 225. At block 614, the localization module localizes the current pose of the electronic device, and the concurrent odometry and mapping module 250 provides the localized pose to an API module 240.
- In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
- A computer readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).
- Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.
- Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
Claims (13)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/224,414 US20230360242A1 (en) | 2016-05-18 | 2023-07-20 | System and method for concurrent odometry and mapping |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662337987P | 2016-05-18 | 2016-05-18 | |
US15/595,617 US10802147B2 (en) | 2016-05-18 | 2017-05-15 | System and method for concurrent odometry and mapping |
US16/875,488 US11734846B2 (en) | 2016-05-18 | 2020-05-15 | System and method for concurrent odometry and mapping |
US18/224,414 US20230360242A1 (en) | 2016-05-18 | 2023-07-20 | System and method for concurrent odometry and mapping |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/875,488 Division US11734846B2 (en) | 2016-05-18 | 2020-05-15 | System and method for concurrent odometry and mapping |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230360242A1 true US20230360242A1 (en) | 2023-11-09 |
Family
ID=59071067
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/595,617 Active US10802147B2 (en) | 2016-05-18 | 2017-05-15 | System and method for concurrent odometry and mapping |
US16/875,488 Active 2038-04-04 US11734846B2 (en) | 2016-05-18 | 2020-05-15 | System and method for concurrent odometry and mapping |
US18/224,414 Pending US20230360242A1 (en) | 2016-05-18 | 2023-07-20 | System and method for concurrent odometry and mapping |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/595,617 Active US10802147B2 (en) | 2016-05-18 | 2017-05-15 | System and method for concurrent odometry and mapping |
US16/875,488 Active 2038-04-04 US11734846B2 (en) | 2016-05-18 | 2020-05-15 | System and method for concurrent odometry and mapping |
Country Status (4)
Country | Link |
---|---|
US (3) | US10802147B2 (en) |
EP (1) | EP3458941A1 (en) |
CN (2) | CN108700947B (en) |
WO (1) | WO2017201282A1 (en) |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9766074B2 (en) | 2008-03-28 | 2017-09-19 | Regents Of The University Of Minnesota | Vision-aided inertial navigation |
US10012504B2 (en) | 2014-06-19 | 2018-07-03 | Regents Of The University Of Minnesota | Efficient vision-aided inertial navigation using a rolling-shutter camera with inaccurate timestamps |
US10802147B2 (en) * | 2016-05-18 | 2020-10-13 | Google Llc | System and method for concurrent odometry and mapping |
US11466990B2 (en) * | 2016-07-22 | 2022-10-11 | Regents Of The University Of Minnesota | Square-root multi-state constraint Kalman filter for vision-aided inertial navigation system |
CN107065195B (en) * | 2017-06-02 | 2023-05-02 | 那家全息互动(深圳)有限公司 | Modularized MR equipment imaging method |
US11062517B2 (en) | 2017-09-27 | 2021-07-13 | Fisher-Rosemount Systems, Inc. | Virtual access to a limited-access object |
JP6719494B2 (en) * | 2018-02-07 | 2020-07-08 | 直之 村上 | A method of calculating the three-dimensional drive numerical value of the control device of the three-dimensional numerical drive by the drive measurement of the tracking laser distance measuring device. |
CN108876854B (en) * | 2018-04-27 | 2022-03-08 | 腾讯科技(深圳)有限公司 | Method, device and equipment for relocating camera attitude tracking process and storage medium |
US11940277B2 (en) | 2018-05-29 | 2024-03-26 | Regents Of The University Of Minnesota | Vision-aided inertial navigation system for ground vehicle localization |
US11227435B2 (en) | 2018-08-13 | 2022-01-18 | Magic Leap, Inc. | Cross reality system |
US10957112B2 (en) | 2018-08-13 | 2021-03-23 | Magic Leap, Inc. | Cross reality system |
US11244509B2 (en) * | 2018-08-20 | 2022-02-08 | Fisher-Rosemount Systems, Inc. | Drift correction for industrial augmented reality applications |
WO2020072985A1 (en) | 2018-10-05 | 2020-04-09 | Magic Leap, Inc. | Rendering location specific virtual content in any location |
KR102627453B1 (en) * | 2018-10-17 | 2024-01-19 | 삼성전자주식회사 | Method and device to estimate position |
US11073407B2 (en) * | 2018-10-19 | 2021-07-27 | Htc Corporation | Electronic device and pose-calibration method thereof |
CN111951325B (en) * | 2019-05-14 | 2024-01-12 | 虹软科技股份有限公司 | Pose tracking method, pose tracking device and electronic equipment |
US11010921B2 (en) * | 2019-05-16 | 2021-05-18 | Qualcomm Incorporated | Distributed pose estimation |
US10832417B1 (en) * | 2019-06-04 | 2020-11-10 | International Business Machines Corporation | Fusion of visual-inertial-odometry and object tracker for physically anchored augmented reality |
JP2022551735A (en) | 2019-10-15 | 2022-12-13 | マジック リープ, インコーポレイテッド | Cross-reality system using wireless fingerprints |
US11568605B2 (en) | 2019-10-15 | 2023-01-31 | Magic Leap, Inc. | Cross reality system with localization service |
JP2023501952A (en) | 2019-10-31 | 2023-01-20 | マジック リープ, インコーポレイテッド | Cross-reality system with quality information about persistent coordinate frames |
CN112785715B (en) * | 2019-11-08 | 2024-06-25 | 华为技术有限公司 | Virtual object display method and electronic device |
JP7525603B2 (en) | 2019-11-12 | 2024-07-30 | マジック リープ, インコーポレイテッド | Cross-reality system with location services and shared location-based content |
EP4073763A4 (en) | 2019-12-09 | 2023-12-27 | Magic Leap, Inc. | Cross reality system with simplified programming of virtual content |
KR20210081576A (en) | 2019-12-24 | 2021-07-02 | 삼성전자주식회사 | Method for indoor positioning and the electronic device |
CN115066281B (en) * | 2020-02-12 | 2024-09-03 | Oppo广东移动通信有限公司 | Gesture assessment data for Augmented Reality (AR) applications |
WO2021163295A1 (en) | 2020-02-13 | 2021-08-19 | Magic Leap, Inc. | Cross reality system with prioritization of geolocation information for localization |
EP4104144A4 (en) * | 2020-02-13 | 2024-06-05 | Magic Leap, Inc. | Cross reality system for large scale environments |
US11410395B2 (en) | 2020-02-13 | 2022-08-09 | Magic Leap, Inc. | Cross reality system with accurate shared maps |
CN115398314A (en) | 2020-02-13 | 2022-11-25 | 奇跃公司 | Cross reality system for map processing using multi-resolution frame descriptors |
US11551430B2 (en) | 2020-02-26 | 2023-01-10 | Magic Leap, Inc. | Cross reality system with fast localization |
JP2023524446A (en) | 2020-04-29 | 2023-06-12 | マジック リープ, インコーポレイテッド | Cross-reality system for large-scale environments |
US11816887B2 (en) | 2020-08-04 | 2023-11-14 | Fisher-Rosemount Systems, Inc. | Quick activation techniques for industrial augmented reality applications |
WO2022201825A1 (en) * | 2021-03-26 | 2022-09-29 | ソニーグループ株式会社 | Information processing device, information processing method, and information processing system |
US11557100B2 (en) * | 2021-04-08 | 2023-01-17 | Google Llc | Augmented reality content experience sharing using digital multimedia files |
CN113094462B (en) * | 2021-04-30 | 2023-10-24 | 腾讯科技(深圳)有限公司 | Data processing method and device and storage medium |
US11776012B2 (en) * | 2021-12-06 | 2023-10-03 | Jpmorgan Chase Bank, N.A. | Systems and methods for third-party service integration in an application |
US12106590B1 (en) | 2024-02-22 | 2024-10-01 | Capital One Services, Llc | Managed video capture |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100094460A1 (en) * | 2008-10-09 | 2010-04-15 | Samsung Electronics Co., Ltd. | Method and apparatus for simultaneous localization and mapping of robot |
US20140379256A1 (en) * | 2013-05-02 | 2014-12-25 | The Johns Hopkins University | Mapping and Positioning System |
US20150071524A1 (en) * | 2013-09-11 | 2015-03-12 | Motorola Mobility Llc | 3D Feature Descriptors with Camera Pose Information |
US20150193949A1 (en) * | 2014-01-06 | 2015-07-09 | Oculus Vr, Llc | Calibration of multiple rigid bodies in a virtual reality system |
US20170336511A1 (en) * | 2016-05-18 | 2017-11-23 | Google Inc. | System and method for concurrent odometry and mapping |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6176837B1 (en) | 1998-04-17 | 2001-01-23 | Massachusetts Institute Of Technology | Motion tracking system |
US6474159B1 (en) | 2000-04-21 | 2002-11-05 | Intersense, Inc. | Motion-tracking |
US7145478B2 (en) | 2002-12-17 | 2006-12-05 | Evolution Robotics, Inc. | Systems and methods for controlling a density of visual landmarks in a visual simultaneous localization and mapping system |
US7660920B2 (en) * | 2005-09-30 | 2010-02-09 | Rockwell Automation Technologies, Inc. | Multi-rate optimized connection between industrial control scanner and industrial controller |
US20080012935A1 (en) * | 2005-11-22 | 2008-01-17 | Gateway Inc. | Inappropriate content detection and distribution prevention for wireless cameras/camcorders with e-mail capabilities and camera phones |
US8174568B2 (en) * | 2006-12-01 | 2012-05-08 | Sri International | Unified framework for precise vision-aided navigation |
US8751151B2 (en) | 2012-06-12 | 2014-06-10 | Trx Systems, Inc. | System and method for localizing a trackee at a location and mapping the location using inertial sensor information |
US9195345B2 (en) * | 2010-10-28 | 2015-11-24 | Microsoft Technology Licensing, Llc | Position aware gestures with visual feedback as input method |
US8711206B2 (en) * | 2011-01-31 | 2014-04-29 | Microsoft Corporation | Mobile camera localization using depth maps |
CN102426019B (en) | 2011-08-25 | 2014-07-02 | 航天恒星科技有限公司 | Unmanned aerial vehicle scene matching auxiliary navigation method and system |
US9197891B1 (en) | 2012-01-02 | 2015-11-24 | Marvell International Ltd. | Systems and methods for periodic structure handling for motion compensation |
US9552648B1 (en) | 2012-01-23 | 2017-01-24 | Hrl Laboratories, Llc | Object tracking with integrated motion-based object detection (MogS) and enhanced kalman-type filtering |
US10139985B2 (en) | 2012-06-22 | 2018-11-27 | Matterport, Inc. | Defining, displaying and interacting with tags in a three-dimensional model |
EP2871629B1 (en) | 2012-07-03 | 2018-08-15 | Clarion Co., Ltd. | Vehicle-mounted environment recognition device |
US9488492B2 (en) | 2014-03-18 | 2016-11-08 | Sri International | Real-time system for multi-modal 3D geospatial mapping, object recognition, scene annotation and analytics |
US9177404B2 (en) | 2012-10-31 | 2015-11-03 | Qualcomm Incorporated | Systems and methods of merging multiple maps for computer vision based tracking |
US9782141B2 (en) | 2013-02-01 | 2017-10-10 | Kineticor, Inc. | Motion tracking system for real time adaptive motion compensation in biomedical imaging |
US9606848B2 (en) | 2013-03-12 | 2017-03-28 | Raytheon Company | Iterative Kalman filtering |
US9259840B1 (en) * | 2013-03-13 | 2016-02-16 | Hrl Laboratories, Llc | Device and method to localize and control a tool tip with a robot arm |
CN104424382B (en) * | 2013-08-21 | 2017-09-29 | 北京航天计量测试技术研究所 | A kind of multi-characteristic points position and attitude redundancy calculation method |
US9405972B2 (en) | 2013-09-27 | 2016-08-02 | Qualcomm Incorporated | Exterior hybrid photo mapping |
CN103472478A (en) | 2013-09-29 | 2013-12-25 | 中核能源科技有限公司 | System and method for automatically drawing nuclear power station neutron dose equivalent rate distribution diagram |
US9524434B2 (en) | 2013-10-04 | 2016-12-20 | Qualcomm Incorporated | Object tracking based on dynamically built environment map data |
KR102016551B1 (en) * | 2014-01-24 | 2019-09-02 | 한화디펜스 주식회사 | Apparatus and method for estimating position |
CN106164982B (en) * | 2014-04-25 | 2019-05-03 | 谷歌技术控股有限责任公司 | Image-based electronic equipment positioning |
US9996976B2 (en) | 2014-05-05 | 2018-06-12 | Avigilon Fortress Corporation | System and method for real-time overlay of map features onto a video feed |
US9483879B2 (en) | 2014-09-18 | 2016-11-01 | Microsoft Technology Licensing, Llc | Using free-form deformations in surface reconstruction |
EP3078935A1 (en) | 2015-04-10 | 2016-10-12 | The European Atomic Energy Community (EURATOM), represented by the European Commission | Method and device for real-time mapping and localization |
US10146414B2 (en) | 2015-06-09 | 2018-12-04 | Pearson Education, Inc. | Augmented physical and virtual manipulatives |
US9940542B2 (en) | 2015-08-11 | 2018-04-10 | Google Llc | Managing feature data for environment mapping on an electronic device |
US10073531B2 (en) * | 2015-10-07 | 2018-09-11 | Google Llc | Electronic device pose identification based on imagery and non-image sensor data |
US9652896B1 (en) | 2015-10-30 | 2017-05-16 | Snap Inc. | Image based tracking in augmented reality systems |
CN105424026B (en) * | 2015-11-04 | 2017-07-07 | 中国人民解放军国防科学技术大学 | Indoor navigation and localization method and system based on point cloud trajectories |
CN105447867B (en) * | 2015-11-27 | 2018-04-10 | 西安电子科技大学 | Space target pose estimation method based on ISAR images |
- 2017
- 2017-05-15 US US15/595,617 patent/US10802147B2/en active Active
- 2017-05-18 CN CN201780013650.1A patent/CN108700947B/en not_active Expired - Fee Related
- 2017-05-18 WO PCT/US2017/033321 patent/WO2017201282A1/en active Search and Examination
- 2017-05-18 EP EP17730977.0A patent/EP3458941A1/en active Pending
- 2017-05-18 CN CN202111282461.9A patent/CN114185427B/en active Active
- 2020
- 2020-05-15 US US16/875,488 patent/US11734846B2/en active Active
- 2023
- 2023-07-20 US US18/224,414 patent/US20230360242A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100094460A1 (en) * | 2008-10-09 | 2010-04-15 | Samsung Electronics Co., Ltd. | Method and apparatus for simultaneous localization and mapping of robot |
US20140379256A1 (en) * | 2013-05-02 | 2014-12-25 | The Johns Hopkins University | Mapping and Positioning System |
US20150071524A1 (en) * | 2013-09-11 | 2015-03-12 | Motorola Mobility Llc | 3D Feature Descriptors with Camera Pose Information |
US20150193949A1 (en) * | 2014-01-06 | 2015-07-09 | Oculus Vr, Llc | Calibration of multiple rigid bodies in a virtual reality system |
US20170336511A1 (en) * | 2016-05-18 | 2017-11-23 | Google Inc. | System and method for concurrent odometry and mapping |
Also Published As
Publication number | Publication date |
---|---|
EP3458941A1 (en) | 2019-03-27 |
US20170336511A1 (en) | 2017-11-23 |
US20200278449A1 (en) | 2020-09-03 |
US10802147B2 (en) | 2020-10-13 |
CN108700947B (en) | 2021-11-16 |
CN108700947A (en) | 2018-10-23 |
US11734846B2 (en) | 2023-08-22 |
CN114185427A (en) | 2022-03-15 |
CN114185427B (en) | 2024-07-19 |
WO2017201282A1 (en) | 2017-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11734846B2 (en) | 2023-08-22 | System and method for concurrent odometry and mapping |
US10339708B2 (en) | | Map summarization and localization |
US11017610B2 (en) | | System and method for fault detection and recovery for concurrent odometry and mapping |
US10096129B2 (en) | | Three-dimensional mapping of an environment |
US10937214B2 (en) | | System and method for merging maps |
US10073531B2 (en) | 2018-09-11 | Electronic device pose identification based on imagery and non-image sensor data |
JP6198230B2 (en) | | Head posture tracking using depth camera |
US9111351B2 (en) | | Minimizing drift using depth camera images |
EP3335153B1 (en) | | Managing feature data for environment mapping on an electronic device |
Alcantarilla et al. | | Visual odometry priors for robust EKF-SLAM |
CN112465907B (en) | | Indoor visual navigation method and system |
US11935286B2 (en) | | Method and device for detecting a vertical planar surface |
Suhm | | Vision and SLAM on a highly dynamic mobile two-wheeled robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: GOOGLE LLC, CALIFORNIA
Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:064461/0013
Effective date: 20170929
Owner name: GOOGLE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NERURKAR, ESHA;LYNEN, SIMON;ZHAO, SHENG;REEL/FRAME:064454/0141
Effective date: 20170512
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |