WO2016162568A1 - Method and device for real-time mapping and localization - Google Patents
- Publication number
- WO2016162568A1 (PCT/EP2016/057935)
- Authority
- WO
- WIPO (PCT)
Classifications
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S7/4808—Evaluating distance, position or velocity data
- G05D1/024—Control of position or course in two dimensions using optical obstacle or wall sensors in combination with a laser
- G05D1/0272—Control of position or course in two dimensions using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
- G05D1/0274—Control of position or course in two dimensions using internal positioning means using mapping information stored in a memory device
- G06T15/08—Volume rendering
- G06T15/10—Geometric effects
- G06T7/20—Analysis of motion
- G06T7/579—Depth or shape recovery from multiple images from motion
- G06T2200/04—Indexing scheme involving 3D image data
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2215/16—Using real world measurements to influence rendering
Definitions
- the present invention generally relates to localization and mapping, especially in GPS-denied environments, such as indoors.
- inertial sensors, e.g. DLR's FootSLAM or Chirange Geospatial Indoor Tracking. They are small and inexpensive; however, the position accuracy is low and drifts significantly over time. Furthermore, inertial systems do not generate map information. Therefore, they are only suitable for positioning and navigation purposes, not for map generation.
- ZEB1 is a commercial product that uses 3D laser scanning for fast (indoor) mapping.
- the laser is mounted on a spring and an oscillating movement needs to be created by hand. It generates an accurate 3D model of the indoor environment.
- the system does not provide immediate feedback to the user, as data processing is carried out off-line. Hence, the system is suitable for mapping but not for real-time localization.
- a still further solution is a laser backpack developed at UC Berkeley. It is an R&D project which proposes a backpack equipped with several 2D line scanners used to generate a 3D model of indoor environments. Again, it does not provide for on-line visualization.
- LOAM (Lidar Odometry and Mapping)
- the present invention proposes, in a first aspect, a method for constructing a 3D reference map of an environment useable in (a method for) real-time mapping, localization and/or change analysis, comprising the following steps:
- the local trajectory optimization module comprises a local window mechanism optimizing a trajectory fragment composed of a set of poses and their associated point clouds with respect to a map built up to the last registered set, wherein points are preferably converted into world coordinates using pose interpolation in the SE(3) group and wherein a generalization of the Iterative Closest Point method is preferably used to find the trajectory that best aligns all the points to the map; wherein the local window mechanism operates such that, when the distance between the first and the last pose in the list is larger than a threshold, the cloud poses are optimized and a new list is produced with the refined pose and the input clouds.
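The local window trigger described in the bullet above can be sketched as follows. This is an illustrative reduction, assuming 3D positions only; rotations are omitted and the generalized-ICP optimizer is stubbed out, so the names are not the patent's:

```python
import math

# Illustrative sketch of the local window mechanism: poses and clouds are
# accumulated, and once the distance between the first and last pose
# exceeds a threshold, the fragment is optimized and a new list is
# started from the refined pose. "optimize_fragment" stands in for the
# generalized-ICP trajectory optimization and is an assumption.
class LocalWindow:
    def __init__(self, distance_threshold, optimize_fragment):
        self.threshold = distance_threshold
        self.optimize = optimize_fragment
        self.poses = []    # (x, y, z) positions; rotations omitted
        self.clouds = []   # point cloud associated with each pose

    def push(self, pose, cloud):
        """Add a pose/cloud pair; return the refined fragment once the
        window spans more than the distance threshold, else None."""
        self.poses.append(pose)
        self.clouds.append(cloud)
        if math.dist(self.poses[0], self.poses[-1]) > self.threshold:
            refined = self.optimize(self.poses, self.clouds)
            # seed the next window with the last refined pose and cloud
            self.poses, self.clouds = [refined[-1]], [self.clouds[-1]]
            return refined
        return None
```

In a real pipeline `optimize_fragment` would jointly refine the poses against the map; here an identity stub is enough to exercise the windowing logic.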
- the data structure is set to natively handle 3D points and is based on a hybrid structure composed of a sparse voxelized structure used to index a (compact, dense) list of features in the map representation, allowing constant-time random access in voxel coordinates independently of the map size and efficient storage of the data with scalability over the explored space.
- the data structure may maintain five different representations of the data stored, thereby granting consistency between internal data representations after each map update, the five representations being
- a sparse voxel representation V built with a parametrizable cell size, that stores in each cell, v_i ∈ V, the index of its corresponding feature in L, wherein features in L and cells in V are related in a one-to-one manner, based on the position of l_i and the cell size of V, and
- the present method may further comprise the step wherein, given an area of interest expressed by a central position and a radius, inner features are selected by looping over the elements stored in L and the kd-tree K is rebuilt as a fast mechanism for nearest neighbor searches.
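The hybrid structure and the area-of-interest selection described in the bullets above can be illustrated with a minimal sketch; a Python dict stands in for the sparse (OpenVDB-like) voxel grid, and the class and method names are assumptions:

```python
# Sketch of the hybrid map: a sparse voxel index maps voxel coordinates
# to indices in a compact dense feature list L, giving constant-time
# random access in voxel coordinates regardless of map size.
class HybridMap:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.L = []      # dense list of features (here: 3D points)
        self.V = {}      # sparse voxel grid: cell coords -> index into L

    def _cell(self, p):
        return tuple(int(c // self.cell_size) for c in p)

    def insert(self, point):
        cell = self._cell(point)
        if cell not in self.V:          # one feature per cell (one-to-one)
            self.V[cell] = len(self.L)
            self.L.append(point)

    def at(self, cell):
        """Constant-time random access in voxel coordinates."""
        idx = self.V.get(cell)
        return None if idx is None else self.L[idx]

    def select(self, center, radius):
        """Area-of-interest selection: loop over the dense list L and keep
        features within the radius (these would seed a local kd-tree)."""
        r2 = radius * radius
        return [p for p in self.L
                if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2]
```

The features returned by `select` would then be used to rebuild the local kd-tree K for fast nearest neighbor searches.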
- the invention relates to a method for real-time mapping, localization and change analysis of an environment, i.e. relative to the 3D reference map of the environment which is available from a method according to the first aspect of the invention as described above, or from such a 3D reference map already updated or modified through a previous run of the present method, in particular in a GPS-denied environment, preferably comprising the following steps:
- step (b) comprises the identification of a set of possible locations of the scanner based on the scanner data of step (a), said step (b) further comprising the following substeps:
- the candidate locations have a descriptor similar to q, i.e. the distance in descriptor space is smaller than a threshold radius r.
- the set of candidate locations can be recovered by performing a radial search on T given a threshold radius r in the descriptor space; preferably, for 360° horizontal-view scanner data, the candidate locations are increased by computing additional input descriptors by horizontally shifting range values, each descriptor corresponding to the readings that the scanner would produce if rotated on its local axis, and then rotating each resulting set of candidate locations according to the shift i,
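A brute-force illustration of this candidate search, under the assumption that descriptors are plain range vectors; the kd-tree T is replaced by a linear scan, and all function names are illustrative:

```python
# Radial search in descriptor space: keep every stored location whose
# descriptor lies within radius r of the query (brute force standing in
# for the kd-tree T of the patent).
def radial_search(descriptors, query, r):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [i for i, d in enumerate(descriptors) if dist(d, query) <= r]

def rotated_queries(ranges, n_shifts):
    """Horizontally shift range values, emulating the readings the
    scanner would produce if rotated in place on its local axis."""
    step = len(ranges) // n_shifts
    return [ranges[k * step:] + ranges[:k * step] for k in range(n_shifts)]

def candidate_locations(descriptors, ranges, r, n_shifts=4):
    """Union of the radial searches over all rotated query descriptors."""
    hits = set()
    for q in rotated_queries(ranges, n_shifts):
        hits.update(radial_search(descriptors, q, r))
    return sorted(hits)
```

With this scheme a location is still found when the scanner revisits it facing a different direction, since one of the shifted queries realigns the range vector.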
- step (b) further comprises the substeps
- step (b4) updating the set of candidate locations while the sensor moves by estimating the movement (using the odometer module as described in step (c)(i) of the method of the first aspect above) and re-evaluating the weight for each initial candidate pose based on the query results at the new pose, and
- the method may comprise the following steps
- each voxel is represented by the points around its centroid and their normal n, and a subset of voxels representing candidate floor cells, V' ⊂ V, is extracted by checking that the vertical component of their associated normals is dominant, i.e. |n_z| > τ, where τ is typically a value between 0.5 and 1
- the maximum vertical step size, and B in equation (8) stands for the simplified volume of the observer, centered over the floor cell v'.
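The normal-verticality check for candidate floor cells sketched in the bullets above could look like this; the function name and the dict representation of the voxel grid are assumptions:

```python
# A voxel is a candidate floor cell when the vertical (z) component of
# its associated unit normal is dominant, i.e. |n_z| > tau, with tau
# typically between 0.5 and 1 as stated in the text.
def candidate_floor_cells(cells, tau=0.8):
    """cells: dict mapping voxel coords -> unit normal (nx, ny, nz)."""
    return {c for c, n in cells.items() if abs(n[2]) > tau}
```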
- the map structure useable in the context of the present invention preferably comprises two different lists of elements that are stored and synchronized: a (compact) list of planes, L, and a (dense) grid of voxels, V, built with a specific voxel size, each plane l_i ∈ L storing a position in world coordinates, p_i, and a unit normal, n_i; wherein each voxel v_i ∈ V stores a current state that can be either full, empty or near: full voxels store an index to the plane l ∈ L whose associated position falls into them, empty cells store a null reference and near cells store an index to the plane l_v ∈ L whose associated position distance d_v to the voxel centre is the smallest; preferably a near voxel is considered only if the distance d_v is under a given threshold d_max, otherwise the voxel is considered empty
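A minimal sketch of the full/empty/near voxel states described above, with a brute-force nearest-plane search standing in for the synchronized structures; all names are illustrative, not the patent's:

```python
# Classify voxels as 'full' (a plane position falls into the cell),
# 'near' (closest plane position within d_max of the cell centre) or
# 'empty' (otherwise). Planes are (position, normal) pairs as in L.
def classify_voxels(planes, voxel_coords, cell_size, d_max):
    def centre(v):
        return tuple((c + 0.5) * cell_size for c in v)
    def cell_of(p):
        return tuple(int(c // cell_size) for c in p)
    full = {cell_of(p): i for i, (p, n) in enumerate(planes)}
    states = {}
    for v in voxel_coords:
        if v in full:
            states[v] = ('full', full[v])
            continue
        cx = centre(v)
        best, best_d = None, float('inf')
        for i, (p, n) in enumerate(planes):
            d = sum((a - b) ** 2 for a, b in zip(p, cx)) ** 0.5
            if d < best_d:
                best, best_d = i, d
        states[v] = ('near', best) if best_d <= d_max else ('empty', None)
    return states
```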
- the invention proposes a mobile laser scanning device for real-time mapping, localization and change analysis, in particular in GPS-denied environments, implementing one or more of the methods described herein.
- the invention relates to a mobile laser scanning device for real-time mapping, localization and change analysis, in particular in GPS-denied environments, comprising a real-time laser range scanner, a processing unit, a power supply unit and a hand-held visualization and control unit, wherein the real-time laser range scanner is capable of acquiring the environment at a rate of at least 5, preferably at least 10, frames per second to provide scanner data; the processing unit is arranged to analyze said scanner data and to provide processing results comprising 3D map/model, localization and change information to the hand-held visualization and control unit, which is arranged to display said processing results and to allow a user to control the mobile laser scanning device.
- such a device is thus capable of on-line, real-time processing, providing 3D mapping/modelling of the environment, precise localization of the user (with respect to the generated map or an existing map/model) and change detection with respect to a previously acquired model, and it relies fully on the laser signal, which makes it independent of ambient illumination and GPS signal. Moreover, it does not require additional sensors such as GPS or inertial sensors. Nonetheless, the present invention does not exclude adding further sensors if deemed useful; optional sensors may be added to enrich the generated model (e.g. color cameras). Furthermore, although the device is capable of providing on-line and real-time results to the user, it is further foreseen to use the acquired data and to process it further off-line, e.g. for refinement of the acquired 3D model for future localization and change analysis.
- the device according to the present invention may be used and is useful in numerous applications such as e.g. 3D (indoor) mapping/modelling, facility management, accurate, real-time indoor localization and navigation, design information verification, change analysis (e.g. for safeguards inspections), progress monitoring (e.g. for civil construction), disaster management and response, etc.
- the visualization and control unit is preferably a touch screen computer, more preferably a tablet computer.
- the mobile laser scanning device is most preferably a backpack or vehicle mounted device.
- the invention proposes the use of methods or of mobile laser scanning devices as described herein for 3D outdoor and indoor, preferably indoor mapping/modelling; facility management; accurate and real-time indoor localization and navigation; assistance to disabled or elderly people; design information verification; change analysis, such as for safeguards inspections; progress monitoring, such as for civil construction; or disaster management and response.
- a fifth aspect concerns a computer program product having computer executable instructions for causing a programmable device, preferably a mobile laser scanning device or its processing unit as described herein to execute one or more of the methods of the present invention.
- the invention also relates to a computer-readable medium, having stored therein data representing instructions executable by a programmed processor, the computer-readable medium comprising instructions for causing a programmable device, preferably a mobile laser scanning device of the invention or its processing unit, to execute one of the present methods.
- Fig. 1 Hardware components of a preferred embodiment of a mobile laser scanning device according to the present invention, called Mobile Laser Scanning Platform (MLSP system, or simply MLSP), comprising a 3D laser scanner 1, a backpack 2, a processing unit 3 (contained within the backpack, shown separately for illustration only) and a tablet 4.
- Fig. 2 Snapshot (black-and-white of an originally colored snapshot) of the user interface as it is provided to the user in real-time. It shows a tunnel environment which has been scanned at two points of time. In the actual color display, green indicates no change between the acquisitions and red indicates new constructions between the two acquisitions.
- Fig. 3 Effect of the loop closure on a sample track of the KITTI datasets.
- the trajectory is shown as estimated on-line and as the globally optimized trajectory.
- the (actual) map is colored according to the normal vectors of the points, with a different scheme for the two maps (e.g. the violet area being the local map).
- Fig. 4 Point selection time for both the local map (typically containing fewer than 1M features) and the global map (typically containing more than 1M features): Fig. 4(a) local map point selection time for different search radii and Fig. 4(b) global map point selection time for different search radii.
- Fig. 5 Preferred embodiment of proposed map representations. Full cells are displayed as dark gray boxes. Near cells are represented as light gray boxes with a line connecting their centroid with the associated nearest neighbor. Empty cells are displayed as white boxes.
- Fig. 6 Rotation histograms for a symmetric environment and for a non-symmetric one.
- Fig. 7 Inlier selection.
- Axes represent the main dominant dimensions of the detected transformations.
- Each point represents a candidate transformation, grayed according to the iteration in which it was marked as an outlier (some outlier transformations too far from the center have been omitted).
- Dark gray points in the central ellipses represent transformations marked as inliers.
- the ellipses represent the normal estimations at specific subsequent iterations.
- Fig. 8 Results of the floor extraction algorithm. Black points represent the scanner positions during acquisition. These locations have been used to automatically select the set of initial active cells.
- Fig. 9 Empirical parameter selection for the search space reduction, (left) deviation of the sensor with respect to the mean height to the floor observed during several walkthroughs, (right) deviation of the sensor with respect to the vertical axis (Z) observed during several walkthroughs.
- Fig. 10 Drift analysis using only place recognition (tracking by classification), where the classifier contains data related to multiple environments. The ground truth for this experiment is taken to be the final trajectory generated by the tracking module initialized in the correct building (see the text for a description of the adopted error metric).
- Fig. 11 (a) and (b): Two sample paths used to generate the drift analysis shown in Figure 10. Dashed line: the ground-truth path estimated using the complete system. Solid line: the path estimated using tracking by classification. The black circle shows the frame after which the user has been uniquely identified in a specific building.
- Fig. 12 Results of the proposed inlier selection algorithm.
- Fig. 13 Results of the odometer integration during a sample walk-through inside a building, where the sensor moves to a non-mapped room (A, illustrated on the right) without losing track of its position and then performs two loops outside the building (C and
- Fig. 14 Tracking accuracy comparison between the standard ICP (dashed line) and the proposed robust implementation (solid line) in an environment without outliers (upper) and in an environment with outliers (lower).
- Fig. 15 System overall performance during tracking for a backpack-mounted setup: solid gray lines show the time spent processing each frame (in seconds). The dashed horizontal line indicates the maximum execution time for real-time results using a Velodyne HDL-32E sensor (12 Hz).
- a basic workflow for a previously unknown (unscanned) location requires in principle two steps: (A) the construction of a 3D reference model at T0 and (B) the localization, tracking and change analysis based on the 3D reference model at T1. When revisiting such a location, or in cases where an appropriate map already exists, step (B) will be sufficient.
- the 3D reference map is built using a 3D SLAM (Simultaneous Localization And Mapping) implementation based on a mobile laser range scanner as described below.
- the main features preferably are:
- the SLAM framework (see section A.3. below) contains:
- the odometer typically runs in real time.
- the map optimization can be carried out in a post-processing step.
- the real-time localization, tracking and change analysis generally requires an existing 3D reference map of the environment which has previously been generated as described above.
- the main components preferably are
- the system identifies the current location inside the known environment with no prior knowledge of the sensor pose. It pre-computes simple and compact descriptors of the scene and uses an efficient strategy to reduce the search space in order to self-localize the sensor in real-time (see section B.2. below).
- the system starts tracking the sensor pose by registering the current observation (3D scan) inside the 3D reference map using the standard Iterative Closest Point (ICP) method.
- ICP Iterative Closest Point
- the system implements a number of improvements, e.g. it employs a data structure specially designed for fast nearest neighbor searches (see section B.3. below).
- Given the nearest-neighbor information in the data structure, MLSP can efficiently calculate the distance between each scan point in the current observation and the nearest point in the 3D reference model.
- the change analysis is performed by applying a simple threshold to this distance, e.g. each point in the current scan which does not have a corresponding neighbor in the reference model (or which has a corresponding neighbor that is farther than the threshold) is considered a change.
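This distance-threshold test can be sketched as follows, with a brute-force nearest neighbor search in place of the kd-tree-backed structure; the function name is illustrative:

```python
# Flag a scan point as changed when its nearest neighbor in the
# reference model is farther than the threshold (or absent entirely).
def detect_changes(scan, reference, threshold):
    changed = []
    for p in scan:
        d = min((sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                 for q in reference), default=float('inf'))
        changed.append(d > threshold)
    return changed
```

The resulting boolean flags would drive the change/no-change color coding shown in the user interface.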
- the real-time user interface shows the 3D reference model and the current observations which are color-coded according to a change/no-change classification.
- the present invention proposes an optimization method or framework for Simultaneous Localization And Mapping (SLAM) that properly models the acquisition process in a scanning-while-moving scenario.
- SLAM Simultaneous Localization And Mapping
- Each measurement is correctly reprojected into the map reference frame by considering a continuous-time trajectory which is defined as the linear interpolation of a discrete set of control poses in SE(3).
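As a simplified, planar illustration of such interpolation-based reprojection; full SE(3) interpolation would also blend rotations, e.g. via quaternions, but here a single yaw angle stands in, and all names are assumptions:

```python
import math

# Interpolate between two control poses (planar simplification): the
# translation is interpolated linearly and a single yaw angle stands in
# for the rotational part of SE(3).
def interpolate_pose(pose_a, pose_b, t):
    """pose = (x, y, yaw); t in [0, 1] between the two control poses."""
    (xa, ya, ra), (xb, yb, rb) = pose_a, pose_b
    return (xa + t * (xb - xa), ya + t * (yb - ya), ra + t * (rb - ra))

def to_map_frame(point, pose):
    """Reproject a sensor-frame 2D point into the map frame using the
    interpolated pose at the point's acquisition time."""
    x, y, yaw = pose
    px, py = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (x + c * px - s * py, y + s * px + c * py)
```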
- the invention also proposes a particularly efficient data structure that makes use of a hybrid sparse voxelized representation, allowing large-map management. Thanks to this, the inventors were also able to perform global optimization over trajectories, resetting the accumulated drift when loops are performed.
- SLAM formulations have been proposed for standard cameras, depth cameras and laser scanners.
- Most SLAM systems based on laser scanners use variations of the Iterative Closest Point (ICP) algorithm to perform scan alignments.
- a review of ICP algorithms focused on real-time applications can be found in S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm," in 3DIM, 2001.
- Real-time moving 3D LIDAR sensors, such as Velodyne scanners, have recently gained popularity: these devices have a high data rate, often provide a complete 360° horizontal field of view and have good accuracy on distance measurements.
- Such sensors acquire measurements while moving and thus represent non-central projection systems that warp acquired frames along the trajectory path. Aligning the resulting point clouds requires a proper treatment of the warping effect on the 3D points.
- the SLAM framework proposed in F. Moosmann and C. Stiller, "Velodyne SLAM," in IVS, 2011, unwarps each cloud given the current speed of the sensor, performs ICP and unwarps the points again with the newly estimated speed.
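A sketch of that unwarp step, assuming a constant translational velocity and per-point timestamps; rotational motion is omitted for brevity, and the names are not from the cited paper:

```python
# Unwarp a frame acquired while moving: with an estimated constant
# velocity, each point is shifted according to its acquisition time so
# the whole frame looks as if taken from the final sensor pose.
def unwarp(points_with_time, velocity, frame_end_time):
    vx, vy, vz = velocity
    out = []
    for (x, y, z), t in points_with_time:
        dt = frame_end_time - t    # time remaining until frame end
        # the sensor keeps moving for dt, so in the end-of-frame
        # reference the point appears shifted backwards by v * dt
        out.append((x - vx * dt, y - vy * dt, z - vz * dt))
    return out
```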
- the LOAM algorithm (J. Zhang and S. Singh, "LOAM: Lidar odometry and mapping in real-time," in RSS, 2014) performs a continuous estimation of the motion by focusing on edge and planar features to remove the warping effect in each cloud. When a complete frame is generated, it unwarps the final point cloud using the predicted final pose.
- a local window mechanism that optimizes a trajectory fragment composed of a set of poses and their associated point clouds with respect to the map built up to the last registered set. Points are converted into world coordinates using pose interpolation in the SE(3) group, and a generalization of ICP is used to find the trajectory that best aligns all the points to the map. In this formulation, the unwarp operation is part of the optimization strategy.
- An important aspect of SLAM systems is their scalability to large environments and real-time management of the map to support the optimization routine.
- the invention focuses on a data structure that natively handles 3D points and that is based on a hybrid structure composed of a sparse voxelized structure, which is used to index a compact, dense list of features. This allows constant-time random access in voxel coordinates independently of the map size and efficient storage of the data with scalability over the explored space.
- the presently proposed structure is capable of maintaining the entire global map in memory and of updating local sections in case graph optimization is employed (e.g. to perform loop closures).
- Main contributions of some embodiments of the invention are (i) the use of a generalized ICP algorithm incorporating the unwarping in the estimation process, and (ii) the use of an efficient structure for map management that allows both fast spatial queries and large-environment management.
- the inventors have validated their approach using publicly available datasets and additional acquired indoor/outdoor environments.
- Section A.2. presents the data structure for map management and its available operations; Section A.3. presents the optimization framework; Section A.4. shows experimental results obtained with this method; and Section A.5. draws some conclusions.
- a data structure suited for real-time SLAM applications should provide (i) random sample access in constant time (on average) to stored features, (ii) exhaustive feature iteration in linear time w.r.t. the number of elements stored and (iii) fast nearest neighborhood searches given a query feature. Moreover, it should provide (iv) scalability over the explored space and (v) it should efficiently support feature addition and removal.
- Property (i) is generally associated with dense voxel representations, where the memory required for scalability (iv) is the major drawback and exhaustive explorations (ii) are slow.
- Property (ii), conversely, is associated with sparse structures, where memory requirements (iv) are very low, but random access times (i) are slow (logarithmic in the case of kd-trees).
- the proposed preferred map structure maintains five different representations of the stored data. Consistency between the internal data representations should be guaranteed after each map update.
- a sparse voxel representation V built with a parametrizable cell size, that stores in each cell v_i ∈ V the index of its corresponding feature in L.
- Features in L and cells in V are related in a one-to-one manner, based on the position of l_i and the cell size of V.
- the present sparse voxel representation is based on an OpenVDB structure (K. Museth, "VDB: High-resolution sparse volumes with dynamic topology," ACM Transactions on Graphics, vol. 32, no. 3, 2013).
- a kd-tree, K that is used to perform nearest neighborhood searches on the map. K only stores references to the dense list L to keep its memory footprint low.
- the kd-tree can be built on a local region of the map if required (e.g. following an area around the last observation location).
- the proposed data structure is modified as follows: consider the feature's world position, p_w, and compute its corresponding voxel cell, v_i. If the cell is already filled (v_i ≥ 0), its associated information is retrieved from l_{v_i} and the value is updated if required. Otherwise (v_i < 0) a new feature is added to the structure. To do so, the insertion position, j, in L is computed as follows:
- the inventors propose in a particularly preferred embodiment to introduce a compact operation that populates the holes with the last elements in the lists by performing a swap in both the L and M vectors. Affected values in V are then updated according to the new positions, and last is moved to the new last element of the compacted list.
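The add/remove behaviour of the hybrid structure, including the swap-with-last compaction, can be sketched as follows (a Python dictionary stands in for the OpenVDB sparse voxel grid; names are illustrative, not the patent's):

```python
import numpy as np

class HybridVoxelMap:
    """Sketch of the hybrid map: a sparse voxel index (hash map) pointing into
    a compact dense list of features. Removal swaps the hole with the last
    element so the dense list stays contiguous (the 'compact' operation)."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.voxels = {}     # voxel coords -> index into the dense lists
        self.positions = []  # dense list L: feature positions
        self.keys = []       # reverse map M: index -> voxel coords

    def _key(self, p):
        return tuple(int(np.floor(c / self.cell_size)) for c in p)

    def add(self, p):
        k = self._key(p)
        if k in self.voxels:              # cell already filled: update in place
            self.positions[self.voxels[k]] = p
        else:                             # append at the end of the dense list
            self.voxels[k] = len(self.positions)
            self.positions.append(p)
            self.keys.append(k)

    def remove(self, p):
        k = self._key(p)
        i = self.voxels.pop(k, None)
        if i is None:
            return
        last = len(self.positions) - 1
        if i != last:                     # swap hole with last element
            self.positions[i] = self.positions[last]
            self.keys[i] = self.keys[last]
            self.voxels[self.keys[i]] = i
        self.positions.pop()              # shrink the compacted lists
        self.keys.pop()
```

Both addition and removal are O(1) on average, and iterating `positions` visits every stored feature in linear time, matching properties (i) and (ii) above.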
- inner features may be selected by looping over the elements stored in L (linear cost in the number of samples in the map) and the kd-tree K is rebuilt.
- Elements in K only store a reference to the associated features in L , thus K memory space is kept small (linear in the number of features present in the area of interest) and constant on average. The same operation can be performed without iterating over the entire list by visiting the voxel structure. The inventors investigate in the experimental section the differences between these two mechanisms.
- cloud additions are preferably postponed until a new kd-tree is required.
- already existing features in the map outside the area of interest are deleted, creating new holes.
- postponed clouds are added, by only adding the features that are inside the interest area. This way, previously created holes are filled with the new samples in constant time. If after all the additions there are still holes (more features were deleted than added), a compact operation may be performed, with a linear cost with respect to the remaining number of holes.
- K is rebuilt using the elements of L and can be used until a new one is required.
- a preferred optimization framework of the invention is composed of two consecutive modules: an odometer that estimates the pose of each cloud given a map, and a local trajectory optimizer that refines the trajectory of a set of clouds. Both modules employ the map data structure described herein to handle the growing map.
- Each feature stored in the map M is composed of a point's world position p_w, its unit normal vector n_w and additional information (e.g., reflectance). The latter is not used in the registration steps.
- This framework can also be extended to perform a global trajectory optimization that allows reconstructing an entire map of the environment taking advantage of loop closures.
- the input of such a framework is a set of 3D point clouds {C_i} produced from the data streamed by the sensor (in the case of a Velodyne scanner, the point cloud is generated after a complete revolution of the sensor).
- Relative timestamps are assigned such that the first point produced has timestamp 0 and the last one has timestamp 1.
- Normal unit vectors may be estimated with the unconstrained least squares formulation proposed in H. Badino, D. Huber, Y. Park, and T. Kanade, "Fast and accurate computation of surface normals from range images," in ICRA, 2011, taking advantage of box filtering on the point cloud grid structure.
- Each registered cloud C with its associated pose T*_ODO is added to a list of registered clouds RC_ODO:
- This module takes as input the list of clouds with their associated poses RC_ODO and performs a trajectory refinement by employing a local window approach.
- the objective function E(T) minimized in this step is the sum of the individual alignment errors of each cloud E_i(T):
- T_t represents the world pose interpolated at time t, associated with the point p_s selected for the registration.
- p_w denotes the estimated world coordinates of the current point, while the remaining two quantities are, respectively, its closest point retrieved from the map and that point's associated normal.
- the entire objective function is minimized by alternating a Gauss-Newton step and the search for new correspondences in the map, until a convergence criterion is satisfied or a maximum number of iterations is reached.
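The alternation of correspondence search and Gauss-Newton steps can be illustrated with a minimal rigid point-to-plane ICP. This sketch omits the patent's motion interpolation and uses a brute-force nearest-neighbour search, so it illustrates the general technique rather than the claimed method; all names are ours:

```python
import numpy as np

def point_plane_icp(src, dst_pts, dst_nrm, iters=20):
    """Minimal rigid point-to-plane ICP: alternate nearest-neighbour association
    (brute force here) with one Gauss-Newton step on the linearized residuals
    r_i = n_i . (R p_i + t - q_i), using the small-angle model R ~ I + [w]x."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # correspondences: closest destination point for each source point
        d2 = ((moved[:, None, :] - dst_pts[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        q, n = dst_pts[nn], dst_nrm[nn]
        # accumulate the 6x6 normal equations; unknowns x = (w, dt)
        A, b = np.zeros((6, 6)), np.zeros(6)
        for p, qi, ni in zip(moved, q, n):
            J = np.concatenate([np.cross(p, ni), ni])  # d r / d (w, dt)
            r = ni @ (p - qi)
            A += np.outer(J, J)
            b -= J * r
        x = np.linalg.solve(A + 1e-9 * np.eye(6), b)
        w, dt = x[:3], x[3:]
        Wx = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
        U, _, Vt = np.linalg.svd(np.eye(3) + Wx)       # small-angle update,
        dR = U @ Vt                                    # re-projected onto SO(3)
        R, t = dR @ R, dR @ t + dt
    return R, t
```

In the patent's generalized formulation, each residual additionally depends on the pose interpolated at the point's timestamp, so the unwarping is solved jointly with the alignment.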
- the Jacobians of the terms in the objective function are evaluated by applying the composition rule. Each term E_i(T) in Equation 1 involves a pair of poses (the initial and final pose of its cloud).
- the list RC_REF can be updated with the optimized poses. Then, the entire set of points and normals of the clouds is converted into world coordinates according to Equation 3 and added to the map M. At this stage one takes advantage of the efficient strategy to update the local map described in section A.2.: before adding points, one first deletes from the map all points that are further than a given radius from the last trajectory pose, and then adds the transformed clouds from RC_REF. Once the map is updated, a new kd-tree is created on the resulting points to allow subsequent nearest neighbor searches. The list RC_ODO is cleared and the odometer guess for the next cloud registration is updated according to the last two poses of RC_REF.
- the proposed formulation represents an adherent description of the real sensor model, which acquires points while moving: point transformations into world coordinates involve both the initial and final poses of each cloud. Moreover, the estimation of each pose (apart from the first and the last) is directly influenced by two clouds.
- the proposed framework can be extended to perform an off-line global optimization of the trajectory.
- a limit of the proposed local trajectory optimizer lies in its inability to refine points (and consequently poses) that have already been added to the map. This limitation is generally acceptable when exploring environments at local scale but, when moving in very large environments, drift can accumulate. For these cases, global optimization techniques that exploit loop closures or external absolute measurements have to be taken into account.
- the inventors propose a global trajectory optimization that makes use of an enriched map description: for each feature in the map, one adds to its position and normal in world coordinates (p_w and n_w) the original coordinates of the point p_l and the normal unit vector n_l in the local sensor reference frame, the relative timestamp t and the index ID of the cloud that originated it. It can be noticed that, given the cloud index and a trajectory, the local coordinates of points and normals are redundant information, but the inventors prefer to store them to avoid recomputations.
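Such an enriched feature record could look as follows (field names are illustrative assumptions, not the patent's notation):

```python
from dataclasses import dataclass

@dataclass
class MapFeature:
    """Sketch of the enriched feature used for global optimization: world and
    local coordinates are both stored (the local ones are redundant given the
    trajectory, but storing them avoids recomputation)."""
    p_world: tuple   # point position in world coordinates
    n_world: tuple   # unit normal in world coordinates
    p_local: tuple   # position in the local sensor reference frame
    n_local: tuple   # normal in the local sensor reference frame
    t: float         # relative timestamp within the originating cloud, in [0, 1]
    cloud_id: int    # index ID of the cloud that originated the feature
```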
- the inventors also propose to employ two maps, M_l and M_g, respectively a local and a global map.
- M_l is used by the odometer and the local trajectory optimizer modules. When one needs to remove points from M_l, one moves them to the global map M_g instead.
- the selected correspondences used in the generalized ICP are added to a list
- T^w_q = T_ID · exp(t_q · log(T_ID^-1 · T_{ID+1}))
- p^w_NN = T^w_NN · p^l_NN , n^w_NN = R^w_NN · n^l_NN
- Equation 5 still represents a generalized point-plane ICP, where both the query and the model point are expressed in local coordinates and transformed into world coordinates with the poses associated to their clouds and the interpolation timestamps.
- This optimization can be applied to a complete sequence of clouds to refine an entire trajectory.
- the correspondences representing the loop allow estimating a trajectory that refines the entire poses, constraining the loop to close correctly.
- a first dataset was acquired by an operator carrying the sensor while exploring an indoor environment of about 10 x 35 x 3 meters.
- a second dataset was acquired in an indoor industrial building of about 16 x 65 x 10 meters.
- a third dataset was acquired with the sensor mounted on the roof of a car while driving in normal traffic conditions performing four loops in a town district, each one about 500 meters long.
- The Kitti training datasets also make available a GPS-measured ground truth for each single track.
- the provided 3D point clouds, though, have already been unwarped using the estimated motion of the on-board odometry system. For this reason, the inventors made use only of those training tracks for which the native raw data was available.
- the local trajectory optimization can be employed to generate high definition local 3D models of the acquired environments.
- the inventors have processed the two indoor datasets using a voxel resolution of 1 cm with a threshold to trigger the local optimization of 2 m. This results in approximately 8 million points for the first dataset and approximately 24 million for the second.
- a reference model has been created by pairwise registering scans of the environment taken with the high resolution ZF 5010C scanner using the method of J. Yao, M. R. Ruggeri, P. Taddei, and V. Sequeira, "Automatic scan registration using 3d linear and planar features," 3D Research, vol. 1 , no. 3, pp. 1-18, 2010.
- the inventors have accurately registered the two models and computed the point-point distances between them. No visible distortions are present in the models and the histograms of the distances between the two clouds have peaks lower than 0.02 m, which is within the nominal accuracy of the Velodyne HDL-32E sensor used.
- the inventors have run the present framework on all Kitti training datasets using as input data the raw readings of the sensor (10 tracks in total). Moreover, to demonstrate the benefit of incorporating the sensor motion in the optimization framework, they have also run the present system on the same tracks but employing the official preprocessed clouds of the datasets (unwarped using the estimated motion of the on-board odometry system). In this case the inventors did not perform any unwarp during the optimization (i.e., they used only the odometry module). For these experiments they used a voxel size of 15 cm in the maps and they did not perform loop closures.
- Figure 2 shows both experiment results in terms of average relative translation and rotation error, generated using trajectory segments of 100 m, 200 m and 800 m length (refer to H. Strasdat, "Local accuracy and global consistency for efficient slam," Ph.D. dissertation, Imperial College London, 2012 for a description of the adopted error metric). Incorporating the cloud unwarping into the optimization framework clearly yields better results and reduces both translational and rotational drift (in particular, the translation error improved by 0.3 percentage points on average). Notice that the current state-of-the-art algorithm for the Kitti benchmark that only employs LIDAR data (LOAM) performs better. It must be noted, though, that it has been validated directly on the original unwarped point clouds and that it processes clouds only at 1 Hz.
- a notable difference w.r.t. the original work [Velodyne SLAM, in IVS, 2011] is the presence of a map refinement strategy (called adaption) based on a set of heuristic tests that positively influence the trajectory estimation. Since the present system focuses on the optimization strategy by properly modeling the problem, the inventors disabled this feature in the original work to focus the analysis on the trajectory estimation. Results after each loop are shown in Table I. It can be noticed that one accumulates less drift than the original work. Moreover, the present system is a natural formulation of the problem that requires fewer configuration parameters than the heuristic strategies of the Velodyne SLAM.
- Performance of the present system is superior to the Velodyne SLAM system both in terms of execution time and in its ability to maintain a global map of the environment, while in the original work only a local map is maintained.
- the ability to use the global map has also been confirmed: loop closure and the global optimization technique are used to correct the drift accumulated in the first loop, and the global map is then used for the subsequent loops.
- In order to evaluate the performance of the proposed map representation, the inventors have measured the execution time of each operation while running the outdoor car dataset on a PC equipped with an Intel Xeon E5-2650 CPU.
- addition operations are performed in linear time w.r.t. the number of features added to the map, with an average time of 36.4 ns per feature, which gives an average cloud insertion time of 1.81 ms for the HDL-32E sensor.
- Features to be deleted are selected by performing a radius search around the point of interest (e.g. the last estimated sensor pose) and added to the global map. Results show a constant deletion time per feature of 30.84 ns on average.
- Selection of features to be deleted from the local map can be performed in two manners: by using the voxel structure or by iterating over the dense list.
- Figure 4(a) shows the average search times based on the number of features stored in the map and the search radius.
- using the dense list always provides the same performance (linear in the number of features stored, independently of the search radius).
- voxel search times increase as the radius does and, in all cases, yield worse results.
- the system is able to process clouds at 12.93 Hz (i.e., in real time w.r.t. the Velodyne acquisition rate) when the local trajectory optimization is not active, while the frequency decreases to 7.5 Hz using the local trajectory optimization, which is close to real time. It has to be noticed that the registration and the local optimization are not implemented as multi-threaded, thus the inventors expect that performance can be increased in both the odometer and the local optimization.
- the present document presents a framework for local optimization of point clouds acquired using moving lasers.
- the inventors incorporated the acquisition motion into the optimization by interpolating each acquired point cloud between its starting and ending position.
- the inventors experimentally showed, using publicly available datasets, that by correctly modelling the sensor movement it is possible to reduce odometry estimation errors.
- [00115] Moreover, they present an efficient data structure to manage large voxelized 3D maps constituted by sparse features.
- the map data structure is suited for both local map optimization and for offline global optimization.
- Their experiments show that, for the former problem, such a structure provides real-time odometry and nearly real-time local refinement. These performances may even be enhanced by taking advantage of multi-thread operations when local trajectory optimization is performed (e.g., nearest neighbor search, cloud unwarping).
- a preferred system performs an efficient selection of points to be used in the registration process that ensures good geometric stability for the ICP algorithm. Then, a strategy to efficiently discard outliers ensures that registration is performed only using correspondences that are globally consistent (inliers).
- the present preferred framework fuses into the registration process against the ground truth model a robust odometer that is capable of real-time tracking even when the user leaves the map or the observed environment differs too much from the initially acquired model (e.g. furniture was changed). By re-entering the known map, the system automatically recovers the correct position and thus avoids drift accumulation.
- [00120] B.1. Main benefits of the preferred embodiments described below
- Section B.2. presents a preferred online place recognition and relocalization strategy
- Section B.3. shows how to perform online tracking once the user pose has been identified in a known environment.
- Section B.4. presents experimental results and finally Section B.5. draws the conclusions.
- the place recognition component deals with recovering an initial estimate of the user location and orientation without a priori information. It is able to run online at frame rate to provide candidate locations given the current sensor observation. Moreover, for scalability purposes, it should not make use of the map model during execution since it might provide candidate poses related to distant locations (and thus not loaded in memory), or even different maps.
- a pre-processing stage is preferably introduced in order to (1) reduce the search space of available poses and (2) train a robust and compact classifier that, given an observation, efficiently estimates the possibility of being in a specific location.
- V cellSize in (6) stands for the maximum step distance (e.g. 0.5 meters for a walking motion, or cellsize for a car motion)
- the threshold in (7) stands for the maximum vertical step size
- C(l) in (8) stands for the simplified volume of the observer, centred over the floor cell g_i (a bounding cylinder in the present implementation).
- Initial reachable cells can be provided manually but, since the generation of the map is preferably performed by placing the scanner over reachable cells, this initialization can be automatically performed assuming floor cells below the acquisition positions as reachable.
- detecting all floor cells F' ⊆ F is performed in a flooding-algorithm style, as illustrated by the algorithm in Table III where, initially, A stores the first set of reachable cells.
- Table III Flooding floor extraction.
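A minimal sketch of such a flooding-style floor extraction, assuming a 2D grid of per-cell floor heights and a maximum step size (the actual Table III algorithm also applies the observer volume constraint C(l), omitted here):

```python
from collections import deque

def flood_floor(height, seeds, max_step=0.2):
    """Flood-fill style floor extraction over a 2D grid of cell heights:
    starting from known reachable seed cells, a 4-neighbour is accepted as
    floor if its height differs from an accepted cell by at most max_step.
    'height' maps (row, col) -> floor height; missing cells are untraversable."""
    reachable = set(seeds)
    queue = deque(seeds)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in height and (nr, nc) not in reachable \
                    and abs(height[(nr, nc)] - height[(r, c)]) <= max_step:
                reachable.add((nr, nc))
                queue.append((nr, nc))
    return reachable
```

Seeding with the floor cells below the acquisition positions, as described above, makes the initialization automatic.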
- In order to further reduce the navigable space without loss of precision, the inventors also propose to introduce physical constraints related to the particular operability of the system (e.g. vertical and angular limits on the possible sensor pose for a specific sensor mounting), which provide an effective navigable space N* ⊆ N.
- constraints are empirically selected by running a set of experiments on sample datasets (see Section B.4.).
- the pose classifier is used to recover the most possible locations given the current observation.
- the inventors split the process into two different stages: the initialization, which deals with the estimation of possible locations when no a priori information is available, and the update, which deals with the evaluation of candidate locations and their resampling.
- [00138] Given the last sensor observation, one computes its associated descriptor q and recovers a set of candidate locations Γ by performing a radial search on T given a threshold r in the descriptor space. In the case of sensors providing a 360° horizontal field of view, one may increase the candidate locations by computing additional input descriptors obtained by horizontally shifting the range values. Each such descriptor corresponds to the readings that the sensor would produce if rotated by the corresponding shift angle about its local axis. Each resulting set of candidate locations is then rotated accordingly.
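The candidate-generation step can be sketched as follows. The descriptor itself is left abstract (any fixed-length vector computed from the range readings works), and a brute-force radial search stands in for the kd-tree used in practice; names are illustrative:

```python
import numpy as np

def shifted_descriptors(ranges, n_shifts):
    """For a 360-degree sensor, horizontally shifting the range readings
    simulates rotating the sensor about its vertical axis; each shift yields
    one extra query descriptor paired with the rotation angle it encodes."""
    n = len(ranges)
    out = []
    for i in range(n_shifts):
        shift = i * n // n_shifts
        angle = 2 * np.pi * shift / n
        out.append((np.roll(ranges, shift), angle))
    return out

def candidate_locations(query, train_desc, train_poses, radius):
    """Brute-force radial search in descriptor space (a kd-tree would be used
    in practice): return the training poses whose descriptors lie within
    'radius' of the query descriptor."""
    d = np.linalg.norm(train_desc - query, axis=1)
    return [train_poses[i] for i in np.flatnonzero(d <= radius)]
```

Candidates retrieved through a shifted descriptor are then rotated by the associated angle before being inserted into Γ.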
- weights are normalized to have a maximum value of 1.
- Equation (9) computes the weight associated with each possible location using the Bayes theorem expressed in possibility theory, as in Dubois, D., 2006, "Possibility theory and statistical reasoning," Computational Statistics and Data Analysis 51(1), 47-69, the Fuzzy Approach to Statistical Analysis. The individual terms of (9) are described below.
- Equation (10) estimates the possibility of the descriptor q, given the pose, in the same way as in step 2 of the initialization stage (in case of multiple input descriptors, each must be taken into account individually). Equation (11) evaluates the likelihood of being at a given pose by finding the most compatible location in the set of potential locations Γ. This compatibility is defined as the weighted relative distance (Equation (12)) between the previous potential pose T_k and the current pose. Equation (13) estimates the distinctiveness of the current observation by comparing the number of neighbours retrieved w.r.t. the size of the training set; e.g. extremely ambiguous poses, like in corridors, will produce many results, resulting in high ambiguity.
- the update stage is iterated until the potential poses converge to a single location, i.e. when the covariance of the centroid of Γ, computed according to the weights w, is small. At this point one considers the problem solved and the pose tracking component is started.
- the preferred system outlined above is based on an iterative re-weighting of possible locations with fast bootstrapping that uses a single sensor observation.
- a key factor for scalability to large maps is the pre-computation of lightweight descriptors from the reference maps and their organization in a kd-tree structure with associated poses. This way, queries in the descriptor space are used to efficiently populate the system with candidate locations given the first observation. Then, in subsequent update steps the estimated motion and queries in the descriptor space are used to draw a new set of possible locations and their associated weights.
- the present place recognition component instead, only needs a fast and rough pose estimation, since precise pose tracking is performed by the subsequent tracking component (Section B.3.) once a unique location has been identified. Moreover, the present system only has to ensure that possible locations are not discarded and thus does not require a precise sensor pose probability density estimation. For this reason, one does not require a dense sampling of the navigable space, as Section B.4. shows. However a low sampling density may lead to tracking loss in certain cases due to wrong particle initialization. This problem is overcome by drawing a new set of particles each time the update stage is performed.
- the pose tracking component deals with computing the local motion of the sensor as it moves around the environment. Knowing the previous estimated pose, when a new acquisition is received, the inventors perform a local registration between the map and the observed points. From the resulting transformation, the implicit motion is inferred and applied to the previously estimated pose.
- a compact list of planes, L, and a dense grid of voxels, V, built with a specific voxel size.
- Each plane l_i ∈ L stores a position in world coordinates, p_i, and a unit normal, n_i.
- Each voxel v_i ∈ V stores a current state that can be either full, empty or near.
- Full voxels store an index to the plane l_{v_i} ∈ L whose associated position falls into the voxel.
- reference map points belonging to the voxel are used to estimate the plane parameters.
- Empty cells store a null reference, and near cells store an index to the plane l_{v_i} ∈ L whose associated position has the smallest distance d_v to the voxel centre. Notice that the inventors preferably consider a voxel near only if the distance d_v is under a given threshold d_max; otherwise the voxel is considered empty.
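A hedged sketch of building such a full/empty/near voxel grid from a list of plane positions (the plane-fitting step is omitted; each cell simply records the index of its own plane, the index of the closest plane within d_max, or -1 if empty):

```python
import numpy as np

def build_state_grid(plane_positions, grid_shape, cell_size, d_max):
    """Sketch of the tracking map: a cell holding a plane is 'full', a cell
    whose centre lies within d_max of some plane is 'near' (storing the index
    of the closest one), and every other cell is 'empty' (encoded as -1).
    This gives constant-time nearest-plane lookups for the registration."""
    index = -np.ones(grid_shape, dtype=int)          # -1 encodes 'empty'
    planes = np.asarray(plane_positions, dtype=float)
    # full voxels: the plane position falls inside the cell
    for i, p in enumerate(planes):
        index[tuple((p // cell_size).astype(int))] = i
    # near voxels: store the closest plane if within d_max of the cell centre
    for cell in np.ndindex(grid_shape):
        if index[cell] >= 0:
            continue
        centre = (np.array(cell) + 0.5) * cell_size
        d = np.linalg.norm(planes - centre, axis=1)
        j = int(d.argmin())
        if d[j] <= d_max:
            index[cell] = j
    return index
```

During tracking, a point is mapped to its voxel in constant time and immediately yields the plane against which its point-plane distance is evaluated.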
- According to Equation (14), the error introduced in the point-plane distance is proportional to the point's distance w.r.t. the sensor and to the angle between its normal and the viewing ray. This leads to selecting far points and points whose normals are as perpendicular as possible w.r.t. the viewing ray.
- a moving laser produces non-uniformly distributed points; in particular, distant areas are acquired with a lower point density and thus provide poorly estimated normals.
- the angles between viewing rays and normals vanish.
- the inventors preferably solve these problems by explicitly distinguishing between translations and rotations. In order to properly constrain translations, they consider only point normals. They compute the covariance matrix for translations, C_t, as:
- Equation (15) weights the influence of a given point normal according to its perpendicularity to the rotation axis (the more perpendicular the higher the weight).
- the numerator estimates the quality of locking rotations about x by computing the angle between the vector connecting the centre of rotation to the point, and the vector perpendicular to the plane defined by the normal and the rotation axis.
- the denominator normalizes the result to the range [0..1], so point selection is independent of the distance to the sensor.
- the inventors initially consider the bins related to translations described above. Then they evaluate whether rotations are properly defined over all the axes. If this is not the case for a given axis, they add a bin containing the points that best constrain such rotation, i.e. points with the largest d_i values.
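The rotation-constraint score can be sketched with a torque-style quantity. This is a hedged reconstruction, assuming d_i measures how strongly a point-plane pair resists rotation about a given axis, normalized by the distance to the rotation centre so that far points are not automatically preferred:

```python
import numpy as np

def rotation_lock_score(p, n, axis, centre):
    """Torque-style score |((p - c) x n) . a| / |p - c|: high when the
    point-plane pair strongly resists rotation about axis a, and independent
    of the distance to the sensor thanks to the normalization (an assumption
    standing in for Equation (15), whose exact form is not reproduced here)."""
    r = p - centre
    return abs(np.cross(r, n) @ axis) / np.linalg.norm(r)

def best_rotation_points(points, normals, axis, centre, k):
    """Select the k point-normal pairs that best constrain rotation about
    the given axis, as candidates for the extra registration bin."""
    scores = [rotation_lock_score(p, n, axis, centre) for p, n in zip(points, normals)]
    return np.argsort(scores)[::-1][:k]
```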
- Figure 7 shows the results after the proposed inlier selection strategy. Notice how all independently computed transformations are distributed around a well defined central position. Also notice that, after each iteration of outlier removal, the distributions quickly converge to the final estimated transformation, when considering all the correspondences marked as inliers.
- [00183] Odometer integration
- the inventors preferably combine their proposed sensor tracking component with an odometer.
- s corresponds to the voxel cell size and compensates for the different resolution between the voxelized ground truth map and the non-discretized kd-tree of the previously fixed cloud.
- Main benefits are that (a) surfaces missing in the reference map can be exploited during the registration process and that (b) the system allows exploring non-mapped areas by continuously tracking the user.
- the pose tracking component has been evaluated by isolating each one of its components and generating individual results (map representation, point selection, inlier selection and odometer integration) and by measuring the overall accuracy and performance of the complete system.
- Table V shows the memory footprint of each dataset (fourth column), considering the voxel size (second column) and the dimensions of the complete voxel structure (third column). Notice that for the industrial building (c), two cases are considered: one that extends the original map by adding information about the exterior, (c)-o, and the original map where only the interior is stored (c)-i.
- Table V Map sizes for the different datasets.
- Table VI compares nearest neighbour searching times of the proposed map representation w.r.t. a standard kd-tree. For this test, both structures contained the same number of points and queries were performed using the same data. Results in columns 2 and 3 are expressed in nanoseconds per point and represent the average time considering all queries. Column 4 shows the average nearest neighbour error of the proposed map representation, due to the discretization of the space. Column 5 shows the total number of points in the map.
- Table VI Map average nearest neighbour computation times and errors for the proposed map representation compared with a standard kd-tree implementation.
- to evaluate the inlier selection strategy, the inventors proceeded as follows: they mounted a Velodyne HDL-32E sensor on a tripod without moving it. The first frame was used as reference model and, during the rest of the experiment, outliers were progressively added (e.g., people were moving around and objects were moved). This way, they could classify inlier correspondences by evaluating the distance between each point and its nearest neighbour in the reference frame.
- Figure 4 shows the final precision of the present inlier selection strategy w.r.t. the number of outliers in the input cloud. Notice how, when the total number of outliers is below 20%, the present precision is almost always 100% (no wrong correspondences are selected for registration). As the overall number of outliers increases, precision decreases. On average, the proposed experiment had 27.83% wrong correspondences, which led to a precision of 98.97%.
- Figure 14 shows the average point-plane distances when moving inside an outlier-free scenario for both the classical point-plane ICP algorithm and the present robust ICP.
- the absence of outliers was ensured by performing the acquisitions immediately after scanning the ground truth model, represented using a voxel cell size of 10 cm. Residuals for both approaches are almost identical, peaking at 2 cm, which is within the nominal accuracy of the Velodyne HDL-32E sensor.
- Figure 14 shows significant differences when changes are introduced into the environment.
- the track was recorded after refurbishing the environment.
- the present robust ICP implementation provides much better results than using the classical point-plane ICP, due to the efficient selection of inlier correspondences.
- the residuals peak at 3 cm due to the interference of the outliers in the point-plane distance estimation when computing the shown histogram.
- Figure 15 shows the execution time spent in registering each cloud and computing the new pose of the sensor for a walking setup scenario. This process normally takes between 20 and 30 ms but, at some frames, a peak of around 100 ms is observed. These peaks are related to the kd-tree generation for the odometer, which is triggered when a fixed distance has been travelled since the last update of the tree. The faster the sensor moves, the more this event affects the overall performance. To avoid frame dropping, the kd-tree generation and the odometer run in different threads, and the latter uses the last available kd-tree until the new one is ready.
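The double-buffered rebuild pattern described above can be sketched as follows (a brute-force index stands in for the kd-tree so the sketch stays dependency-free; the point is that queries keep using the last finished structure while the new one is built in a worker thread):

```python
import threading
import numpy as np

class BackgroundIndex:
    """Sketch of the double-buffered index update: 'nearest' keeps serving
    queries from the last finished snapshot while 'rebuild_async' prepares a
    new one in a worker thread, so the tracking loop never blocks on the
    (slow) rebuild. The swap under the lock is the only synchronized step."""
    def __init__(self, points):
        self._points = np.asarray(points, dtype=float)
        self._lock = threading.Lock()
        self._worker = None

    def rebuild_async(self, points):
        """Start rebuilding from new points; returns immediately."""
        def build():
            snapshot = np.asarray(points, dtype=float)  # the 'expensive' build
            with self._lock:                            # atomic swap when ready
                self._points = snapshot
        self._worker = threading.Thread(target=build)
        self._worker.start()

    def wait(self):
        """Block until the pending rebuild (if any) has finished."""
        if self._worker is not None:
            self._worker.join()

    def nearest(self, q):
        """Brute-force nearest neighbour on the current snapshot."""
        with self._lock:
            pts = self._points
        return pts[np.linalg.norm(pts - q, axis=1).argmin()]
```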
- the present invention presents a complete system with preferred embodiments to assist in indoor localization applications that provides real-time results and scales well to the map size, allowing the exploration of very large environments.
- Pose tracking is performed using an efficient map representation that provides constant nearest neighbour searching times and keeps memory requirements low.
- the present registration algorithm provides robust results by (1) selecting points that ensure geometric stability, (2) efficiently discarding outliers and (3) being fused with a local odometer which allows using points not present in the reference map for registration and navigating through non-mapped areas.
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2982044A CA2982044C (en) | 2015-04-10 | 2016-04-11 | Method and device for real-time mapping and localization |
AU2016246024A AU2016246024B2 (en) | 2015-04-10 | 2016-04-11 | Method and device for real-time mapping and localization |
JP2017567094A JP7270338B2 (en) | 2015-04-10 | 2016-04-11 | Method and apparatus for real-time mapping and localization |
KR1020177032418A KR102427921B1 (en) | 2015-04-10 | 2016-04-11 | Apparatus and method for real-time mapping and localization |
CN201680034057.0A CN107709928B (en) | 2015-04-10 | 2016-04-11 | Method and device for real-time mapping and positioning |
ES16715548T ES2855119T3 (en) | 2015-04-10 | 2016-04-11 | Method and device for real-time mapping and localization |
EP16715548.0A EP3280977B1 (en) | 2015-04-10 | 2016-04-11 | Method and device for real-time mapping and localization |
US15/729,469 US10304237B2 (en) | 2015-04-10 | 2017-10-10 | Method and device for real-time mapping and localization |
HK18110512.8A HK1251294A1 (en) | 2015-04-10 | 2018-08-15 | Method and device for real-time mapping and localization |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15163231.2 | 2015-04-10 | ||
EP15163231.2A EP3078935A1 (en) | 2015-04-10 | 2015-04-10 | Method and device for real-time mapping and localization |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/729,469 Continuation US10304237B2 (en) | 2015-04-10 | 2017-10-10 | Method and device for real-time mapping and localization |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016162568A1 true WO2016162568A1 (en) | 2016-10-13 |
Family
ID=52823552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2016/057935 WO2016162568A1 (en) | 2015-04-10 | 2016-04-11 | Method and device for real-time mapping and localization |
Country Status (10)
Country | Link |
---|---|
US (1) | US10304237B2 (en) |
EP (2) | EP3078935A1 (en) |
JP (2) | JP7270338B2 (en) |
KR (1) | KR102427921B1 (en) |
CN (1) | CN107709928B (en) |
AU (1) | AU2016246024B2 (en) |
CA (1) | CA2982044C (en) |
ES (1) | ES2855119T3 (en) |
HK (1) | HK1251294A1 (en) |
WO (1) | WO2016162568A1 (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107943038A (en) * | 2017-11-28 | 2018-04-20 | 广东工业大学 | A kind of mobile robot embedded laser SLAM method and system |
CN108242079A (en) * | 2017-12-30 | 2018-07-03 | 北京工业大学 | A kind of VSLAM methods based on multiple features visual odometry and figure Optimized model |
WO2019055691A1 (en) * | 2017-09-13 | 2019-03-21 | Velodyne Lidar, Inc. | Multiple resolution, simultaneous localization and mapping based on 3-d lidar measurements |
CN109541630A (en) * | 2018-11-22 | 2019-03-29 | 武汉科技大学 | A method of it is surveyed and drawn suitable for Indoor environment plane 2D SLAM |
CN109696168A (en) * | 2019-02-27 | 2019-04-30 | 沈阳建筑大学 | A kind of mobile robot geometry map creating method based on laser sensor |
WO2019099802A1 (en) | 2017-11-17 | 2019-05-23 | DeepMap Inc. | Iterative closest point process based on lidar with integrated motion estimation for high definitions maps |
US10304237B2 (en) | 2015-04-10 | 2019-05-28 | The European Atomic Energy Community (Euratom), Represented By The European Commission | Method and device for real-time mapping and localization |
WO2019136714A1 (en) * | 2018-01-12 | 2019-07-18 | 浙江国自机器人技术有限公司 | 3d laser-based map building method and system |
US10397739B2 (en) | 2017-03-03 | 2019-08-27 | Here Global B.V. | Supporting the creation of a radio map |
CN110333513A (en) * | 2019-07-10 | 2019-10-15 | 国网四川省电力公司电力科学研究院 | A kind of particle filter SLAM method merging least square method |
WO2020005826A1 (en) * | 2018-06-25 | 2020-01-02 | Walmart Apollo, Llc | System and method for map localization with camera perspectives |
CN110764096A (en) * | 2019-09-24 | 2020-02-07 | 浙江华消科技有限公司 | Three-dimensional map construction method for disaster area, robot and robot control method |
CN111190191A (en) * | 2019-12-11 | 2020-05-22 | 杭州电子科技大学 | Scanning matching method based on laser SLAM |
CN111583369A (en) * | 2020-04-21 | 2020-08-25 | 天津大学 | Laser SLAM method based on facial line angular point feature extraction |
CN111813111A (en) * | 2020-06-29 | 2020-10-23 | 佛山科学技术学院 | Multi-robot cooperative working method |
USRE48491E1 (en) | 2006-07-13 | 2021-03-30 | Velodyne Lidar Usa, Inc. | High definition lidar system |
US10983218B2 (en) | 2016-06-01 | 2021-04-20 | Velodyne Lidar Usa, Inc. | Multiple pixel scanning LIDAR |
US11073617B2 (en) | 2016-03-19 | 2021-07-27 | Velodyne Lidar Usa, Inc. | Integrated illumination and detection for LIDAR based 3-D imaging |
US11082010B2 (en) | 2018-11-06 | 2021-08-03 | Velodyne Lidar Usa, Inc. | Systems and methods for TIA base current detection and compensation |
US11137480B2 (en) | 2016-01-31 | 2021-10-05 | Velodyne Lidar Usa, Inc. | Multiple pulse, LIDAR based 3-D imaging |
EP3893074A1 (en) | 2020-04-06 | 2021-10-13 | General Electric Company | Localization method for mobile remote inspection and/or manipulation tools in confined spaces and associated system |
WO2022052881A1 (en) * | 2020-09-14 | 2022-03-17 | 华为技术有限公司 | Map construction method and computing device |
US11608097B2 (en) | 2017-02-28 | 2023-03-21 | Thales Canada Inc | Guideway mounted vehicle localization system |
US11703569B2 (en) | 2017-05-08 | 2023-07-18 | Velodyne Lidar Usa, Inc. | LIDAR data acquisition and control |
US11796648B2 (en) | 2018-09-18 | 2023-10-24 | Velodyne Lidar Usa, Inc. | Multi-channel lidar illumination driver |
US11808891B2 (en) | 2017-03-31 | 2023-11-07 | Velodyne Lidar Usa, Inc. | Integrated LIDAR illumination power control |
WO2024013108A1 (en) * | 2022-07-11 | 2024-01-18 | Cambridge Enterprise Limited | 3d scene capture system |
US11885958B2 (en) | 2019-01-07 | 2024-01-30 | Velodyne Lidar Usa, Inc. | Systems and methods for a dual axis resonant scanning mirror |
US11933967B2 (en) | 2019-08-22 | 2024-03-19 | Red Creamery, LLC | Distally actuated scanning mirror |
Families Citing this family (202)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11835343B1 (en) | 2004-08-06 | 2023-12-05 | AI Incorporated | Method for constructing a map while performing work |
US11243309B2 (en) | 2015-02-16 | 2022-02-08 | Northwest Instrument Inc. | Ranging system and ranging method |
CN105511742B (en) * | 2016-01-07 | 2021-10-01 | 美国西北仪器公司 | Intelligent interactive interface |
US9652896B1 (en) * | 2015-10-30 | 2017-05-16 | Snap Inc. | Image based tracking in augmented reality systems |
US9984499B1 (en) | 2015-11-30 | 2018-05-29 | Snap Inc. | Image and point cloud based tracking and in augmented reality systems |
US11556170B2 (en) | 2016-01-07 | 2023-01-17 | Northwest Instrument Inc. | Intelligent interface based on augmented reality |
AT517701B1 (en) * | 2016-04-15 | 2017-04-15 | Riegl Laser Measurement Systems Gmbh | laser scanner |
US10802147B2 (en) | 2016-05-18 | 2020-10-13 | Google Llc | System and method for concurrent odometry and mapping |
US10890600B2 (en) | 2016-05-18 | 2021-01-12 | Google Llc | Real-time visual-inertial motion tracking fault detection |
US11017610B2 (en) * | 2016-05-18 | 2021-05-25 | Google Llc | System and method for fault detection and recovery for concurrent odometry and mapping |
JP2018004401A (en) * | 2016-06-30 | 2018-01-11 | 株式会社トプコン | Laser scanner and laser scanner system, and registration method for dot group data |
US10565786B1 (en) * | 2016-06-30 | 2020-02-18 | Google Llc | Sensor placement interface |
US10282854B2 (en) * | 2016-10-12 | 2019-05-07 | Faro Technologies, Inc. | Two-dimensional mapping system and method of operation |
US20180100927A1 (en) * | 2016-10-12 | 2018-04-12 | Faro Technologies, Inc. | Two-dimensional mapping system and method of operation |
CN106548486B (en) * | 2016-11-01 | 2024-02-27 | 浙江大学 | Unmanned vehicle position tracking method based on sparse visual feature map |
CN108122216B (en) * | 2016-11-29 | 2019-12-10 | 京东方科技集团股份有限公司 | system and method for dynamic range extension of digital images |
US10309778B2 (en) * | 2016-12-30 | 2019-06-04 | DeepMap Inc. | Visual odometry and pairwise alignment for determining a position of an autonomous vehicle |
US10319149B1 (en) | 2017-02-17 | 2019-06-11 | Snap Inc. | Augmented reality anamorphosis system |
US10074381B1 (en) | 2017-02-20 | 2018-09-11 | Snap Inc. | Augmented reality speech balloon system |
US11151447B1 (en) * | 2017-03-13 | 2021-10-19 | Zoox, Inc. | Network training process for hardware definition |
US10444759B2 (en) | 2017-06-14 | 2019-10-15 | Zoox, Inc. | Voxel based ground plane estimation and object segmentation |
GB201718507D0 (en) * | 2017-07-31 | 2017-12-27 | Univ Oxford Innovation Ltd | A method of constructing a model of the motion of a mobile device and related systems |
US10983199B2 (en) | 2017-08-11 | 2021-04-20 | Zoox, Inc. | Vehicle sensor calibration and localization |
JP7196156B2 (en) * | 2017-08-11 | 2022-12-26 | ズークス インコーポレイテッド | Vehicle sensor calibration and localization |
US11175132B2 (en) | 2017-08-11 | 2021-11-16 | Zoox, Inc. | Sensor perturbation |
US10484659B2 (en) * | 2017-08-31 | 2019-11-19 | Disney Enterprises, Inc. | Large-scale environmental mapping in real-time by a robotic system |
DE102017122440A1 (en) * | 2017-09-27 | 2019-03-28 | Valeo Schalter Und Sensoren Gmbh | A method for locating and further developing a digital map by a motor vehicle; localization device |
JPWO2019065546A1 (en) * | 2017-09-29 | 2020-10-22 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | 3D data creation method, client device and server |
FR3072181B1 (en) * | 2017-10-06 | 2019-11-01 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | METHOD FOR LOCATING MOBILE DEVICES IN A COMMON MARK |
US11274929B1 (en) * | 2017-10-17 | 2022-03-15 | AI Incorporated | Method for constructing a map while performing work |
US10422648B2 (en) * | 2017-10-17 | 2019-09-24 | AI Incorporated | Methods for finding the perimeter of a place using observed coordinates |
US10535138B2 (en) | 2017-11-21 | 2020-01-14 | Zoox, Inc. | Sensor data segmentation |
US10365656B2 (en) * | 2017-11-22 | 2019-07-30 | Locus Robotics Corp. | Robot charger docking localization |
US11360216B2 (en) * | 2017-11-29 | 2022-06-14 | VoxelMaps Inc. | Method and system for positioning of autonomously operating entities |
FR3075834B1 (en) * | 2017-12-21 | 2021-09-24 | Soletanche Freyssinet | SOIL COMPACTION PROCESS USING A LASER SCANNER |
US11157527B2 (en) | 2018-02-20 | 2021-10-26 | Zoox, Inc. | Creating clean maps including semantic information |
US10586344B2 (en) * | 2018-02-21 | 2020-03-10 | Beijing Jingdong Shangke Information Technology Co., Ltd. | System and method for feature screening in SLAM |
CN110363834B (en) * | 2018-04-10 | 2023-05-30 | 北京京东尚科信息技术有限公司 | Point cloud data segmentation method and device |
CN108828643B (en) * | 2018-04-25 | 2022-04-29 | 长安大学 | Indoor and outdoor seamless positioning system and method based on grey prediction model |
CN108801265A (en) * | 2018-06-08 | 2018-11-13 | 武汉大学 | Multidimensional information synchronous acquisition, positioning and position service apparatus and system and method |
EP3610225B1 (en) * | 2018-06-22 | 2022-03-02 | Beijing Didi Infinity Technology and Development Co., Ltd. | Systems and methods for updating highly automated driving maps |
AU2018278849B2 (en) | 2018-07-02 | 2020-11-05 | Beijing Didi Infinity Technology And Development Co., Ltd. | Vehicle navigation system using pose estimation based on point cloud |
US20210141082A1 (en) * | 2018-07-02 | 2021-05-13 | Vayyar Imaging Ltd. | System and methods for environment mapping |
CN109060820B (en) * | 2018-07-10 | 2021-04-13 | 深圳大学 | Tunnel disease detection method and tunnel disease detection device based on laser detection |
WO2020014951A1 (en) * | 2018-07-20 | 2020-01-23 | 深圳市道通智能航空技术有限公司 | Method and apparatus for building local obstacle map, and unmanned aerial vehicle |
CN110763243B (en) * | 2018-07-27 | 2021-08-24 | 杭州海康威视数字技术股份有限公司 | Sliding map updating method and device |
CN109146976B (en) * | 2018-08-23 | 2020-06-02 | 百度在线网络技术(北京)有限公司 | Method and device for locating unmanned vehicles |
KR20200029785A (en) | 2018-09-11 | 2020-03-19 | 삼성전자주식회사 | Localization method and apparatus of displaying virtual object in augmented reality |
KR101925862B1 (en) * | 2018-09-13 | 2018-12-06 | 주식회사 에이엠오토노미 | Real-time 3d mapping using 3d-lidar |
EP3629290B1 (en) * | 2018-09-26 | 2023-01-04 | Apple Inc. | Localization for mobile devices |
EP3633404B1 (en) * | 2018-10-02 | 2022-09-07 | Ibeo Automotive Systems GmbH | Method and apparatus for optical distance measurements |
CN109544636B (en) * | 2018-10-10 | 2022-03-15 | 广州大学 | Rapid monocular vision odometer navigation positioning method integrating feature point method and direct method |
CN109358342B (en) * | 2018-10-12 | 2022-12-09 | 东北大学 | Three-dimensional laser SLAM system based on 2D laser radar and control method |
CN109243207A (en) * | 2018-10-17 | 2019-01-18 | 安徽工程大学 | A kind of mobile device used for SLAM teaching demonstration |
KR102627453B1 (en) | 2018-10-17 | 2024-01-19 | 삼성전자주식회사 | Method and device to estimate position |
US11704621B2 (en) * | 2018-10-18 | 2023-07-18 | Lg Electronics Inc. | Method of sensing loaded state and robot implementing thereof |
KR20200046437A (en) | 2018-10-24 | 2020-05-07 | 삼성전자주식회사 | Localization method based on images and map data and apparatus thereof |
SE542376C2 (en) * | 2018-10-25 | 2020-04-21 | Ireality Ab | Method and controller for tracking moving objects |
EP3793899A4 (en) | 2018-10-29 | 2021-07-07 | DJI Technology, Inc. | A movable object for performing real-time mapping |
CN109459734B (en) * | 2018-10-30 | 2020-09-11 | 百度在线网络技术(北京)有限公司 | Laser radar positioning effect evaluation method, device, equipment and storage medium |
CN109507677B (en) * | 2018-11-05 | 2020-08-18 | 浙江工业大学 | SLAM method combining GPS and radar odometer |
EP3669141A4 (en) | 2018-11-09 | 2020-12-02 | Beijing Didi Infinity Technology and Development Co., Ltd. | Vehicle positioning system using lidar |
CN109523595B (en) * | 2018-11-21 | 2023-07-18 | 南京链和科技有限公司 | Visual measurement method for linear angular spacing of building engineering |
CN111238465B (en) * | 2018-11-28 | 2022-02-18 | 台达电子工业股份有限公司 | Map building equipment and map building method thereof |
EP3660453B1 (en) * | 2018-11-30 | 2023-11-29 | Sandvik Mining and Construction Oy | Positioning of mobile object in underground worksite |
US10970547B2 (en) * | 2018-12-07 | 2021-04-06 | Microsoft Technology Licensing, Llc | Intelligent agents for managing data associated with three-dimensional objects |
JP7129894B2 (en) * | 2018-12-10 | 2022-09-02 | 株式会社タダノ | Ground surface estimation method, measurement area display system and crane |
CN109541634B (en) * | 2018-12-28 | 2023-01-17 | 歌尔股份有限公司 | Path planning method and device and mobile device |
CN109887028B (en) * | 2019-01-09 | 2023-02-03 | 天津大学 | Unmanned vehicle auxiliary positioning method based on point cloud data registration |
CN111488412B (en) * | 2019-01-28 | 2023-04-07 | 阿里巴巴集团控股有限公司 | Method and device for determining value of POI (Point of interest) acquisition area |
DK180562B1 (en) * | 2019-01-31 | 2021-06-28 | Motional Ad Llc | Merging data from multiple lidar devices |
US11486701B2 (en) | 2019-02-06 | 2022-11-01 | Faro Technologies, Inc. | System and method for performing a real-time wall detection |
WO2020166612A1 (en) * | 2019-02-12 | 2020-08-20 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Three-dimensional data multiplexing method, three-dimensional data demultiplexing method, three-dimensional data multiplexing device, and three-dimensional data demultiplexing device |
CN111568305B (en) * | 2019-02-18 | 2023-02-17 | 北京奇虎科技有限公司 | Method and device for processing relocation of sweeping robot and electronic equipment |
CN109814572B (en) * | 2019-02-20 | 2022-02-01 | 广州市山丘智能科技有限公司 | Mobile robot positioning and mapping method and device, mobile robot and storage medium |
CN109614459B (en) * | 2019-03-06 | 2019-05-28 | 上海思岚科技有限公司 | Map structuring winding detection method and equipment applied to two-dimensional laser |
WO2020181418A1 (en) | 2019-03-08 | 2020-09-17 | SZ DJI Technology Co., Ltd. | Techniques for collaborative map construction between unmanned aerial vehicle and ground vehicle |
CN109839112B (en) * | 2019-03-11 | 2023-04-07 | 中南大学 | Underground operation equipment positioning method, device and system and storage medium |
CN109959937B (en) * | 2019-03-12 | 2021-07-27 | 广州高新兴机器人有限公司 | Laser radar-based positioning method for corridor environment, storage medium and electronic equipment |
CN109798842B (en) * | 2019-03-13 | 2023-09-08 | 武汉汉宁轨道交通技术有限公司 | Third rail detection device and detection method |
JP7245084B2 (en) * | 2019-03-15 | 2023-03-23 | 日立Astemo株式会社 | Autonomous driving system |
CN110008851B (en) * | 2019-03-15 | 2021-11-19 | 深兰科技(上海)有限公司 | Method and equipment for detecting lane line |
CN109848996B (en) * | 2019-03-19 | 2020-08-14 | 西安交通大学 | Large-scale three-dimensional environment map creation method based on graph optimization theory |
CN111739111B (en) * | 2019-03-20 | 2023-05-30 | 上海交通大学 | Point cloud projection coding intra-block offset optimization method and system |
CN113455007B (en) | 2019-03-22 | 2023-12-22 | 腾讯美国有限责任公司 | Method and device for encoding and decoding inter-frame point cloud attribute |
CN111735439B (en) * | 2019-03-22 | 2022-09-30 | 北京京东乾石科技有限公司 | Map construction method, map construction device and computer-readable storage medium |
DE102019204410A1 (en) * | 2019-03-29 | 2020-10-01 | Robert Bosch Gmbh | Method for the simultaneous localization and mapping of a mobile robot |
US11074706B2 (en) * | 2019-04-12 | 2021-07-27 | Intel Corporation | Accommodating depth noise in visual slam using map-point consensus |
KR102195164B1 (en) * | 2019-04-29 | 2020-12-24 | 충북대학교 산학협력단 | System and method for multiple object detection using multi-LiDAR |
US11227401B1 (en) * | 2019-05-22 | 2022-01-18 | Zoox, Inc. | Multiresolution voxel space |
CN111982133B (en) * | 2019-05-23 | 2023-01-31 | 北京地平线机器人技术研发有限公司 | Method and device for positioning vehicle based on high-precision map and electronic equipment |
CN110309472B (en) * | 2019-06-03 | 2022-04-29 | 清华大学 | Offline data-based policy evaluation method and device |
DE102019208504A1 (en) * | 2019-06-12 | 2020-12-17 | Robert Bosch Gmbh | Position determination based on environmental observations |
KR102246236B1 (en) * | 2019-06-14 | 2021-04-28 | 엘지전자 주식회사 | How to update a map in Fusion Slam and a robot that implements it |
US11080870B2 (en) * | 2019-06-19 | 2021-08-03 | Faro Technologies, Inc. | Method and apparatus for registering three-dimensional point clouds |
EP3758351B1 (en) * | 2019-06-26 | 2023-10-11 | Faro Technologies, Inc. | A system and method of scanning an environment using multiple scanners concurrently |
CN112148817B (en) * | 2019-06-28 | 2023-09-29 | 理光软件研究所(北京)有限公司 | SLAM optimization method, device and system based on panorama |
US11790542B2 (en) * | 2019-07-29 | 2023-10-17 | Board Of Trustees Of Michigan State University | Mapping and localization system for autonomous vehicles |
WO2021033574A1 (en) * | 2019-08-21 | 2021-02-25 | ソニー株式会社 | Information processing device, information processing method, and program |
CN110736456B (en) * | 2019-08-26 | 2023-05-05 | 广东亿嘉和科技有限公司 | Two-dimensional laser real-time positioning method based on feature extraction in sparse environment |
CN110307838B (en) * | 2019-08-26 | 2019-12-10 | 深圳市优必选科技股份有限公司 | Robot repositioning method and device, computer-readable storage medium and robot |
US11238641B2 (en) | 2019-09-27 | 2022-02-01 | Intel Corporation | Architecture for contextual memories in map representation for 3D reconstruction and navigation |
DE102019126631A1 (en) * | 2019-10-02 | 2021-04-08 | Valeo Schalter Und Sensoren Gmbh | Improved trajectory estimation based on ground truth |
CN110991227B (en) * | 2019-10-23 | 2023-06-30 | 东北大学 | Three-dimensional object identification and positioning method based on depth type residual error network |
CN112334946A (en) * | 2019-11-05 | 2021-02-05 | 深圳市大疆创新科技有限公司 | Data processing method, data processing device, radar, equipment and storage medium |
EP3819674A1 (en) | 2019-11-08 | 2021-05-12 | Outsight | Rolling environment sensing and gps optimization |
EP3819672A1 (en) | 2019-11-08 | 2021-05-12 | Outsight | Method and system to share scene maps |
EP3819667A1 (en) | 2019-11-08 | 2021-05-12 | Outsight | Radar and lidar combined mapping system |
CN110887487B (en) * | 2019-11-14 | 2023-04-18 | 天津大学 | Indoor synchronous positioning and mapping method |
CN110851556B (en) * | 2019-11-20 | 2023-02-17 | 苏州博众智能机器人有限公司 | Mobile robot mapping method, device, equipment and storage medium |
CN110909105B (en) * | 2019-11-25 | 2022-08-19 | 上海有个机器人有限公司 | Robot map construction method and system |
KR20210065728A (en) * | 2019-11-27 | 2021-06-04 | 재단법인 지능형자동차부품진흥원 | Position recognition method and device based on sparse point group using low channel 3D lidar sensor |
KR102255978B1 (en) * | 2019-12-02 | 2021-05-25 | 한국건설기술연구원 | Apparatus and method for generating tunnel internal precise map based on tunnel internal object detection using 3D sensor |
CN110849374B (en) * | 2019-12-03 | 2023-04-18 | 中南大学 | Underground environment positioning method, device, equipment and storage medium |
CN111079826B (en) * | 2019-12-13 | 2023-09-29 | 武汉科技大学 | Construction progress real-time identification method integrating SLAM and image processing |
CN110986956B (en) * | 2019-12-23 | 2023-06-30 | 苏州寻迹智行机器人技术有限公司 | Autonomous learning global positioning method based on improved Monte Carlo algorithm |
US11774593B2 (en) * | 2019-12-27 | 2023-10-03 | Automotive Research & Testing Center | Method of simultaneous localization and mapping |
CN111024062B (en) * | 2019-12-31 | 2022-03-29 | 芜湖哈特机器人产业技术研究院有限公司 | Drawing system based on pseudo GNSS and INS |
CN111220967B (en) * | 2020-01-02 | 2021-12-10 | 小狗电器互联网科技(北京)股份有限公司 | Method and device for detecting data validity of laser radar |
CN111141290B (en) * | 2020-01-08 | 2023-09-26 | 广州视源电子科技股份有限公司 | Positioning method, positioning device, equipment and storage medium of robot |
RU2764708C1 (en) | 2020-01-20 | 2022-01-19 | Общество с ограниченной ответственностью "Яндекс Беспилотные Технологии" | Methods and systems for processing lidar sensor data |
US11860315B2 (en) | 2020-01-20 | 2024-01-02 | Direct Cursus Technology L.L.C | Methods and systems for processing LIDAR sensor data |
EP4096901A4 (en) * | 2020-01-31 | 2023-10-11 | Hewlett-Packard Development Company, L.P. | Model prediction |
CN111325796B (en) * | 2020-02-28 | 2023-08-18 | 北京百度网讯科技有限公司 | Method and apparatus for determining pose of vision equipment |
US11466992B2 (en) | 2020-03-02 | 2022-10-11 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method, apparatus, device and medium for detecting environmental change |
KR102444879B1 (en) * | 2020-03-05 | 2022-09-19 | 고려대학교 산학협력단 | Method of three dimensional modeling for indoor space using Lidar |
EP3882649B1 (en) * | 2020-03-20 | 2023-10-25 | ABB Schweiz AG | Position estimation for vehicles based on virtual sensor response |
CN111462029B (en) * | 2020-03-27 | 2023-03-03 | 阿波罗智能技术(北京)有限公司 | Visual point cloud and high-precision map fusion method and device and electronic equipment |
WO2021207999A1 (en) * | 2020-04-16 | 2021-10-21 | 华为技术有限公司 | Vehicle positioning method and apparatus, and positioning map layer generation method and apparatus |
CN111210475B (en) * | 2020-04-21 | 2020-07-14 | 浙江欣奕华智能科技有限公司 | Map updating method and device |
US11900547B2 (en) * | 2020-04-29 | 2024-02-13 | Magic Leap, Inc. | Cross reality system for large scale environments |
CN111536871B (en) * | 2020-05-07 | 2022-05-31 | 武汉大势智慧科技有限公司 | Accurate calculation method for volume variation of multi-temporal photogrammetric data |
CN111678516B (en) * | 2020-05-08 | 2021-11-23 | 中山大学 | Bounded region rapid global positioning method based on laser radar |
KR102275682B1 (en) * | 2020-05-08 | 2021-07-12 | 한국건설기술연구원 | SLAM-based mobile scan backpack system for rapid real-time building scanning |
CN111815684B (en) * | 2020-06-12 | 2022-08-02 | 武汉中海庭数据技术有限公司 | Space multivariate feature registration optimization method and device based on unified residual error model |
CN111795687B (en) * | 2020-06-29 | 2022-08-05 | 深圳市优必选科技股份有限公司 | Robot map updating method and device, readable storage medium and robot |
US10877948B1 (en) * | 2020-07-01 | 2020-12-29 | Tamr, Inc. | Method and computer program product for geospatial binning |
CN112102376B (en) * | 2020-08-04 | 2023-06-06 | 广东工业大学 | Multi-view cloud registration method, device and storage medium of hybrid sparse ICP |
KR102455546B1 (en) * | 2020-09-25 | 2022-10-18 | 인천대학교 산학협력단 | Iterative Closest Point Algorithms for Reconstructing of Colored Point Cloud |
WO2022074083A1 (en) | 2020-10-06 | 2022-04-14 | Zoller & Fröhlich GmbH | Mobile scanning arrangement and method for controlling a mobile scanning arrangement |
US20220113384A1 (en) * | 2020-10-08 | 2022-04-14 | Intelligrated Headquarters, Llc | LiDAR BASED MONITORING IN MATERIAL HANDLING ENVIRONMENT |
CN113126621A (en) * | 2020-10-14 | 2021-07-16 | 中国安全生产科学研究院 | Automatic navigation method of subway carriage disinfection robot |
DE102020129293B4 (en) | 2020-11-06 | 2022-06-09 | Trumpf Werkzeugmaschinen Gmbh + Co. Kg | Method of compiling a cutting plan, method of cutting out and sorting workpieces and flat bed machine tool |
DE102020214002B3 (en) | 2020-11-08 | 2022-04-28 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein | Device and method for determining a position of a detection unit and method for storing extraction information in a database |
CN112380312B (en) * | 2020-11-30 | 2022-08-05 | 北京智行者科技股份有限公司 | Laser map updating method based on grid detection, terminal and computer equipment |
CN112698345B (en) * | 2020-12-04 | 2024-01-30 | 江苏科技大学 | Laser radar robot simultaneous positioning and map building optimization method |
CN112612788B (en) * | 2020-12-11 | 2024-03-01 | 中国北方车辆研究所 | Autonomous positioning method under navigation-free satellite signal |
CN112711012B (en) * | 2020-12-18 | 2022-10-11 | 上海蔚建科技有限公司 | Global position initialization method and system of laser radar positioning system |
WO2022136479A1 (en) | 2020-12-21 | 2022-06-30 | Zoller & Fröhlich GmbH | Platform for a mobile scanning assembly, and mobile scanning assembly |
CN112578399A (en) * | 2020-12-23 | 2021-03-30 | 雷熵信息科技(潍坊)有限公司 | Method for eliminating characteristic points in three-dimensional reconstruction of laser radar without repeated scanning |
CN112598737A (en) * | 2020-12-24 | 2021-04-02 | 中建科技集团有限公司 | Indoor robot positioning method and device, terminal equipment and storage medium |
CN112596070B (en) * | 2020-12-29 | 2024-04-19 | 四叶草(苏州)智能科技有限公司 | Robot positioning method based on laser and vision fusion |
CN112859847B (en) * | 2021-01-06 | 2022-04-01 | 大连理工大学 | Multi-robot collaborative path planning method under traffic direction limitation |
CN112634451B (en) * | 2021-01-11 | 2022-08-23 | 福州大学 | Outdoor large-scene three-dimensional mapping method integrating multiple sensors |
CN112732854B (en) * | 2021-01-11 | 2023-03-31 | 哈尔滨工程大学 | Particle filtering BSLAM method |
CN112720532B (en) * | 2021-01-12 | 2022-06-28 | 中国煤炭科工集团太原研究院有限公司 | Machine crowd is strutted to stable intelligent monitoring of country rock and precision |
US20240077329A1 (en) * | 2021-01-12 | 2024-03-07 | Sandvik Mining And Construction Oy | Underground worksite model generation |
CN112882056B (en) * | 2021-01-15 | 2024-04-09 | 西安理工大学 | Mobile robot synchronous positioning and map construction method based on laser radar |
US20240062403A1 (en) * | 2021-02-12 | 2024-02-22 | Magic Leap, Inc. | Lidar simultaneous localization and mapping |
US20220282991A1 (en) * | 2021-03-02 | 2022-09-08 | Yujin Robot Co., Ltd. | Region segmentation apparatus and method for map decomposition of robot |
CN112977443B (en) * | 2021-03-23 | 2022-03-11 | 中国矿业大学 | Path planning method for underground unmanned trackless rubber-tyred vehicle |
CN113192111B (en) * | 2021-03-24 | 2024-02-02 | 西北大学 | Three-dimensional point cloud similarity registration method |
CN113066105B (en) * | 2021-04-02 | 2022-10-21 | 北京理工大学 | Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit |
CN113032885B (en) * | 2021-04-07 | 2023-04-28 | 深圳大学 | Vision relation analysis method, device and computer storage medium |
CN113252042B (en) * | 2021-05-18 | 2023-06-13 | 杭州迦智科技有限公司 | Positioning method and device based on laser and UWB in tunnel |
CN113192197B (en) * | 2021-05-24 | 2024-04-05 | 北京京东乾石科技有限公司 | Global point cloud map construction method, device, equipment and storage medium |
CN113340304B (en) * | 2021-06-03 | 2023-02-17 | 青岛慧拓智能机器有限公司 | Gradient extraction method and device |
CN113255043B (en) * | 2021-06-07 | 2022-05-06 | 湖南蓝海购企业策划有限公司 | Building exhibition hall three-dimensional building model building accuracy analysis method |
CN113486927B (en) * | 2021-06-15 | 2024-03-01 | 北京大学 | Priori probability-based unsupervised track access place labeling method |
CN113536232B (en) * | 2021-06-28 | 2023-03-21 | 上海科技大学 | Normal distribution transformation method for laser point cloud positioning in unmanned driving |
CN113379915B (en) * | 2021-07-05 | 2022-12-23 | 广东工业大学 | Driving scene construction method based on point cloud fusion |
CN113485347B (en) * | 2021-07-16 | 2023-11-21 | 上海探寻信息技术有限公司 | Motion trail optimization method and system |
CN113506374B (en) * | 2021-07-16 | 2022-12-02 | 西安电子科技大学 | Point cloud registration method based on GPS information assistance and space grid division |
US11544419B1 (en) | 2021-07-26 | 2023-01-03 | Pointlab, Inc. | Subsampling method for converting 3D scan data of an object for marine, civil, and architectural works into smaller densities for processing without CAD processing |
CN113758491B (en) * | 2021-08-05 | 2024-02-23 | 重庆长安汽车股份有限公司 | Relative positioning method and system based on multi-sensor fusion unmanned vehicle and vehicle |
CN113760908B (en) * | 2021-08-23 | 2023-07-07 | 北京易航远智科技有限公司 | Pose mapping method between different maps under set path based on time sequence |
CN113763551B (en) * | 2021-09-08 | 2023-10-27 | 北京易航远智科技有限公司 | Rapid repositioning method for large-scale map building scene based on point cloud |
CN113777645B (en) * | 2021-09-10 | 2024-01-09 | 北京航空航天大学 | High-precision pose estimation algorithm under GPS refused environment |
CN113865580B (en) * | 2021-09-15 | 2024-03-22 | 北京易航远智科技有限公司 | Method and device for constructing map, electronic equipment and computer readable storage medium |
CN113916232B (en) * | 2021-10-18 | 2023-10-13 | 济南大学 | Map construction method and system for improving map optimization |
CN114018248B (en) * | 2021-10-29 | 2023-08-04 | 同济大学 | Mileage metering method and image building method integrating code wheel and laser radar |
CN114563797A (en) * | 2021-11-04 | 2022-05-31 | 博歌科技有限公司 | Automatic map exploration system and method |
CN114047766B (en) * | 2021-11-22 | 2023-11-21 | 上海交通大学 | Mobile robot data acquisition system and method for long-term application of indoor and outdoor scenes |
CN114170280B (en) * | 2021-12-09 | 2023-11-28 | 北京能创科技有限公司 | Laser odometer method, system and device based on double windows |
CN114353799B (en) * | 2021-12-30 | 2023-09-05 | 武汉大学 | Indoor rapid global positioning method for unmanned platform carrying multi-line laser radar |
CN114355981B (en) * | 2022-01-06 | 2024-01-12 | 中山大学 | Method and system for autonomous exploration and mapping of four-rotor unmanned aerial vehicle |
CN114413881B (en) * | 2022-01-07 | 2023-09-01 | 中国第一汽车股份有限公司 | Construction method, device and storage medium of high-precision vector map |
CN114355383B (en) * | 2022-01-20 | 2024-04-12 | 合肥工业大学 | Positioning navigation method combining laser SLAM and laser reflecting plate |
CN114526745B (en) * | 2022-02-18 | 2024-04-12 | 太原市威格传世汽车科技有限责任公司 | Mapping method and system with tightly coupled lidar and inertial odometry |
CN114886345B (en) * | 2022-04-20 | 2023-12-15 | 青岛海尔空调器有限总公司 | Method, device, system and storage medium for controlling sweeping robot |
CN115082532B (en) * | 2022-05-12 | 2024-03-29 | 华南理工大学 | Ship collision prevention method for river-crossing transmission line based on laser radar |
CN115031718B (en) * | 2022-05-25 | 2023-10-31 | 合肥恒淏智能科技合伙企业(有限合伙) | Multi-sensor-fusion simultaneous localization and mapping (SLAM) method and system for unmanned surface vessels |
CN114675088B (en) * | 2022-05-27 | 2022-08-23 | 浙江大学 | Unsupervised learning radiation source rapid near-field scanning method |
CN114964212B (en) * | 2022-06-02 | 2023-04-18 | 广东工业大学 | Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration |
CN114898066B (en) * | 2022-06-14 | 2022-12-30 | 中国电子工程设计院有限公司 | Non-convex indoor space identification reconstruction method and system |
CN114897942B (en) * | 2022-07-15 | 2022-10-28 | 深圳元戎启行科技有限公司 | Point cloud map generation method and device and related storage medium |
CN115359203B (en) * | 2022-09-21 | 2023-06-27 | 智城数创(西安)科技有限公司 | Three-dimensional high-precision map generation method, system and cloud platform |
CN115615345B (en) * | 2022-12-16 | 2023-04-07 | 中南大学 | Ground surface deformation monitoring method based on photogrammetry color point cloud registration |
CN116600308B (en) * | 2023-07-13 | 2023-10-03 | 北京理工大学 | Wireless communication transmission and space mapping method applied to underground space |
CN117710469B (en) * | 2024-02-06 | 2024-04-12 | 四川大学 | Online dense reconstruction method and system based on RGB-D sensor |
CN117761717A (en) * | 2024-02-21 | 2024-03-26 | 天津大学四川创新研究院 | Automatic loop three-dimensional reconstruction system and operation method |
CN117761624A (en) * | 2024-02-22 | 2024-03-26 | 广州市大湾区虚拟现实研究院 | passive movable object tracking and positioning method based on laser coding positioning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140005933A1 (en) * | 2011-09-30 | 2014-01-02 | Evolution Robotics, Inc. | Adaptive Mapping with Spatial Summaries of Sensor Data |
WO2015017941A1 (en) * | 2013-08-09 | 2015-02-12 | Sweep3D Corporation | Systems and methods for generating data indicative of a three-dimensional representation of a scene |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1815950A1 (en) * | 2006-02-03 | 2007-08-08 | The European Atomic Energy Community (EURATOM), represented by the European Commission | Robotic surgical system for performing minimally invasive medical procedures |
KR101461185B1 (en) * | 2007-11-09 | 2014-11-14 | 삼성전자 주식회사 | Apparatus and method for building 3D map using structured light |
US20110109633A1 (en) * | 2009-11-12 | 2011-05-12 | Sequeira Jr Jose J | System and Method For Visualizing Data Corresponding To Physical Objects |
EP2385483B1 (en) * | 2010-05-07 | 2012-11-21 | MVTec Software GmbH | Recognition and pose determination of 3D objects in 3D scenes using geometric point pair descriptors and the generalized Hough Transform |
JP5803043B2 (en) * | 2010-05-20 | 2015-11-04 | アイロボット コーポレイション | Mobile robot system and method for operating a mobile robot |
US9014848B2 (en) * | 2010-05-20 | 2015-04-21 | Irobot Corporation | Mobile robot system |
WO2012040644A1 (en) * | 2010-09-24 | 2012-03-29 | Evolution Robotics, Inc. | Systems and methods for vslam optimization |
US8711206B2 (en) * | 2011-01-31 | 2014-04-29 | Microsoft Corporation | Mobile camera localization using depth maps |
US9020187B2 (en) * | 2011-05-27 | 2015-04-28 | Qualcomm Incorporated | Planar mapping and tracking for mobile devices |
CN103959308B (en) * | 2011-08-31 | 2017-09-19 | Metaio有限公司 | The method that characteristics of image is matched with fixed reference feature |
CN103247225B (en) * | 2012-02-13 | 2015-04-29 | 联想(北京)有限公司 | Simultaneous localization and mapping method and device |
CN102831646A (en) * | 2012-08-13 | 2012-12-19 | 东南大学 | Scanning laser based large-scale three-dimensional terrain modeling method |
KR101319525B1 (en) * | 2013-03-26 | 2013-10-21 | 고려대학교 산학협력단 | System for providing location information of target using mobile robot |
CN104161487B (en) * | 2013-05-17 | 2018-09-04 | 恩斯迈电子(深圳)有限公司 | Mobile device |
US9037396B2 (en) * | 2013-05-23 | 2015-05-19 | Irobot Corporation | Simultaneous localization and mapping for a mobile robot |
CN103279987B (en) * | 2013-06-18 | 2016-05-18 | 厦门理工学院 | Object quick three-dimensional modeling method based on Kinect |
US9759918B2 (en) * | 2014-05-01 | 2017-09-12 | Microsoft Technology Licensing, Llc | 3D mapping with flexible camera rig |
CN104236548B (en) * | 2014-09-12 | 2017-04-05 | 清华大学 | Indoor autonomous navigation method for a micro aerial vehicle (MAV) |
EP3078935A1 (en) * | 2015-04-10 | 2016-10-12 | The European Atomic Energy Community (EURATOM), represented by the European Commission | Method and device for real-time mapping and localization |
- 2015
  - 2015-04-10 EP EP15163231.2A patent/EP3078935A1/en not_active Withdrawn
- 2016
  - 2016-04-11 AU AU2016246024A patent/AU2016246024B2/en active Active
  - 2016-04-11 ES ES16715548T patent/ES2855119T3/en active Active
  - 2016-04-11 EP EP16715548.0A patent/EP3280977B1/en active Active
  - 2016-04-11 KR KR1020177032418A patent/KR102427921B1/en active IP Right Grant
  - 2016-04-11 CN CN201680034057.0A patent/CN107709928B/en active Active
  - 2016-04-11 CA CA2982044A patent/CA2982044C/en active Active
  - 2016-04-11 JP JP2017567094A patent/JP7270338B2/en active Active
  - 2016-04-11 WO PCT/EP2016/057935 patent/WO2016162568A1/en active Search and Examination
- 2017
  - 2017-10-10 US US15/729,469 patent/US10304237B2/en active Active
- 2018
  - 2018-08-15 HK HK18110512.8A patent/HK1251294A1/en unknown
- 2021
  - 2021-05-20 JP JP2021085516A patent/JP2021152662A/en active Pending
Non-Patent Citations (17)
Title |
---|
C. H. TONG; S. ANDERSON; H. DONG; T. D. BARFOOT: "Pose interpolation for laser-based visual odometry", JOURNAL OF FIELD ROBOTICS, vol. 31, 2014, pages 731 - 757 |
DUBOIS, D.: "Possibility theory and statistical reasoning", COMPUTATIONAL STATISTICS AND DATA ANALYSIS, vol. 51, no. 1, 2006, pages 47 - 69, XP024955501, DOI: doi:10.1016/j.csda.2006.04.015 |
F. MOOSMANN; C. STILLER: "Velodyne SLAM", IVS, 2011 |
GEIGER, A.; LENZ, P.; STILLER, C.; URTASUN, R.: "Vision meets robotics: The kitti dataset", INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2013 |
GLOCKER, B.; IZADI, S.; SHOTTON, J.; CRIMINISI, A.: "Real-time rgb-d camera relocalization", ISMAR, 2013 |
H. STRASDAT: "Ph.D. dissertation", 2012, IMPERIAL COLLEGE LONDON, article "Local accuracy and global consistency for efficient slam" |
J. YAO; M. R. RUGGERI; P. TADDEI; V. SEQUEIRA: "Automatic scan registration using 3d linear and planar features", 3D RESEARCH, vol. 1, no. 3, 2010, pages 1 - 18 |
J. ZHANG; S. SINGH: "LOAM: Lidar odometry and mapping in real-time", RSS, 2014 |
K. MUSETH: "Vdb: High-resolution sparse volumes with dynamic topology", ACM TRANSACTION ON GRAPHICS, vol. 32, no. 3, 2013 |
M. NIESSNER; M. ZOLLHOFER; S. IZADI; M. STAMMINGER: "Real-time 3d reconstruction at scale using voxel hashing", ACM TRANSACTIONS ON GRAPHICS, 2013 |
R. KUEMMERLE; G. GRISETTI; H. STRASDAT; K. KONOLIGE; W. BURGARD: "g2o: A general framework for graph optimization", ICRA, 2011 |
S. RUSINKIEWICZ; M. LEVOY: "Efficient variants of the ICP algorithm", 3DIM, 2001 |
TADDEI, P.; SANCHEZ, C.; RODRIGUEZ, A. L.; CERIANI, S.; SEQUEIRA, V.: "Detecting ambiguity in localization problems using depth sensors", 3DV, 2014 |
THRUN, S.; BURGARD, W.; FOX, D.: "Probabilistic Robotics (Intelligent Robotics and Autonomous Agents", 2005, THE MIT PRESS |
THRUN, S.; FOX, D.; BURGARD, W.; DELLAERT, F.: "Robust monte carlo localization for mobile robots", ARTIFICIAL INTELLIGENCE, vol. 128, no. 1, 2001, pages 99 - 141 |
TIMOTHY LIU ET AL: "Indoor localization and visualization using a human-operated backpack system", INDOOR POSITIONING AND INDOOR NAVIGATION (IPIN), 2010 INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 15 September 2010 (2010-09-15), pages 1 - 10, XP031809367, ISBN: 978-1-4244-5862-2 * |
YAO, J.; RUGGERI, M. R.; TADDEI, P.; SEQUEIRA, V.: "Automatic scan registration using 3d linear and planar features", 3D RESEARCH, vol. 1, no. 3, 2010, pages 1 - 18 |
Cited By (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USRE48666E1 (en) | 2006-07-13 | 2021-08-03 | Velodyne Lidar Usa, Inc. | High definition LiDAR system |
USRE48504E1 (en) | 2006-07-13 | 2021-04-06 | Velodyne Lidar Usa, Inc. | High definition LiDAR system |
USRE48503E1 (en) | 2006-07-13 | 2021-04-06 | Velodyne Lidar Usa, Inc. | High definition LiDAR system |
USRE48490E1 (en) | 2006-07-13 | 2021-03-30 | Velodyne Lidar Usa, Inc. | High definition LiDAR system |
USRE48491E1 (en) | 2006-07-13 | 2021-03-30 | Velodyne Lidar Usa, Inc. | High definition lidar system |
USRE48688E1 (en) | 2006-07-13 | 2021-08-17 | Velodyne Lidar Usa, Inc. | High definition LiDAR system |
US10304237B2 (en) | 2015-04-10 | 2019-05-28 | The European Atomic Energy Community (Euratom), Represented By The European Commission | Method and device for real-time mapping and localization |
US11822012B2 (en) | 2016-01-31 | 2023-11-21 | Velodyne Lidar Usa, Inc. | Multiple pulse, LIDAR based 3-D imaging |
US11698443B2 (en) | 2016-01-31 | 2023-07-11 | Velodyne Lidar Usa, Inc. | Multiple pulse, lidar based 3-D imaging |
US11137480B2 (en) | 2016-01-31 | 2021-10-05 | Velodyne Lidar Usa, Inc. | Multiple pulse, LIDAR based 3-D imaging |
US11550036B2 (en) | 2016-01-31 | 2023-01-10 | Velodyne Lidar Usa, Inc. | Multiple pulse, LIDAR based 3-D imaging |
US11073617B2 (en) | 2016-03-19 | 2021-07-27 | Velodyne Lidar Usa, Inc. | Integrated illumination and detection for LIDAR based 3-D imaging |
US11550056B2 (en) | 2016-06-01 | 2023-01-10 | Velodyne Lidar Usa, Inc. | Multiple pixel scanning lidar |
US11561305B2 (en) | 2016-06-01 | 2023-01-24 | Velodyne Lidar Usa, Inc. | Multiple pixel scanning LIDAR |
US11874377B2 (en) | 2016-06-01 | 2024-01-16 | Velodyne Lidar Usa, Inc. | Multiple pixel scanning LIDAR |
US11808854B2 (en) | 2016-06-01 | 2023-11-07 | Velodyne Lidar Usa, Inc. | Multiple pixel scanning LIDAR |
US10983218B2 (en) | 2016-06-01 | 2021-04-20 | Velodyne Lidar Usa, Inc. | Multiple pixel scanning LIDAR |
US11608097B2 (en) | 2017-02-28 | 2023-03-21 | Thales Canada Inc | Guideway mounted vehicle localization system |
US10397739B2 (en) | 2017-03-03 | 2019-08-27 | Here Global B.V. | Supporting the creation of a radio map |
US11808891B2 (en) | 2017-03-31 | 2023-11-07 | Velodyne Lidar Usa, Inc. | Integrated LIDAR illumination power control |
US11703569B2 (en) | 2017-05-08 | 2023-07-18 | Velodyne Lidar Usa, Inc. | LIDAR data acquisition and control |
US11821987B2 (en) | 2017-09-13 | 2023-11-21 | Velodyne Lidar Usa, Inc. | Multiple resolution, simultaneous localization and mapping based on 3-D LIDAR measurements |
WO2019055691A1 (en) * | 2017-09-13 | 2019-03-21 | Velodyne Lidar, Inc. | Multiple resolution, simultaneous localization and mapping based on 3-d lidar measurements |
US11353589B2 (en) | 2017-11-17 | 2022-06-07 | Nvidia Corporation | Iterative closest point process based on lidar with integrated motion estimation for high definition maps |
WO2019099802A1 (en) | 2017-11-17 | 2019-05-23 | DeepMap Inc. | Iterative closest point process based on lidar with integrated motion estimation for high definitions maps |
CN111417871A (en) * | 2017-11-17 | 2020-07-14 | 迪普迈普有限公司 | Iterative closest point processing for integrated motion estimation using high definition maps based on lidar |
CN107943038A (en) * | 2017-11-28 | 2018-04-20 | 广东工业大学 | A kind of mobile robot embedded laser SLAM method and system |
CN108242079A (en) * | 2017-12-30 | 2018-07-03 | 北京工业大学 | A kind of VSLAM methods based on multiple features visual odometry and figure Optimized model |
CN108242079B (en) * | 2017-12-30 | 2021-06-25 | 北京工业大学 | VSLAM method based on multi-feature visual odometer and graph optimization model |
US10801855B2 (en) | 2018-01-12 | 2020-10-13 | Zhejiang Guozi Technology Co., Ltd. | Method and system for creating map based on 3D laser |
WO2019136714A1 (en) * | 2018-01-12 | 2019-07-18 | 浙江国自机器人技术有限公司 | 3d laser-based map building method and system |
WO2020005826A1 (en) * | 2018-06-25 | 2020-01-02 | Walmart Apollo, Llc | System and method for map localization with camera perspectives |
US11796648B2 (en) | 2018-09-18 | 2023-10-24 | Velodyne Lidar Usa, Inc. | Multi-channel lidar illumination driver |
US11082010B2 (en) | 2018-11-06 | 2021-08-03 | Velodyne Lidar Usa, Inc. | Systems and methods for TIA base current detection and compensation |
CN109541630A (en) * | 2018-11-22 | 2019-03-29 | 武汉科技大学 | A 2D SLAM method suitable for planar mapping of indoor environments |
US11885958B2 (en) | 2019-01-07 | 2024-01-30 | Velodyne Lidar Usa, Inc. | Systems and methods for a dual axis resonant scanning mirror |
CN109696168A (en) * | 2019-02-27 | 2019-04-30 | 沈阳建筑大学 | A kind of mobile robot geometry map creating method based on laser sensor |
CN110333513B (en) * | 2019-07-10 | 2023-01-10 | 国网四川省电力公司电力科学研究院 | Particle filter SLAM method fusing least square method |
CN110333513A (en) * | 2019-07-10 | 2019-10-15 | 国网四川省电力公司电力科学研究院 | A kind of particle filter SLAM method merging least square method |
US11933967B2 (en) | 2019-08-22 | 2024-03-19 | Red Creamery, LLC | Distally actuated scanning mirror |
CN110764096A (en) * | 2019-09-24 | 2020-02-07 | 浙江华消科技有限公司 | Three-dimensional map construction method for disaster area, robot and robot control method |
CN111190191A (en) * | 2019-12-11 | 2020-05-22 | 杭州电子科技大学 | Scanning matching method based on laser SLAM |
US11579097B2 (en) | 2020-04-06 | 2023-02-14 | Baker Hughes Holdings Llc | Localization method and system for mobile remote inspection and/or manipulation tools in confined spaces |
EP3893074A1 (en) | 2020-04-06 | 2021-10-13 | General Electric Company | Localization method for mobile remote inspection and/or manipulation tools in confined spaces and associated system |
CN111583369B (en) * | 2020-04-21 | 2023-04-18 | 天津大学 | Laser SLAM method based on facial line angular point feature extraction |
CN111583369A (en) * | 2020-04-21 | 2020-08-25 | 天津大学 | Laser SLAM method based on facial line angular point feature extraction |
CN111813111A (en) * | 2020-06-29 | 2020-10-23 | 佛山科学技术学院 | Multi-robot cooperative working method |
CN111813111B (en) * | 2020-06-29 | 2024-02-20 | 佛山科学技术学院 | Multi-robot cooperative working method |
WO2022052881A1 (en) * | 2020-09-14 | 2022-03-17 | 华为技术有限公司 | Map construction method and computing device |
WO2024013108A1 (en) * | 2022-07-11 | 2024-01-18 | Cambridge Enterprise Limited | 3d scene capture system |
Also Published As
Publication number | Publication date |
---|---|
KR102427921B1 (en) | 2022-08-02 |
CA2982044A1 (en) | 2016-10-13 |
JP2021152662A (en) | 2021-09-30 |
CN107709928B (en) | 2021-09-28 |
JP2018522345A (en) | 2018-08-09 |
EP3280977B1 (en) | 2021-01-06 |
CN107709928A (en) | 2018-02-16 |
US10304237B2 (en) | 2019-05-28 |
AU2016246024A1 (en) | 2017-11-09 |
EP3078935A1 (en) | 2016-10-12 |
US20180075643A1 (en) | 2018-03-15 |
AU2016246024B2 (en) | 2021-05-20 |
JP7270338B2 (en) | 2023-05-10 |
CA2982044C (en) | 2021-09-07 |
HK1251294A1 (en) | 2019-01-25 |
ES2855119T3 (en) | 2021-09-23 |
KR20180004151A (en) | 2018-01-10 |
EP3280977A1 (en) | 2018-02-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2016246024B2 (en) | Method and device for real-time mapping and localization | |
US10096129B2 (en) | Three-dimensional mapping of an environment | |
Botterill et al. | Bag‐of‐words‐driven, single‐camera simultaneous localization and mapping | |
Grant et al. | Efficient Velodyne SLAM with point and plane features | |
Yin et al. | Dynam-SLAM: An accurate, robust stereo visual-inertial SLAM method in dynamic environments | |
Sánchez et al. | Localization and tracking in known large environments using portable real-time 3D sensors | |
Kua et al. | Automatic loop closure detection using multiple cameras for 3d indoor localization | |
Huai et al. | Stereo-inertial odometry using nonlinear optimization | |
Viejo et al. | A robust and fast method for 6DoF motion estimation from generalized 3D data | |
Fehr et al. | Reshaping our model of the world over time | |
Birk et al. | Simultaneous localization and mapping (SLAM) | |
Botterill | Visual navigation for mobile robots using the bag-of-words algorithm | |
Heukels | Simultaneous Localization and Mapping (SLAM): towards an autonomous search and rescue aiding drone | |
Forsman | Three-dimensional localization and mapping of static environments by means of mobile perception | |
Song et al. | Scale Estimation with Dual Quadrics for Monocular Object SLAM | |
Qian | Weighted optimal linear attitude and translation estimator: Theory and application | |
Nava Chocron | Visual-lidar slam with loop closure | |
Yousif | 3D simultaneous localization and mapping in texture-less and structure-less environments using rank order statistics | |
Ivanchenko et al. | Elevation-based MRF stereo implemented in real-time on a GPU | |
Baligh Jahromi et al. | A Preliminary Work on Layout SLAM for Reconstruction of Indoor Corridor Environments |
Song et al. | MF-LIO: Integrating Multi-Feature LiDAR Inertial Odometry with FPFH Loop Closure in SLAM | |
Hokmabadi | Shaped-based IMU/Camera Tightly Coupled Object-level SLAM using Rao-Blackwellized Particle Filtering | |
Xiang | Fast and lightweight loop closure detection in LiDAR-based simultaneous localization and mapping | |
Mostofi | 3D Indoor Mobile Mapping using Multi-Sensor Autonomous Robot | |
Voorhies | Efficient slam for scanning lidar sensors using combined plane and point features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16715548; Country of ref document: EP; Kind code of ref document: A1 |
 | DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
 | ENP | Entry into the national phase | Ref document number: 2017567094; Country of ref document: JP; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 2982044; Country of ref document: CA |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 20177032418; Country of ref document: KR; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 2016246024; Country of ref document: AU; Date of ref document: 20160411; Kind code of ref document: A |