CN107301654B - Multi-sensor high-precision instant positioning and mapping method - Google Patents

Multi-sensor high-precision instant positioning and mapping method

Info

Publication number
CN107301654B
CN107301654B CN201710437709.1A
Authority
CN
China
Prior art keywords
frame
laser radar
point cloud
camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710437709.1A
Other languages
Chinese (zh)
Other versions
CN107301654A (en
Inventor
王琦
李学龙
王丞泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201710437709.1A priority Critical patent/CN107301654B/en
Publication of CN107301654A publication Critical patent/CN107301654A/en
Application granted granted Critical
Publication of CN107301654B publication Critical patent/CN107301654B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a multi-sensor high-precision instant positioning and mapping method that uses a color camera and a laser radar to construct a high-precision three-dimensional model of the surrounding environment in real time while measuring and calculating the position and posture of the camera in real time. On the basis of fast visual SLAM, laser radar information is introduced and used selectively, the results are corrected in time, and the three-dimensional map is constructed from the corrected results, so that positioning and map construction have the correct scale and higher accuracy while the algorithm keeps a low complexity.

Description

Multi-sensor high-precision instant positioning and mapping method
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a high-precision instant positioning and mapping method for multiple sensors.
Background
At present, robot motion mostly relies on manually planned paths, while the autonomous navigation capability of a robot depends on the instant positioning and mapping technology (SLAM, Simultaneous Localization and Mapping). Its core task is, when the robot enters an unknown working environment, to construct the surrounding environment efficiently and accurately from sensor information (Mapping) while obtaining the position and posture of the device in space (Localization). Beyond the robotics field, spatial tracking for virtual reality and augmented reality devices and automated driving can also use SLAM technology.
Since the SLAM problem was introduced some 30 years ago, the sensors and computing methods used for it have changed dramatically. Most mainstream SLAM technologies employ visual sensors, including monocular cameras and sensors that provide color images with depth information (RGB-D). In recent years, with the development of laser radar technology, laser radar equipment has gained a wide space for development in practical applications by virtue of its accurate estimation of environmental measurements and its insensitivity to illumination changes.
Currently mainstream vision-based SLAM schemes can be classified into two types according to the optimization method: SLAM methods using filters, and SLAM methods using graph optimization.
A SLAM scheme based on filters is simpler to model, but its errors accumulate gradually and cannot be repaired. Davison et al., in "A. Davison, I. Reid, and N. Molton. MonoSLAM: Real-Time Single Camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1052-1067, 2007", propose MonoSLAM, a monocular SLAM scheme that constructs landmark points in the image and models the SLAM problem with an Extended Kalman Filter (EKF).
A SLAM scheme based on graph optimization usually involves a large amount of computation because a pose graph has to be constructed. ORB-SLAM, proposed by Mur-Artal et al. in "R. Mur-Artal, J. Montiel, and J. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015", is a representative scheme of this type: it tracks ORB features and refines the camera poses through graph optimization.
Monocular-vision SLAM algorithms face the following problems in practical applications: a monocular vision sensor has difficulty estimating the scale of the environment; the depth of the environment must be obtained by mathematical computation such as triangulation, which often introduces errors; the vision sensor copes poorly with harsh illumination, rapid motion, texture-poor scenes and the like, which degrades the accuracy of the attitude estimation; and eliminating these errors requires introducing large-scale optimization, which is difficult and time-consuming to solve.
Using another sensor, such as a laser radar, alone in turn suffers from slow computation, insufficient information and similar problems, and cannot solve the SLAM task well either.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a multi-sensor high-precision instant positioning and mapping method, i.e. a method that fuses vision-sensor and laser-radar data to complete high-precision instant positioning and mapping. On the basis of fast visual SLAM, introducing the laser radar information gives the method higher positioning and mapping accuracy, and using the laser radar data selectively keeps the complexity of the algorithm low while the accuracy is guaranteed.
A high-precision instant positioning and mapping method of multiple sensors is characterized by comprising the following steps:
step 1: inputting a color image and preprocessing: inputting a color image sequence shot by a camera, and converting each frame of color image into a gray image;
step 2: feature extraction: performing feature extraction on each frame of gray level image by using an ORB feature extraction algorithm to obtain ORB feature points of each frame of image;
step 3: feature matching and motion estimation: matching the feature points of two adjacent frames of images by using an approximate nearest neighbor library, and performing motion estimation on the matched point pairs by using a RANSAC-based PnP solution to obtain a motion estimation sequence of the camera
S = {T_i^Cam | i = 2, …, N}
wherein T_i^Cam = (R_Cam, t_Cam) is the motion estimation of the camera between the shooting of the i-th frame and the (i-1)-th frame, i = 2, …, N, N is the number of image frames, R_Cam represents the camera rotation matrix, and t_Cam represents the camera displacement matrix;
step 4: inputting corresponding laser radar data and preprocessing: inputting laser radar data, removing points whose distance exceeds 0.8 times the laser radar range, the remaining points forming the laser radar point cloud data; here it is assumed that the position relation between the laser radar and the camera capturing the color images in step 1 is fixed, joint calibration has been completed, the two fields of view are the same, and the internal parameters are known;
step 5: feature matching and motion estimation of laser radar point cloud data: performing feature matching and motion estimation on the laser radar point cloud data every x frames, namely dividing each frame of laser radar point cloud data into four equal-size areas in the horizontal direction, extracting in each area 4 edge points and 4 plane points according to smoothness to form the feature points, matching the feature points of the j-th frame and the (j-x)-th frame data according to smoothness, and performing motion estimation on the matched point pairs by using the ICP (Iterative Closest Point) algorithm to obtain a motion estimation sequence of the laser radar
S_Lidar = {T_j^Lidar | j = x+1, 2x+1, …, M}
wherein T_j^Lidar = (R_Lidar, t_Lidar) is the motion estimation of the laser radar between the j-th frame and the (j-x)-th frame, j = x+1, 2x+1, …, M, M is the total frame number of the laser radar data, x takes values in the range [5, 20], R_Lidar represents the laser radar rotation matrix, and t_Lidar represents the laser radar displacement matrix;
the edge points are the points with the minimum smoothness in an area, the plane points are the points with the maximum smoothness in an area, and the smoothness is
c = (1 / (|S| · ||L_{k,p}||)) · || Σ_{q∈S, q≠p} (L_{k,p} − L_{k,q}) ||
where S represents the set of all points in the k-th scan, L_{k,p} and L_{k,q} represent the space coordinates of points p and q, and ||·|| represents the Euclidean distance; matching according to smoothness means that two feature points are matched if their smoothness difference is less than 5% of the smoothness of the current point, and are not matched otherwise;
step 6: error correction and track recording: replacing the sum of the motion estimation of the camera in the corresponding time interval with the motion estimation of the laser radar obtained in the step 5 to obtain a camera motion estimation sequence S' after error correction, namely a motion track of the camera;
step 7: establishing a three-dimensional point cloud map: according to
pcl_map = ⋃_m pcl'_{n_j},  m = x+1, 2x+1, …, M
calculating the three-dimensional point cloud map pcl_map, wherein pcl'_{n_j} is the point cloud map of the n_j-th frame image converted into the view-angle coordinate system of the first frame image, the image frame number of the camera shot at the same moment as the scanning of the m-th frame of laser radar point cloud is n_j, and the n_j in turn form the sequence N;
pcl_{n_j} = {(x_l, y_l, z_l) | l = 1, …, L}
is the point cloud map of the n_j-th frame image, L is the number of feature points of the n_j-th frame image extracted in step 2, and the coordinates (x_l, y_l, z_l) of the points are calculated according to the following formula:
x_l = (u_l − c_x) · d_l / f_x
y_l = (v_l − c_y) · d_l / f_y
z_l = d_l
wherein l = 1, …, L, (u_l, v_l) are the image pixel coordinates of the l-th feature point, (f_x, f_y, c_x, c_y) are the given camera parameters, and d_l is the depth value of the l-th feature point retrieved from the laser radar point cloud corresponding to the image shooting moment;
if the point cloud density is too large, downsampling the point cloud map data to obtain the final three-dimensional point cloud map; the point cloud density being too large means that the number of points in one cubic meter of space exceeds 10000.
The invention has the following beneficial effects: on the basis of fast visual SLAM, combining the laser radar information gives the positioning and mapping in the SLAM task higher precision and the correct scale, and overcomes the drawbacks of a single laser radar such as slower SLAM speed and lack of color information; on the premise of guaranteed accuracy, the algorithm uses the laser radar information selectively, which ensures real-time performance and low complexity; and the algorithm can output motion estimation results in real time, correct them in time, and build the three-dimensional map from the corrected results, which not only ensures the practicability of the algorithm but also keeps it highly accurate over a long time.
Drawings
FIG. 1 is a flow chart of the multi-sensor high-precision instant positioning and mapping method of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawing and an embodiment; the invention includes, but is not limited to, the following embodiment.
The invention provides a multi-sensor high-precision instant positioning and mapping method that uses a color camera and a laser radar to perform high-precision real-time three-dimensional construction of the surrounding environment and to calculate in real time the position and posture of the camera shooting the images. The premises for realizing the method of the invention are: the position relation between the shooting camera and the laser radar is fixed, i.e. the camera and the laser radar undergo the same displacement and rotation during shooting; and the laser radar and the camera are jointly calibrated, i.e. every pixel in the image can be assigned its depth value from the laser radar three-dimensional point cloud, the two fields of view are the same, and the internal parameters are known.
As shown in fig. 1, the high-precision real-time positioning and mapping method for multiple sensors of the present invention is implemented as follows:
step 1: input color image and pre-processing
The present embodiment captures color images with a Point Grey Flea 2 color camera. The original resolution of the images is 1392 × 1040 and the frame rate is 17 fps (frames per second), i.e. a sequence of 17 image frames is input per second. The images are first preprocessed, i.e. each color image is converted into a grayscale image for the subsequent operations.
Step 2: feature extraction
For the grayscale images input in the previous step, ORB (Oriented FAST and Rotated BRIEF) feature points are first extracted frame by frame. These feature points can be replaced by other types of feature points, such as SIFT or SURF, as required, but since extracting such features is more time-consuming, the running speed of the whole algorithm may be affected.
ORB (Oriented FAST and Rotated BRIEF) is an algorithm for fast feature point extraction and description. The algorithm was proposed in 2011 by Rublee et al. in "Rublee, Ethan, et al. ORB: An efficient alternative to SIFT or SURF. IEEE International Conference on Computer Vision, 2011: 2564-2571". The ORB algorithm consists of two parts: feature point extraction and feature point description. The feature extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and the feature point description is improved from the BRIEF (Binary Robust Independent Elementary Features) descriptor. The ORB feature combines the FAST feature point detector with the BRIEF descriptor and improves and optimizes them on their original basis. Experiments show that the extraction speed of ORB features is about 100 times that of SIFT and 10 times that of SURF.
The embodiment performs feature extraction on each frame image directly using the ORB feature descriptor extraction function in the computer vision library OPENCV.
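By way of illustration only (not part of the original embodiment), steps 1-2 can be sketched in Python with the OpenCV functions named above; the feature count n_features is an assumed, adjustable parameter.

import cv2

# Minimal sketch of steps 1-2: convert a color frame to grayscale and extract
# ORB (Oriented FAST and Rotated BRIEF) features. n_features is illustrative.
def extract_orb_features(color_frame, n_features=2000):
    gray = cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)   # step 1: grayscale conversion
    orb = cv2.ORB_create(nfeatures=n_features)             # step 2: ORB extractor
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors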
And step 3: feature matching and motion estimation
Feature matching between two adjacent frames, after their feature points are extracted, is performed with the Fast Library for Approximate Nearest Neighbors (FLANN). The FLANN library is currently the most complete open-source library for approximate nearest neighbor matching; it not only implements a series of search algorithms but also includes a mechanism for automatically selecting the fastest one. The basic FLANN algorithm was proposed by Muja et al. in 2009: "M. Muja and D. Lowe, Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration, in International Conference on Computer Vision Theory and Applications (VISAPP'09), 2009".
After the feature points of each newly input frame are extracted, feature matching against the previous frame is performed with the FlannBasedMatcher function in the computer vision library OPENCV, yielding several pairs of corresponding feature points in the two frames. Then a PnP solution based on RANdom SAmple Consensus (RANSAC) is applied to the matching result to estimate the attitude change of the camera between the two frames. RANSAC is an algorithm that, from a set of sample data containing abnormal data, calculates the parameters of a mathematical model of the data in order to obtain valid samples. It was first introduced by Fischler and Bolles in 1981 in "Fischler, M.A. and Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24(6): 381-395, 1981". The goal of the PnP solution is to determine the displacement and rotation of the camera relative to the world coordinate system from pairs of matched points with known positions, as proposed by Moreno-Noguer et al. in "F. Moreno-Noguer, V. Lepetit, and P. Fua. Accurate Non-Iterative O(n) Solution to the PnP Problem. IEEE International Conference on Computer Vision, 2007". The present embodiment directly uses the solvePnPRansac function in the computer vision library OPENCV.
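As an illustrative sketch only, and assuming in addition that 3D coordinates for the previous frame's matched keypoints are available (e.g. from the laser radar depth retrieval described in step 7), the FLANN matching and RANSAC-based PnP estimation could be written with OpenCV as follows; the LSH index parameters and the ratio-test threshold are assumed values.

import cv2
import numpy as np

# Sketch of step 3: FLANN matching of ORB descriptors followed by RANSAC-based PnP.
# desc_prev/desc_curr: ORB descriptors of the previous/current frame; kp_curr:
# keypoints of the current frame; pts3d_prev: assumed 3D coordinates of the
# previous frame's keypoints; K: 3x3 camera intrinsic matrix.
def estimate_camera_motion(desc_prev, desc_curr, kp_curr, pts3d_prev, K):
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),  # LSH for binary ORB
        dict(checks=50))
    matches = flann.knnMatch(desc_prev, desc_curr, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]           # ratio test
    obj = np.float32([pts3d_prev[m.queryIdx] for m in good])                  # 3D points, previous frame
    img = np.float32([kp_curr[m.trainIdx].pt for m in good])                  # 2D pixels, current frame
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)           # RANSAC-based PnP
    R_cam, _ = cv2.Rodrigues(rvec)                                            # rotation matrix R_Cam
    return R_cam, tvec                                                        # (R_Cam, t_Cam)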
Through the above steps, the motion estimation (R_Cam, t_Cam) of the camera between any two adjacent frames relative to the world coordinate system (the coordinate system with the view angle of the first frame image as the main direction) is obtained, together with the motion estimation sequence of the camera S = {T_i^Cam | i = 2, …, N}, where T_i^Cam = (R_Cam, t_Cam) is the motion estimation of the camera between the shooting of the i-th frame and the (i-1)-th frame, i = 2, …, N, N is the number of image frames, R_Cam represents the camera rotation matrix, and t_Cam represents the camera displacement matrix.
Because a large amount of noise exists in the feature point extraction and matching process, the attitude change obtained in this step still contains a certain error, which is corrected in step 6.
And 4, step 4: inputting corresponding laser radar data and preprocessing
This example uses the HDL-64E laser radar manufactured by Velodyne Corporation, USA, a compact multi-line panoramic lidar with a 26.9-degree longitudinal and 360-degree transverse field of view. It obtains 220000 points per second, evenly distributed over 64 environmental scanning lines, representing the distance and position of surrounding space points relative to the laser radar center point. The effective measurement range is 120 m.
In practical use, in order to reduce errors, points beyond 0.8 times the effective measurement range, i.e. space points farther than 80 m, are removed, and all the remaining points are imported to form the point cloud data.
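A minimal sketch of this preprocessing, assuming the scan is held as an N×3 NumPy array of coordinates relative to the laser radar center, is given below.

import numpy as np

# Step 4 preprocessing: keep only returns within 0.8 times the effective range
# (0.8 * 120 m = 80 m for the HDL-64E used in this embodiment).
def preprocess_lidar(points, max_range=120.0):
    dist = np.linalg.norm(points, axis=1)      # distance of each point from the lidar center
    return points[dist <= 0.8 * max_range]     # the remaining points form the point cloud data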
And 5: feature matching and motion estimation of laser radar point cloud data
In this step, for the input laser radar point cloud data, the invention extracts features and performs motion estimation once every x frames in order to reduce algorithm complexity and improve computational efficiency, thereby obtaining the motion estimation sequence of the laser radar S_Lidar = {T_j^Lidar | j = x+1, 2x+1, …, M}, where T_j^Lidar = (R_Lidar, t_Lidar) is the motion estimation of the laser radar between the j-th frame and the (j-x)-th frame, j = x+1, 2x+1, …, M, M is the number of laser radar data frames, x takes values in the range [5, 20] (in this embodiment x = 5), R_Lidar represents the laser radar rotation matrix, and t_Lidar represents the laser radar displacement matrix. The procedure is as follows:
because the point cloud is a discrete three-dimensional point in space, except coordinates, no other information exists, and the coordinates of the same space point are constantly changed in the moving data acquisition process, traceable features need to be defined in the point cloud, and therefore a feature extraction mode based on smoothness is defined, namely smoothness c is defined as:
Figure GDA0002356131470000065
where S represents the set of all points in the k-th scan, Lk,pAnd Lk,qRepresenting the space coordinates of the points p and q, | | | · | |, represents the euclidean distance.
The smoothness of all points in S is calculated; the points with the maximum smoothness are defined as plane points and the points with the minimum smoothness as edge points. To avoid generating too many feature points, the point cloud of each scan is first divided into four equal-size areas, for example by taking the four main directions 0°, 90°, 180° and 270° and extending ±45° to the left and right of each direction. Each area contributes at most 4 edge points and 4 plane points as feature points.
After the feature points are obtained, the feature points of the j-th frame and the (j-x)-th frame data are matched according to smoothness: a pair of feature points whose smoothness difference is smaller than 5% of the smoothness of the current point is regarded as the same space point seen in the two scans, i.e. a matched point pair. After the matched feature points are obtained, motion estimation is performed with the ICP (Iterative Closest Point) algorithm, i.e. the motion estimation (R_Lidar, t_Lidar) of the laser radar is obtained by solving over the two groups of three-dimensional space points. The ICP algorithm was proposed by Zhang et al. in "Zhang, Zhengyou (1994). Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision, 13(2): 119-152".
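For illustration, a sketch of the smoothness computation and edge/plane point selection is given below. It assumes each area is an N×3 array of consecutively scanned points and, as a simplification, evaluates the smoothness of a point over a small window of its neighbours rather than over the entire scan set S; the window size is an assumed parameter.

import numpy as np

# Smoothness c of point p: ||sum_{q != p} (L_kp - L_kq)|| / (|S| * ||L_kp||),
# evaluated here over a neighbourhood window as a simplification.
def smoothness(scan, p, half_window=5):
    lo, hi = max(0, p - half_window), min(len(scan), p + half_window + 1)
    S = np.vstack([scan[lo:p], scan[p + 1:hi]])            # neighbours q != p
    diff_sum = np.sum(scan[p] - S, axis=0)
    return np.linalg.norm(diff_sum) / (len(S) * np.linalg.norm(scan[p]))

# Per the description above, edge points have the minimum smoothness in an area
# and plane points the maximum; at most 4 of each are kept.
def select_features(region, n_pts=4):
    c = np.array([smoothness(region, p) for p in range(len(region))])
    order = np.argsort(c)
    edge_points = region[order[:n_pts]]       # smallest smoothness
    plane_points = region[order[-n_pts:]]     # largest smoothness
    return edge_points, plane_points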
Step 6: error correction and track recording
Because the laser radar point cloud data are highly precise and reliable, the error of its positioning result is extremely small, but its computation is slow; the color-image-based algorithm, by contrast, runs fast but is less accurate. In view of the different attitude-tracking characteristics of the two sensors, the method performs error correction by substituting the laser radar motion estimation for the sum of the camera motion estimations, which improves the accuracy of the overall algorithm and reduces its overall time complexity.
The motion estimation of the laser radar obtained in step 5 (one motion estimation between the scanning of the j-th and the (j-x)-th frame data) replaces the sum of the motion estimations of the camera within the corresponding time interval (the interval between the scanning of the j-th and the (j-x)-th frame data), yielding the error-corrected camera motion estimation sequence, i.e. the motion track of the camera.
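This replacement can be sketched as follows, assuming each motion estimation is stored as a 4×4 homogeneous transform T = [R t; 0 1] and that one laser radar estimate is available per interval of x image frames; the helper names are illustrative.

import numpy as np

# Step 6 sketch: each lidar estimate replaces the composition ("sum") of the x
# camera estimates in its interval; the corrected relative estimates are then
# accumulated into absolute poses, i.e. the motion track of the camera.
def correct_trajectory(camera_Ts, lidar_Ts, x):
    corrected = list(lidar_Ts)                          # one transform per lidar interval
    corrected.extend(camera_Ts[len(lidar_Ts) * x:])     # frames not yet covered by the lidar
    poses, T = [np.eye(4)], np.eye(4)
    for Ti in corrected:
        T = T @ Ti                                      # accumulate relative motions
        poses.append(T.copy())
    return poses                                        # error-corrected trajectory S'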
And 7: establishing three-dimensional point cloud map
When the high-precision three-dimensional point cloud map is finally established, only the laser radar motion estimation (R_Lidar, t_Lidar), whose error is small, is used. Specifically, the three-dimensional point cloud map pcl_map is calculated according to
pcl_map = ⋃_m pcl'_{n_j},  m = x+1, 2x+1, …, M
where pcl'_{n_j} is the point cloud map of the n_j-th frame image shot by the camera at the same moment as the m-th frame of laser radar point cloud scanning (i.e. the image frame number of the camera shot at the same moment as the scanning of the m-th frame point cloud is n_j, and the n_j in turn form the sequence N), transformed into the view-angle coordinate system of the first frame image;
pcl_{n_j} = {(x_l, y_l, z_l) | l = 1, …, L}
is the point cloud map of the n_j-th frame image, L is the number of feature points extracted from the n_j-th frame image in step 2, and the coordinates (x_l, y_l, z_l) of the points are calculated according to the following formula:
x_l = (u_l − c_x) · d_l / f_x
y_l = (v_l − c_y) · d_l / f_y
z_l = d_l
where l = 1, …, L, (u_l, v_l) are the image coordinates of the l-th feature point, (f_x, f_y, c_x, c_y) are the given camera parameters, and d_l is the depth value of the l-th feature point retrieved from the laser radar point cloud corresponding to the image shooting moment.
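A direct transcription of this back-projection into Python, with the intrinsics passed explicitly, is shown below for illustration.

import numpy as np

# Lift a feature pixel (u, v) with lidar-retrieved depth d to camera coordinates
# (x_l, y_l, z_l) using the intrinsics (f_x, f_y, c_x, c_y), as in the formulas above.
def back_project(u, v, d, fx, fy, cx, cy):
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    z = d
    return np.array([x, y, z])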
It can be seen that the point cloud map constructed in this way is the superposition of multiple scans and is overall too dense. To improve display speed and reduce the point cloud density, a spatial point density threshold is imposed on the stored point cloud: when the number of space points in one cubic meter of space exceeds 10000, the point cloud data are down-sampled so that the number of space points stays in a reasonable range, giving the final point cloud map. When scanning a narrow space, the spatial point density threshold can of course be raised accordingly.
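As an illustrative sketch only, the density check and down-sampling could be realized with a simple occupancy count per one-cubic-meter cell followed by a voxel-grid reduction; the 0.1 m voxel size is an assumed parameter, not part of the embodiment.

import numpy as np

# If any 1 m^3 cell contains more than `density_threshold` points, down-sample
# the map by keeping one point per small voxel.
def downsample_if_dense(points, density_threshold=10000, voxel=0.1):
    cells = np.floor(points).astype(np.int64)                  # 1 m^3 occupancy cells
    _, counts = np.unique(cells, axis=0, return_counts=True)
    if counts.max() <= density_threshold:
        return points                                          # density within range
    keys = np.floor(points / voxel).astype(np.int64)           # fine voxel grid
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]                               # one point per voxel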
This embodiment was run on a computer with an Intel Core i5-4590 3.4 GHz CPU, 16 GB of memory and the Ubuntu 14.04 operating system; experiments on a third-party dataset collected in an urban environment determined the camera motion trajectory in order to demonstrate the accuracy of the method.
The sequence 13 used in the experiment comes from the KITTI Dataset, proposed by A. Geiger et al. in "A. Geiger and Philip Lenz, Vision meets Robotics: The KITTI Dataset, International Journal of Robotics Research, 2013"; the color images were taken by Point Grey Flea 2 color cameras and the point cloud data were collected by a Velodyne HDL-64E lidar.
The proportion (%) of the offset between the camera motion trajectory and the ground truth relative to the size of the map, together with the time consumed by the algorithm, are used as performance measures; the comparison between the present method and the mainstream monocular ORB-SLAM algorithm is shown in Table 1. It can be seen that after introducing the laser radar information, the method not only improves the overall running speed but also surpasses the mainstream visual SLAM method in accuracy. The invention can also be applied in fields such as automatic driving and autonomous robot navigation.
TABLE 1 Comparison of algorithm performance
Method                        Time consumed (s)    Offset distance (%)
The method of the invention   120.3                1.8
ORB-SLAM                      126.0                3.1

Claims (1)

1. A high-precision instant positioning and mapping method of multiple sensors is characterized by comprising the following steps:
step 1: inputting a color image and preprocessing: inputting a color image sequence shot by a camera, and converting each frame of color image into a gray image;
step 2: feature extraction: performing feature extraction on each frame of gray level image by using an ORB feature extraction algorithm to obtain ORB feature points of each frame of image;
step 3: feature matching and motion estimation: matching the feature points of two adjacent frames of images by using an approximate nearest neighbor library, and performing motion estimation on the matched point pairs by using a RANSAC-based PnP solution to obtain a motion estimation sequence of the camera
S = {T_i^Cam | i = 2, …, N}
wherein T_i^Cam = (R_Cam, t_Cam) is the motion estimation of the camera between the shooting of the i-th frame and the (i-1)-th frame, i = 2, …, N, N is the number of image frames, R_Cam represents the camera rotation matrix, and t_Cam represents the camera displacement matrix;
step 4: inputting corresponding laser radar data and preprocessing: inputting laser radar data, removing points whose distance exceeds 0.8 times the laser radar range, the remaining points forming the laser radar point cloud data; here it is assumed that the position relation between the laser radar and the camera capturing the color images in step 1 is fixed, joint calibration has been completed, the two fields of view are the same, and the internal parameters are known;
step 5: feature matching and motion estimation of laser radar point cloud data: performing feature matching and motion estimation on the laser radar point cloud data every x frames, namely dividing each frame of laser radar point cloud data into four equal-size areas in the horizontal direction, extracting in each area 4 edge points and 4 plane points according to smoothness to form the feature points, matching the feature points of the j-th frame and the (j-x)-th frame data according to smoothness, and performing motion estimation on the matched point pairs by using the ICP (Iterative Closest Point) algorithm to obtain a motion estimation sequence of the laser radar
S_Lidar = {T_j^Lidar | j = x+1, 2x+1, …, M}
wherein T_j^Lidar = (R_Lidar, t_Lidar) is the motion estimation of the laser radar between the j-th frame and the (j-x)-th frame, j = x+1, 2x+1, …, M, M is the total frame number of the laser radar data, x takes values in the range [5, 20], R_Lidar represents the laser radar rotation matrix, and t_Lidar represents the laser radar displacement matrix;
the edge points are the points with the minimum smoothness in an area, the plane points are the points with the maximum smoothness in an area, and the smoothness is
c = (1 / (|S| · ||L_{k,p}||)) · || Σ_{q∈S, q≠p} (L_{k,p} − L_{k,q}) ||
where S represents the set of all points in the k-th scan, L_{k,p} and L_{k,q} represent the space coordinates of points p and q, and ||·|| represents the Euclidean distance; matching according to smoothness means that two feature points are matched if their smoothness difference is less than 5% of the smoothness of the current point, and are not matched otherwise;
step 6: error correction and track recording: replacing the sum of the motion estimation of the camera in the corresponding time interval with the motion estimation of the laser radar obtained in the step 5 to obtain a camera motion estimation sequence S' after error correction, namely a motion track of the camera;
step 7: establishing a three-dimensional point cloud map: according to
pcl_map = ⋃_m pcl'_{n_j},  m = x+1, 2x+1, …, M
calculating the three-dimensional point cloud map pcl_map, wherein pcl'_{n_j} is the point cloud map of the n_j-th frame image converted into the view-angle coordinate system of the first frame image, the image frame number of the camera shot at the same moment as the scanning of the m-th frame of laser radar point cloud is n_j, and the n_j in turn form the sequence N;
pcl_{n_j} = {(x_l, y_l, z_l) | l = 1, …, L}
is the point cloud map of the n_j-th frame image, L is the number of feature points of the n_j-th frame image extracted in step 2, and the coordinates (x_l, y_l, z_l) of the points are calculated according to the following formula:
x_l = (u_l − c_x) · d_l / f_x
y_l = (v_l − c_y) · d_l / f_y
z_l = d_l
wherein l = 1, …, L, (u_l, v_l) are the image coordinates of the l-th feature point, (f_x, f_y, c_x, c_y) are the given camera parameters, and d_l is the depth value of the l-th feature point retrieved from the laser radar point cloud corresponding to the image shooting moment;
if the point cloud density is too large, downsampling the point cloud map data to obtain the final three-dimensional point cloud map; the point cloud density being too large means that the number of points in one cubic meter of space exceeds 10000.
CN201710437709.1A 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method Active CN107301654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710437709.1A CN107301654B (en) 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710437709.1A CN107301654B (en) 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method

Publications (2)

Publication Number Publication Date
CN107301654A CN107301654A (en) 2017-10-27
CN107301654B true CN107301654B (en) 2020-04-03

Family

ID=60135365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710437709.1A Active CN107301654B (en) 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method

Country Status (1)

Country Link
CN (1) CN107301654B (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109974742A (en) * 2017-12-28 2019-07-05 沈阳新松机器人自动化股份有限公司 A kind of laser Method for Calculate Mileage and map constructing method
CN108319976B (en) * 2018-01-25 2019-06-07 北京三快在线科技有限公司 Build drawing method and device
CN108399643A (en) * 2018-03-15 2018-08-14 南京大学 A kind of outer ginseng calibration system between laser radar and camera and method
CN108489496B (en) * 2018-04-28 2021-02-05 北京空间飞行器总体设计部 Non-cooperative target relative navigation motion estimation method and system based on multi-source information fusion
CN108896994A (en) * 2018-05-11 2018-11-27 武汉环宇智行科技有限公司 A kind of automatic driving vehicle localization method and equipment
CN108647646B (en) * 2018-05-11 2019-12-13 北京理工大学 Low-beam radar-based short obstacle optimized detection method and device
CN108734654A (en) * 2018-05-28 2018-11-02 深圳市易成自动驾驶技术有限公司 It draws and localization method, system and computer readable storage medium
CN108759823B (en) * 2018-05-28 2020-06-30 浙江大学 Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN109031304A (en) * 2018-06-06 2018-12-18 上海国际汽车城(集团)有限公司 Vehicle positioning method in view-based access control model and the tunnel of millimetre-wave radar map feature
CN109073398B (en) * 2018-07-20 2022-04-08 达闼机器人有限公司 Map establishing method, positioning method, device, terminal and storage medium
CN109166149B (en) * 2018-08-13 2021-04-02 武汉大学 Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN108709560A (en) * 2018-08-15 2018-10-26 苏州中研讯科智能科技有限公司 Carrying robot high accuracy positioning air navigation aid based on straightway feature
CN109326006B (en) * 2018-09-30 2023-03-28 阿波罗智联(北京)科技有限公司 Map fusion method and device
CN109341706B (en) * 2018-10-17 2020-07-03 张亮 Method for manufacturing multi-feature fusion map for unmanned vehicle
SG11201811747XA (en) * 2018-11-15 2020-06-29 Beijing Didi Infinity Technology & Development Co Ltd Systems and methods for correcting a high-definition map based on detection of obstructing objects
CN109579843B (en) * 2018-11-29 2020-10-27 浙江工业大学 Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
CN109634279B (en) * 2018-12-17 2022-08-12 瞿卫新 Object positioning method based on laser radar and monocular vision
CN109407115B (en) * 2018-12-25 2022-12-27 中山大学 Laser radar-based pavement extraction system and extraction method thereof
CN111481109B (en) * 2019-01-28 2022-08-26 北京奇虎科技有限公司 Map noise elimination method and device based on sweeper
EP3707467A1 (en) * 2019-01-30 2020-09-16 Baidu.com Times Technology (Beijing) Co., Ltd. A rgb point clouds based map generation system for autonomous vehicles
CN111694903B (en) * 2019-03-11 2023-09-12 北京地平线机器人技术研发有限公司 Map construction method, device, equipment and readable storage medium
CN110068824B (en) * 2019-04-17 2021-07-23 北京地平线机器人技术研发有限公司 Sensor pose determining method and device
CN110163968B (en) * 2019-05-28 2020-08-25 山东大学 RGBD camera large three-dimensional scene construction method and system
CN110398745A (en) * 2019-08-05 2019-11-01 湖南海森格诺信息技术有限公司 Fork truck localization method based on laser radar and vision
CN110823211A (en) * 2019-10-29 2020-02-21 珠海市一微半导体有限公司 Multi-sensor map construction method, device and chip based on visual SLAM
CN111207670A (en) * 2020-02-27 2020-05-29 河海大学常州校区 Line structured light calibration device and method
CN111428578B (en) * 2020-03-03 2021-08-17 深圳市镭神智能系统有限公司 Self-body and positioning method and device thereof
CN111912431B (en) * 2020-03-19 2021-05-11 中山大学 Method for testing positioning accuracy of mobile robot navigation system
CN111445530A (en) * 2020-03-24 2020-07-24 广东博智林机器人有限公司 Wheelchair bed and wheelchair homing path determination method
CN112150507B (en) * 2020-09-29 2024-02-02 厦门汇利伟业科技有限公司 3D model synchronous reproduction method and system for object posture and displacement
CN112230211A (en) * 2020-10-15 2021-01-15 长城汽车股份有限公司 Vehicle positioning method and device, storage medium and vehicle
CN112132754B (en) * 2020-11-25 2021-06-04 蘑菇车联信息科技有限公司 Vehicle movement track correction method and related device
CN112484746B (en) * 2020-11-26 2023-04-28 上海电力大学 Monocular vision auxiliary laser radar odometer method based on ground plane
TWI772177B (en) * 2021-09-10 2022-07-21 迪伸電子股份有限公司 Movement control method of automatic guided device and automatic guided device
CN114863075B (en) * 2022-07-05 2022-10-14 深圳市新天泽消防工程有限公司 Fire-fighting evacuation path planning method, device and equipment based on multiple sensors

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7135992B2 (en) * 2002-12-17 2006-11-14 Evolution Robotics, Inc. Systems and methods for using multiple hypotheses in a visual simultaneous localization and mapping system
US10027952B2 (en) * 2011-08-04 2018-07-17 Trx Systems, Inc. Mapping and tracking system with features in three-dimensional space
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
CN104764457B (en) * 2015-04-21 2017-11-17 北京理工大学 A kind of urban environment patterning process for unmanned vehicle
CN104933708A (en) * 2015-06-07 2015-09-23 浙江大学 Barrier detection method in vegetation environment based on multispectral and 3D feature fusion
CN105856230B (en) * 2016-05-06 2017-11-24 简燕梅 A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
CN106052674B (en) * 2016-05-20 2019-07-26 青岛克路德机器人有限公司 A kind of SLAM method and system of Indoor Robot
CN106127739B (en) * 2016-06-16 2021-04-27 华东交通大学 Monocular vision combined RGB-D SLAM method
CN106153048A (en) * 2016-08-11 2016-11-23 广东技术师范学院 A kind of robot chamber inner position based on multisensor and Mapping System
CN106296812B (en) * 2016-08-18 2019-04-02 宁波傲视智绘光电科技有限公司 It is synchronous to position and build drawing method
CN106443687B (en) * 2016-08-31 2019-04-16 欧思徕(北京)智能科技有限公司 A kind of backpack mobile mapping system based on laser radar and panorama camera
CN106446815B (en) * 2016-09-14 2019-08-09 浙江大学 A kind of simultaneous localization and mapping method
CN106595659A (en) * 2016-11-03 2017-04-26 南京航空航天大学 Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN106681330A (en) * 2017-01-25 2017-05-17 北京航空航天大学 Robot navigation method and device based on multi-sensor data fusion

Also Published As

Publication number Publication date
CN107301654A (en) 2017-10-27

Similar Documents

Publication Publication Date Title
CN107301654B (en) Multi-sensor high-precision instant positioning and mapping method
CN106127739B (en) Monocular vision combined RGB-D SLAM method
Kerl et al. Robust odometry estimation for RGB-D cameras
CN108519102B (en) Binocular vision mileage calculation method based on secondary projection
CN107818598B (en) Three-dimensional point cloud map fusion method based on visual correction
CN112419497A (en) Monocular vision-based SLAM method combining feature method and direct method
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN108305277A (en) A kind of heterologous image matching method based on straightway
CN111998862A (en) Dense binocular SLAM method based on BNN
Platinsky et al. Monocular visual odometry: Sparse joint optimisation or dense alternation?
Zhou et al. Semi-dense visual odometry for RGB-D cameras using approximate nearest neighbour fields
Cvišić et al. Enhanced calibration of camera setups for high-performance visual odometry
Ruchay et al. Accurate reconstruction of the 3D indoor environment map with a RGB-D camera based on multiple ICP
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
CN112307917A (en) Indoor positioning method integrating visual odometer and IMU
CN116894876A (en) 6-DOF positioning method based on real-time image
Wei et al. Overview of visual slam for mobile robots
CN112991372B (en) 2D-3D camera external parameter calibration method based on polygon matching
Zhang et al. Research on building algorithm of indoor dense map based on ORB-SALM2
Yang et al. Targetless Extrinsic Calibration for LiDAR and Camera Based on Multi-scale Adaptive Voxelization
CN117611677B (en) Robot positioning method based on target detection and structural characteristics
Qin et al. The comparison between FTF-VO and MF-VO for high accuracy mobile robot localization
Pan et al. LiDAR-IMU Tightly-Coupled SLAM Method Based on IEKF and Loop Closure Detection
Brahmanage et al. Building 2D Maps with Integrated 3D and Visual Information using Kinect Sensor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant