CN115330866A - Mapping and positioning method fusing point clouds of laser radar and depth camera - Google Patents

Mapping and positioning method fusing point clouds of laser radar and depth camera

Info

Publication number
CN115330866A
CN115330866A · CN202210951441.4A
Authority
CN
China
Prior art keywords
point cloud
laser radar
depth camera
point
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210951441.4A
Other languages
Chinese (zh)
Inventor
胡雪亮
贺晓伟
李超
朱秋国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Deeprobotics Co ltd
Original Assignee
Hangzhou Deeprobotics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Deeprobotics Co ltd filed Critical Hangzhou Deeprobotics Co ltd
Priority to CN202210951441.4A priority Critical patent/CN115330866A/en
Publication of CN115330866A publication Critical patent/CN115330866A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a mapping and positioning method fusing the point clouds of a laser radar and a depth camera, which comprises the following steps: S1, acquiring the laser point cloud and the depth camera point cloud to be processed; S2, performing external parameter calibration, determining the coordinate-axis relation between the laser radar and the depth camera, and converting the depth camera point cloud into the laser radar coordinate system; S3, re-dividing the converted laser radar and depth camera point clouds in the laser radar coordinate system according to the laser radar scan lines, converting the depth camera point cloud into the laser radar point cloud format, and fusing them into one frame of point cloud with a uniform format; S4, obtaining a point cloud map of the environment; and S5, obtaining the absolute pose of the quadruped robot in the environment, and obtaining the pose of the quadruped robot during motion by fusing the IMU sensor and the foot-type odometer. The invention can obtain accurate positioning information and adapt to the complex environment of a converter station; it uses few computing resources, so the algorithm can run on a common computing platform; and its repeat-positioning accuracy is high.

Description

Mapping and positioning method fusing point clouds of laser radar and depth camera
Technical Field
The invention belongs to the technical field of robot positioning, and particularly relates to a mapping and positioning method fusing a laser radar and depth camera point cloud.
Background
With the deepening of and breakthroughs in artificial intelligence research, intelligent mobile robots have been widely applied in many fields. However, the environment of a converter station is complex and harsh, the requirements placed on converter-station inspection robots are correspondingly stricter, and deploying an inspection robot in a converter station involves many difficulties. Autonomous mapping and positioning technology is the key to intelligent inspection of the converter station.
A common approach is to mount a laser radar and a camera on the robot at the same time and use both for positioning. However, the laser radar can only measure accurately at longer ranges, for example from about 2 meters out to 80 meters, while the camera can only measure accurately at shorter ranges, for example from about 0.1 meters out to 2 meters. To obtain accurate positioning over both long and short ranges, the positioning information acquired by the laser radar and the camera needs to be fused.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a mapping and positioning method fusing laser radar and depth camera point clouds, which can convert the point cloud information acquired by the depth camera into laser radar scan lines and fuse the positioning information acquired by the laser radar and the depth camera to obtain accurate overall positioning information.
The technical scheme adopted by the invention for solving the technical problems is as follows: a mapping and positioning method fusing point clouds of a laser radar and a depth camera comprises the following steps:
s1, acquiring laser point clouds and depth camera point clouds to be processed, wherein the laser point clouds and the depth camera point clouds comprise point clouds which belong to the same spatial range and point clouds which do not belong to the same spatial range;
s2, performing external reference calibration on the laser radar and the depth camera, determining the coordinate axis relation between the laser radar and the depth camera, and converting point cloud of the depth camera into a laser radar coordinate system;
s3, according to the field angle of the depth camera and the laser radar, re-dividing the laser radar and the depth camera point cloud of the laser radar coordinate system obtained through conversion according to a laser radar scanning line, converting the depth camera point cloud according to the point cloud format of the laser radar, and fusing to form a frame of point cloud with the same format;
s4, based on the point cloud fused in the S3 and IMU sensor data fusion, carrying out environment mapping to obtain an environment point cloud map;
and S5, matching the point cloud picture established in the S4 based on the point cloud picture fused in the S3 to obtain the absolute pose of the quadruped robot in the environment, and fusing the IMU sensor and the foot-type odometer to obtain the pose of the quadruped robot in the motion process.
Further, the S1 step includes the following sub-steps,
s11, a device for synchronizing the time of collecting the point cloud of the laser radar and the point cloud of the depth camera is arranged;
s12, filtering depth point clouds with inaccurate measurement in a set range of the depth camera;
s13, filtering laser radar point clouds with inaccurate measurement of the laser radar in a set range and point clouds on equipment for installing the laser radar;
and S14, filtering the laser radar and the depth camera noise points.
Further, the step S2 includes the following sub-steps,
s21, calibrating external parameters between the laser radar and the depth camera, wherein the external parameters comprise a rotation matrix R and a translational vector t from the depth camera to a laser radar coordinate system; preparing a black and white checkerboard, collecting a camera image and a laser radar point cloud of which complete checkerboard can be observed by a plurality of depth cameras and laser radars, matching the camera image and the laser radar point cloud approximately and synchronously according to time, leading the camera image and the laser radar point cloud into a Lidar toolkit in Matlab, enabling the toolkit to detect the checkerboard in the image, manually appointing the point cloud falling on the checkerboard in a laser radar point cloud frame, and calibrating the toolkit to obtain a rotation matrix R and a translational vector t from the depth cameras to a laser radar coordinate system;
s22, constructing a conversion function from the depth camera coordinate system to the laser radar coordinate system, wherein the conversion function is,
$$\begin{bmatrix} X_l \\ Y_l \\ Z_l \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} + t$$
wherein $X_l, Y_l, Z_l$ represent the three coordinate axes of the laser radar coordinate system and $X_c, Y_c, Z_c$ represent the depth camera coordinate system;
s23, converting the point cloud of the depth camera into a laser radar coordinate system according to the rotation matrix R and the translation vector t from the depth camera to the laser radar coordinate system, which are obtained by calibration in the step S21, and the conversion function in the step S22;
and S24, determining the point clouds of the depth camera and the laser radar which belong to the same space range and the point clouds which do not belong to the same space range according to the field angles of the laser radar and the depth camera.
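To make the coordinate conversion of steps S22 and S23 concrete, the sketch below applies a calibrated rotation matrix R and translation vector t to an N×3 depth camera point cloud so that it is expressed in the laser radar coordinate system. This is a minimal illustration only; the function name, the use of NumPy and the placeholder extrinsic values are assumptions, not part of the patent.

```python
import numpy as np

def camera_to_lidar(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) depth-camera point cloud into the lidar frame.

    Applies [X_l, Y_l, Z_l]^T = R [X_c, Y_c, Z_c]^T + t to every point.
    """
    assert points_cam.shape[1] == 3 and R.shape == (3, 3) and t.shape == (3,)
    return points_cam @ R.T + t  # row-vector form of R p + t

if __name__ == "__main__":
    # Placeholder extrinsics (identity rotation, small offset), not calibration results.
    pts = np.array([[0.5, 0.1, 1.2], [1.0, -0.2, 2.4]])
    print(camera_to_lidar(pts, np.eye(3), np.array([0.05, 0.0, -0.10])))
```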
Further, the step S3 includes the following sub-steps,
s31, according to the fused field angle of the laser radar and each depth camera, carrying out wire harness division on the laser radar point cloud and each depth camera point cloud by using the vertical angular resolution of the laser radar according to the concept of a laser radar scanning line;
and S32, converting the depth camera point cloud re-divided in the step S31 into a laser radar format point cloud, fusing the laser radar point cloud again to form a fused point cloud with the same format, and outputting the fused point cloud.
Further, the step S31 includes the following sub-steps,
s311, complementing empty positions between laser radar wire harnesses by depth camera point clouds according to the point clouds, determined in the step S24, of the depth camera and the laser radar point clouds, wherein the point clouds belong to the same space range, and recalculating the point clouds belonging to the wire harnesses;
and S312, according to the point cloud determined in the step S24, wherein the point cloud of the depth camera and the point cloud of the laser radar do not belong to the same space range, the original definition of the line beam is reserved for the space only containing the point cloud of the laser radar, the line beam is expanded upwards and downwards at the angular resolution of the laser radar for the space only containing the point cloud of the depth camera, and the point cloud around the line beam is determined as the point cloud of the line beam.
Further, in the step S311, for the division of the camera point cloud and the laser radar point cloud into line bundles within the same spatial range, camera points lying within plus or minus one half of the laser radar's angular resolution of a laser radar line bundle are assigned to that line bundle's point cloud.
Further, in the step S312, for the space containing only the depth camera point cloud, camera points lying within plus or minus one half of the laser radar's angular resolution of an extended line bundle are assigned to that line bundle's point cloud.
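To illustrate the line-bundle division rule of steps S31, S311 and S312 described above, the sketch below bins points by their vertical (elevation) angle into lidar-style scan lines spaced at the lidar's vertical angular resolution; a point falling within plus or minus half of the resolution of a line's nominal angle is assigned to that line. The 2-degree resolution, the -15 degree starting angle and the function name are illustrative assumptions.

```python
import numpy as np

def assign_scan_lines(points: np.ndarray, min_angle_deg: float, resolution_deg: float = 2.0) -> np.ndarray:
    """Return an integer scan-line index for each (x, y, z) point in the lidar frame.

    A point whose elevation angle lies within +/- resolution/2 of line k's nominal
    angle (min_angle + k * resolution) is assigned index k.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))   # vertical angle of each point
    return np.rint((elevation - min_angle_deg) / resolution_deg).astype(int)

if __name__ == "__main__":
    # Example: a 16-line lidar covering -15..+15 degrees at 2-degree spacing (illustrative numbers).
    pts = np.array([[5.0, 0.0, 0.10], [3.0, 0.0, -0.80]])
    print(assign_scan_lines(pts, min_angle_deg=-15.0))
```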
Further, the step S32 includes the following sub-steps,
s321, after dividing the laser radar and the camera point cloud into the beam again according to the step S31, determining and distributing the beam information of each laser point and each depth camera point according to the divided beam;
s322, increasing point cloud reflection intensity information of the depth camera;
and S323, taking the timestamp issued by the laser radar point cloud as the timestamp of the fused point cloud.
Further, in the step S322, for depth camera points in the same spatial range as the laser radar point cloud, the reflection intensity is determined from the reflection intensity of the laser radar point nearest to the depth camera point; for depth camera points in a different spatial range from the laser radar point cloud, camera points below the laser radar area are uniformly assigned the reflection intensity of the laser radar striking the ground, and camera points above the laser radar area are uniformly assigned the reflection intensity of the laser radar striking a white wall.
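The intensity rule of step S322 can be sketched as follows: camera points that share the lidar's field of view copy the intensity of their nearest lidar return, while camera points below or above the lidar's coverage receive a fixed "ground" or "white wall" intensity. The SciPy KD-tree lookup and the two constant intensity values are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

GROUND_INTENSITY = 15.0      # assumed typical lidar return from the ground
WHITE_WALL_INTENSITY = 80.0  # assumed typical lidar return from a white wall

def assign_intensity(cam_pts, in_lidar_fov, below_lidar_fov, lidar_pts, lidar_intensity):
    """Assign a reflection intensity to every depth-camera point (all arrays in the lidar frame).

    in_lidar_fov and below_lidar_fov are boolean masks over cam_pts.
    """
    intensity = np.empty(len(cam_pts))
    # Shared field of view: copy the intensity of the nearest lidar point.
    _, nearest = cKDTree(lidar_pts).query(cam_pts[in_lidar_fov])
    intensity[in_lidar_fov] = lidar_intensity[nearest]
    # Outside the lidar field of view: constant ground / wall intensities.
    outside = ~in_lidar_fov
    intensity[outside & below_lidar_fov] = GROUND_INTENSITY
    intensity[outside & ~below_lidar_fov] = WHITE_WALL_INTENSITY
    return intensity
```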
Further, the step S4 includes the following sub-steps,
s41, preprocessing the fused point cloud, removing nonlinear motion distortion of the fused point cloud by inertial navigation assistance, and performing compensation correction on the point cloud in each frame;
s42, extracting the characteristics of the fused point cloud, and filtering non-characteristic point data by extracting the surface characteristics and the line characteristics of the fused point cloud, wherein the surface characteristics comprise ground characteristics and non-ground characteristics;
s43, carrying out frame image matching on the point cloud by an NDT (normalized difference transform) matching method to obtain motion pose information between two frame point clouds of the four-legged robot, wherein an initial point cloud image is first frame fused point cloud data until a point cloud key frame is accumulated to five frames, then gradually eliminating the previous point cloud frame, and determining the point cloud data as key frame point cloud data when the pose change of inertial navigation integration exceeds a certain threshold; the NDT assumes that the point cloud follows normal distribution, and assumes that the point cloud obtained by current scanning is
$X = \{x_1, x_2, \ldots, x_n\}$; the spatial transformation function $T(p, x_k)$ represents using the pose transformation $p$ to move the point $x_k$; the target likelihood function to be maximized is
$$\Psi = \prod_{k=1}^{n} f\big(T(p, x_k)\big),$$
carrying out nonlinear solution on the function to obtain the optimal matching pose, carrying out motion state update of iterative Kalman filtering, fusing information of inertial navigation pre-integration and information of interframe matching, and estimating zero offset errors of an accelerometer and a gyroscope in inertial navigation in time;
and S44, updating and storing the point cloud picture after each matching until the establishment of the point cloud picture of the whole environment is completed, and performing voxel filtering processing on the finally generated point cloud map.
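A minimal sketch of the NDT objective used in S43, under simplifying assumptions: the map is summarized by per-voxel Gaussian statistics, each scan point is moved by the candidate pose, and the negative log-likelihood of the transformed points under the corresponding Gaussians is accumulated; a nonlinear optimizer then searches for the pose that minimizes this score (i.e. maximizes the likelihood). The planar SE(2) pose and the voxel lookup scheme are illustrative choices, not the patent's exact formulation.

```python
import numpy as np

def ndt_score(pose, scan_xy, voxel_stats, voxel_size=1.0):
    """Negative log-likelihood of a 2-D scan under NDT voxel Gaussians.

    pose: (tx, ty, theta); scan_xy: (N, 2) points; voxel_stats: {(i, j): (mean, inv_cov)}.
    """
    tx, ty, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    moved = scan_xy @ R.T + np.array([tx, ty])        # T(p, x_k) applied to every scan point
    score = 0.0
    for p in moved:
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key in voxel_stats:
            mean, inv_cov = voxel_stats[key]
            d = p - mean
            score += 0.5 * d @ inv_cov @ d            # -log of the Gaussian, up to a constant
    return score  # minimizing this maximizes the NDT likelihood
```

In practice such a score would be handed to a Gauss-Newton or similar nonlinear solver, and the resulting pose fed into the iterative Kalman filter update described above.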
Further, the step S5 includes the following sub-steps,
s51, matching the point cloud subjected to fusion processing in the S2 and the global point cloud map by using a multi-thread NDT matching method to obtain the global pose of the quadruped robot in the environment;
s52, obtaining a smooth quadruped robot pose according to inertial navigation integral prediction and foot type odometer prediction, and performing truth value estimation on a predicted value and an observed value of a state vector according to a Kalman filtering formula, wherein the predicted value is the pose of the inertial navigation integral and the pose of the foot type odometer, and the observed value is the global pose matched with the real-time fusion point cloud in the step S51; and predicting and updating the pose of the quadruped robot after each frame of IMU and foot type odometer data are transmitted, correcting the pose of the quadruped robot by taking the matching pose of the point cloud as an estimation true value, and finally obtaining the pose of the quadruped robot in the convertor station.
The invention has the following advantages: 1) the characteristics of the laser radar and the depth camera are fully exploited, accurate positioning information can be obtained, and the method can adapt to the complex environment of a converter station; 2) compared with positioning techniques that fuse laser point clouds with images, it uses fewer computing resources, so the algorithm can run on a common computing platform; 3) the global pose is obtained by matching the laser radar point cloud against the three-dimensional environment map in real time, and the IMU and the foot-type odometer are then fused to obtain a high-frequency quadruped robot pose, so that the repeat-positioning accuracy of the quadruped robot in the converter station is within 6 cm, meeting the inspection requirements.
Drawings
Fig. 1 is a schematic diagram of a four-legged robot, a depth camera, and a lidar according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a frequency and a time stamp of issuing a laser radar point cloud to issue a fusion point cloud according to an embodiment of the present invention.
Fig. 3 is a multi-sensor fusion SLAM mapping framework according to an embodiment of the invention.
Fig. 4 is a cloud map of a converter station according to a first embodiment of the present invention.
Fig. 5 is a block diagram of a multi-sensor fusion positioning in accordance with an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A mapping and positioning method fusing point clouds of a laser radar and a depth camera comprises the following steps:
s1, laser point clouds and depth camera point clouds to be processed are obtained, laser points and depth camera points in an inaccurate measurement range are removed, and noise points in the laser point clouds and the depth camera point clouds are removed, wherein the laser point clouds and the depth point clouds have point clouds which belong to the same spatial range and do not belong to the same spatial range;
the above-mentioned step S1 includes the following sub-steps,
s11, a laser radar for collecting the initial laser point cloud and a depth camera for collecting the initial depth point cloud are installed on the same equipment; setting a device for synchronizing the time of collecting the point cloud of the laser radar and the point cloud of the depth camera;
s12, the laser radar and the depth camera have complementary effects on the accuracy of depth measurement, the laser radar has an advantage in measuring the depth of a farther object, and the depth camera has an advantage in measuring the depth of a closer object; therefore, the inaccurate depth point cloud measured in the set range of the depth camera is filtered out according to the characteristics of the adopted depth camera;
s13, filtering laser radar point clouds which are inaccurate in measurement of the laser radar within a set range and point clouds which are hit on laser radar equipment according to the characteristics of the adopted laser radar;
and S14, filtering the laser radar and the depth camera noise points.
S2, performing external reference calibration on the laser radar and the depth camera, determining the coordinate axis relation between the laser radar and the depth camera, and converting point cloud of the depth camera into a laser radar coordinate system;
the above-mentioned step S2 includes the following sub-steps,
s21, calibrating external parameters between the laser radar and the depth camera, wherein the external parameters comprise a rotation matrix R and a translational vector t from the depth camera to a laser radar coordinate system; preparing a black and white checkerboard, collecting a camera image and a laser radar point cloud of which complete checkerboard can be observed by a plurality of depth cameras and laser radars, matching the camera image and the laser radar point cloud approximately and synchronously according to time, leading the camera image and the laser radar point cloud into a Lidar toolkit in Matlab, enabling the toolkit to detect the checkerboard in the image, manually appointing the point cloud falling on the checkerboard in a laser radar point cloud frame, and calibrating the toolkit to obtain a rotation matrix R and a translational vector t from the depth cameras to a laser radar coordinate system;
s22, constructing a conversion function from the depth camera coordinate system to the laser radar coordinate system, wherein the conversion function is,
$$\begin{bmatrix} X_l \\ Y_l \\ Z_l \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} + t$$
wherein $X_l, Y_l, Z_l$ represent the three coordinate axes of the laser radar coordinate system and $X_c, Y_c, Z_c$ represent the depth camera coordinate system;
s23, converting the point cloud of the depth camera into a laser radar coordinate system according to the rotation matrix R and the translation vector t from the depth camera to the laser radar coordinate system, which are obtained by the calibration in the step S21, and the conversion function in the step S22;
and S24, determining the point clouds of the depth camera and the laser radar which belong to the same space range and the point clouds which do not belong to the same space range according to the field angles of the laser radar and the depth camera.
S3, according to the field angle of the depth camera and the laser radar, re-dividing the laser radar and the depth camera point cloud of the laser radar coordinate system obtained by conversion in the step 2 according to a scanning line of the laser radar, converting the depth camera point cloud according to the point cloud format of the laser radar, and fusing to form a frame of point cloud with the same format;
the above-mentioned step S3 includes the following sub-steps,
s31, according to the fused field angle of the laser radar and each depth camera, carrying out wire harness division on the laser radar point cloud and each depth camera point cloud by using the vertical angular resolution of the laser radar according to the concept of a laser radar scanning line;
s311, complementing the empty positions between laser radar wire bundles with the depth camera point cloud according to the point cloud determined in S24, wherein the point cloud belongs to the same space range as the laser radar point cloud, and recalculating the point cloud belonging to the wire bundle;
in the step S311, for the segmentation of the camera point cloud and the laser radar point cloud line bundle in the same spatial range, the camera point cloud with the angular resolution of plus or minus one-half of the laser radar line bundle is divided into the line bundle point cloud according to the laser radar resolution;
s312, according to the point cloud determined in the step S24, wherein the point cloud of the depth camera and the point cloud of the laser radar do not belong to the same space range, the original definition of the wire harness is reserved for the space only containing the point cloud of the laser radar, the wire harness is expanded upwards and downwards according to the angular resolution of the laser radar for the space only containing the point cloud of the depth camera, and the point cloud around the wire harness is determined as the point cloud of the wire harness;
in the step S312, for the space only containing the depth camera point cloud, the camera point cloud with the angular resolution of the extended line beam plus or minus one-half of the laser radar is divided into the line beam point cloud according to the laser radar resolution.
S32, converting the depth camera point cloud re-divided in the step S31 into a laser radar format point cloud, fusing the laser radar point cloud again to form a fused point cloud with the same format, and outputting the fused point cloud;
the step S31 includes the substeps of S321, determining and allocating the beam information of each laser point and depth camera point according to the divided beam after re-dividing the beam for the laser radar and camera point cloud according to the step S31;
s322, increasing point cloud reflection intensity information of the depth camera;
in the step S322, for the point cloud of the depth camera in the same space range as the point cloud of the laser radar, the reflection intensity is determined according to the reflection intensity of the laser radar point closest to the depth camera; for depth camera point clouds in different space ranges from the laser radar point clouds, the reflection intensity of the laser radar irradiating the ground is unified for distributing the reflection intensity to the depth camera point clouds in the camera point clouds below the laser radar area, and the reflection intensity of the laser radar irradiating the white wall is unified for distributing the reflection intensity information to the depth camera point clouds in the camera point clouds above the laser radar area;
and S323, taking the timestamp issued by the laser radar point cloud as the timestamp of the fused point cloud.
S4, based on the point cloud fused in the S3 and IMU sensor data fusion, carrying out environment mapping to obtain an environment point cloud map;
the above-mentioned step S4 includes the following sub-steps,
s41, preprocessing the fused point cloud, removing nonlinear motion distortion of the fused point cloud by inertial navigation assistance, and performing compensation correction on the point cloud in each frame; the fused point cloud is received after being reflected by the reflector, the quadruped robot moves for a certain distance in the period of transmitting to the received point cloud, and the coordinates of the received point can not truly reflect the real relative position of the obstacle and the robot. Therefore, the pose change of the quadruped robot in the period of time needs to be integrated through inertial navigation, and then the pose of the point is superposed with the walking pose of the quadruped robot integrated through inertial navigation to finally obtain the fusion point cloud with motion distortion filtered. Carrying out voxel filtering and deviation point filtering on the point clouds subjected to distortion removal, reducing the data volume of each frame of point cloud, and ensuring the accuracy of the point cloud data;
s42, extracting the features of the fused point cloud, filtering non-feature point data by extracting the surface features and the line features of the fused point cloud, wherein the surface features comprise ground features and non-ground features, reducing the data amount of point cloud frame matching, and accelerating the point cloud matching;
s43, carrying out frame image matching on the point clouds by an NDT (normalized difference test) matching method to obtain motion pose information between two frame point clouds of the quadruped robot, wherein the initial point cloud image is the first frame of fused point cloud data until the point clouds are subjected to frame image matching until the initial point cloud image is the first frame of fused point cloud dataAccumulating the point cloud key frames to five frames, gradually eliminating the previous point cloud frames, and determining point cloud data as key frame point cloud data when the pose change of the inertial navigation integral exceeds a certain threshold value; the NDT assumes that the point cloud obeys normal distribution, and in order to find a pose to maximize the probability that the current frame is located in the matching point cloud picture, the point cloud obtained by current scanning is assumed to be
$X = \{x_1, x_2, \ldots, x_n\}$; the spatial transformation function $T(p, x_k)$ represents using the pose transformation $p$ to move the point $x_k$; the target likelihood function to be maximized is
$$\Psi = \prod_{k=1}^{n} f\big(T(p, x_k)\big),$$
carrying out nonlinear solving on the function to obtain the optimal matching pose, carrying out the motion state update of the iterative Kalman filter, fusing the inertial navigation pre-integration information with the inter-frame matching information, and estimating in time the zero-offset errors of the accelerometer and the gyroscope in the inertial navigation; the estimated zero offset is subtracted before the inertial navigation measurements are integrated, so as to reduce the accumulated error of the inertial navigation integration;
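The bias handling mentioned above (subtracting the estimated zero offsets of the accelerometer and gyroscope before integration) can be sketched as follows; the strapdown attitude update and gravity compensation are deliberately simplified, and the symbol names are assumptions.

```python
import numpy as np

def integrate_imu(acc, gyro, dt, acc_bias, gyro_bias):
    """Integrate bias-corrected IMU samples into position, velocity and attitude increments.

    acc, gyro: (N, 3) raw measurements; acc_bias, gyro_bias: current zero-offset estimates
    (e.g. from the iterative Kalman filter); dt: sample period in seconds.
    """
    v = np.zeros(3)
    p = np.zeros(3)
    theta = np.zeros(3)
    for a, w in zip(acc, gyro):
        a_corr = a - acc_bias        # subtract the estimated accelerometer zero offset
        w_corr = w - gyro_bias       # subtract the estimated gyroscope zero offset
        theta += w_corr * dt         # small-angle attitude increment (simplified)
        p += v * dt + 0.5 * a_corr * dt ** 2
        v += a_corr * dt
    return p, v, theta
```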
s44, updating and storing the point cloud map after each matching until the establishment of the point cloud map of the whole environment is completed, and performing voxel filtering processing on the finally generated point cloud map to reduce the data volume of the whole point cloud map;
s5, matching the point cloud image established in the S4 based on the point cloud image fused in the S3 to obtain the absolute pose of the quadruped robot in the environment, and fusing the IMU sensor and the foot-type odometer to obtain the pose of the quadruped robot in the motion process;
the step S5 comprises the following substeps of S51, matching the point cloud fused in the step S2 with a global point cloud map by using a multi-thread NDT matching method to obtain the global pose of the quadruped robot in the environment;
s52, obtaining a smooth quadruped robot pose according to inertial navigation integral prediction and foot type odometer prediction, and performing truth value estimation on a predicted value and an observed value of a state vector according to a Kalman filtering formula, wherein the predicted value is the pose of the inertial navigation integral and the pose of the foot type odometer, and the observed value is the global pose matched with the real-time fusion point cloud in the step S51; and predicting and updating the pose of the quadruped robot after each frame of IMU and foot type odometer data are transmitted, correcting the pose of the quadruped robot by taking the matching pose of the point cloud as an estimation true value, and finally obtaining the high-frequency stable pose of the quadruped robot in the convertor station.
Example one
As shown in Fig. 1, a RealSense D435i depth camera facing 30 degrees downward, a RealSense D455 depth camera facing 10 degrees upward and a RoboSense RS-LiDAR-16 laser radar are mounted at the front and rear of the quadruped robot, and the four sensors are connected to the computing platform of the quadruped robot. According to the characteristics of the RealSense D435i depth camera, its accurate depth measurement range is 0.28 m to 2 m, so depth camera points outside this range are filtered out when the depth camera point clouds are read; according to the characteristics of the RoboSense RS-LiDAR-16 laser radar, laser radar points outside the range of 2.0 m to 80 m are filtered out. The depth camera and the laser radar thus have complementary ranging ranges, and the mounting angles of the depth cameras further enlarge the observation field of view of the quadruped robot.
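The range gating described in this embodiment (keeping depth camera points between 0.28 m and 2 m and laser radar points between 2.0 m and 80 m) amounts to a simple radial filter. The sketch below is one way to express it; the helper names are chosen only for illustration.

```python
import numpy as np

def range_filter(points: np.ndarray, min_range: float, max_range: float) -> np.ndarray:
    """Keep only points whose distance from the sensor lies inside [min_range, max_range]."""
    r = np.linalg.norm(points, axis=1)
    return points[(r >= min_range) & (r <= max_range)]

# Per-sensor limits quoted in this embodiment (RealSense D435i and RS-LiDAR-16).
def filter_camera(points):
    return range_filter(points, 0.28, 2.0)

def filter_lidar(points):
    return range_filter(points, 2.0, 80.0)
```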
The coordinate-axis relations between the three depth cameras on the quadruped robot and the laser radar are calibrated by the method of step S21, and the point clouds of the three depth cameras are converted into the laser radar coordinate system according to step S23. The vertical field of view of the laser radar is 30 degrees with a vertical angular resolution of 2 degrees, and the field of view of the depth camera is 58 degrees; the depth camera and laser radar point clouds that belong to the same spatial range, and those that do not, can therefore be determined from the fields of view of the laser radar and the depth cameras and their mounting angles on the quadruped robot. The line bundles of the RoboSense RS-LiDAR-16 laser radar are spaced at an angular resolution of 2 degrees; for point clouds in which the depth camera and the laser radar share the same spatial range, depth camera points within +/-1 degree of a laser radar line bundle are assigned to that bundle's point cloud, filling the gaps between the laser radar line bundles. For the space containing only the depth camera point cloud, camera points within +/-1 degree of an extended line bundle are divided into the new bundle's point cloud according to the 2-degree vertical angular resolution of the laser radar. After the line bundles are re-divided, they are finally expanded to 52 lines.
After the line bundles are divided, the depth camera point cloud is converted into the laser radar point cloud format: first, each depth camera point is assigned the line-bundle information of the newly divided bundles. Reflection intensity information is then added to the depth camera point cloud. For depth camera points in the same spatial range as the laser radar point cloud, the reflection intensity is taken from the nearest laser radar point; for depth camera points outside the spatial range of the laser radar point cloud, camera points below the laser radar area are uniformly assigned the reflection intensity of the laser radar striking the ground, and camera points above the laser radar area are uniformly assigned the reflection intensity of the laser radar striking a white wall.
And finally, issuing the fused point cloud by using the frequency and the time stamp issued by the laser radar point cloud, as shown in fig. 2.
And finally, mapping and positioning the fusion point cloud:
a mapping framework of the intelligent quadruped robot is constructed by fusing a laser radar, a depth camera and an IMU sensor, and the whole framework mainly comprises a sensor data receiving and preprocessing module, a point cloud feature extraction module, a laser odometer module, a loop detection module and a mapping module. The sensor data receiving and preprocessing module is divided into point cloud fusion of a depth camera and a laser radar and distortion removal of the point cloud, and the laser odometer module is divided into two parts of point cloud matching and IEKF data fusion.
Fig. 3 shows a multi-sensor fusion framework, which is mainly divided into the following five steps:
1) Sensor data reception and preprocessing. The point cloud data of the laser radar and the depth camera are fused, and the fused point cloud data are synchronized with the IMU inertial navigation data. Because the points in one point cloud frame are not acquired at the same moment, the point cloud becomes distorted while the quadruped robot moves; the relative rotation and relative displacement of the quadruped robot are therefore calculated from the IMU inertial navigation data and distortion compensation is applied to each point cloud frame. Meanwhile, the IMU inertial navigation data are pre-integrated and passed to the IEKF filter as the prediction of the quadruped robot's motion state;
2) Feature extraction. Point cloud feature extraction is mainly used for registering point cloud frames; fast and accurate matching is achieved by extracting surface features and line features, where the surface features include ground and non-ground features, filtering out a large amount of non-feature point data and reducing the data volume for point cloud frame matching;
3) Laser odometry. Frame-to-map matching is performed on the point cloud with the NDT (Normal Distributions Transform) matching method to obtain the motion pose information between two point cloud frames of the quadruped robot; after the motion pose information is obtained, the motion state of the IEKF is updated, the pose between two frames of the quadruped robot is predicted from the odometry information of the IMU inertial navigation pre-integration, and the zero-offset error of the IMU inertial navigation is estimated;
4) Loop closure detection. Candidate closed-loop matching frames are found among the historical key frames, matching between these frames and the point cloud map is performed, and the resulting pose transformation is used to optimize the overall point cloud map.
5) Mapping. The generated local point cloud map is provided to the laser odometry for frame-to-map matching, and the finally generated point cloud map is filtered and otherwise processed, then saved in the format required for global positioning of the quadruped robot.
finally, a converter station environment map established by the multi-sensor fusion SLAM building framework is shown in FIG. 4, and it can be seen that the expression of the map on the environment is very accurate.
Multi-sensor fusion positioning framework and implementation
After the global map of the converter station is established, in order to realize the intelligent inspection of the intelligent quadruped robot in the converter station, the accurate pose of the quadruped robot in the converter station needs to be obtained in real time. The global pose is obtained by matching the fused point cloud of the laser radar and the depth camera with the three-dimensional environment point cloud map of the convertor station, and the high-frequency four-footed robot pose is obtained by fusing the IMU and the foot-type odometer through the UKF. The specific framework is shown in fig. 5 as follows:
the multi-sensor fusion positioning framework of the intelligent quadruped robot is mainly divided into two steps of point cloud matching pose and UKF data fusion.
1) Point cloud matching for the pose. First, the laser radar and camera point clouds are fused, distortion compensation is applied to the fused point cloud according to the method in the mapping framework, and then the distortion-compensated fused point cloud is matched against the global point cloud map using a multi-threaded NDT matching method to obtain the real-time pose of the quadruped robot in the environment;
2) UKF data fusion. 2n+1 sigma points are generated around the state vector; the sigma points are propagated with the state transition matrix, and the mean and covariance matrix of the state vector are computed from the weights. The predicted measurement of each sigma point is obtained with the measurement matrix, and the predicted state vector is obtained from the sigma points and their weights. The predicted and observed values of the state vector are then combined into a true-value estimate according to the Kalman Filtering (KF) formulas. The pose of the quadruped robot is predicted and updated once each time a frame of IMU and foot-type odometer data arrives, and the pose is corrected using the point cloud matching pose as the estimated true value, finally yielding a high-frequency, stable pose of the quadruped robot in the converter station.
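The sigma-point step of the UKF fusion described above can be sketched as follows: 2n+1 sigma points are generated around the state mean using a matrix square root of the scaled covariance, and weighted sums of the propagated points recover the predicted mean and covariance. The scaling parameters alpha, beta and kappa are conventional defaults, not values specified in the patent.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their mean/covariance weights for a UKF."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])   # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    return pts, wm, wc

def unscented_mean_cov(pts, wm, wc):
    """Recover the predicted mean and covariance from propagated sigma points."""
    mean = wm @ pts
    diff = pts - mean
    cov = diff.T @ (wc[:, None] * diff)
    return mean, cov
```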
The above detailed description is intended to illustrate the present invention, not to limit the present invention, and any modifications and changes made within the spirit of the present invention and the scope of the claims fall within the scope of the present invention.

Claims (11)

1. A mapping and positioning method fusing point clouds of a laser radar and a depth camera is characterized by comprising the following steps:
s1, acquiring laser point clouds and depth camera point clouds to be processed, wherein the laser point clouds and the depth camera point clouds comprise point clouds which belong to the same spatial range and point clouds which do not belong to the same spatial range;
s2, performing external reference calibration on the laser radar and the depth camera, determining the coordinate axis relation between the laser radar and the depth camera, and converting point cloud of the depth camera into a laser radar coordinate system;
s3, according to the field angles of the depth camera and the laser radar, re-segmenting the laser radar and the depth camera point cloud of the laser radar coordinate system obtained through conversion according to a laser radar scanning line, converting the depth camera point cloud according to the point cloud format of the laser radar, and fusing to form a frame of point cloud with the same format;
s4, based on the point cloud fused in the S3 and IMU sensor data fusion, carrying out environment mapping to obtain an environment point cloud map;
and S5, matching the point cloud image established in the S4 based on the point cloud image fused in the S3 to obtain the absolute pose of the quadruped robot in the environment, and fusing the IMU sensor and the foot type odometer to obtain the pose of the quadruped robot in the motion process.
2. The method of claim 1, wherein the method comprises the steps of: the step S1 comprises the sub-steps of,
s11, a device for synchronizing the time of collecting the point cloud of the laser radar and the point cloud of the depth camera is arranged;
s12, filtering depth point clouds with inaccurate measurement in a set range of the depth camera;
s13, filtering laser radar point clouds with inaccurate measurement of the laser radar in a set range and point clouds on equipment for installing the laser radar;
and S14, filtering the laser radar and the depth camera noise points.
3. The method of claim 1, wherein the method comprises the steps of: said step S2 comprises the sub-steps of,
s21, calibrating external parameters between the laser radar and the depth camera, wherein the external parameters comprise a rotation matrix R and a translational vector t from the depth camera to a laser radar coordinate system; preparing a black and white checkerboard, collecting a camera image and a laser radar point cloud of which complete checkerboard can be observed by a plurality of depth cameras and laser radars, matching the camera image and the laser radar point cloud approximately and synchronously according to time, leading the camera image and the laser radar point cloud into a Lidar toolkit in Matlab, enabling the toolkit to detect the checkerboard in the image, manually appointing the point cloud falling on the checkerboard in a laser radar point cloud frame, and calibrating the toolkit to obtain a rotation matrix R and a translational vector t from the depth cameras to a laser radar coordinate system;
s22, constructing a conversion function from the depth camera coordinate system to the laser radar coordinate system, wherein the conversion function is,
$$\begin{bmatrix} X_l \\ Y_l \\ Z_l \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} + t$$
wherein $X_l, Y_l, Z_l$ represent the three coordinate axes of the laser radar coordinate system and $X_c, Y_c, Z_c$ represent the depth camera coordinate system;
s23, converting the point cloud of the depth camera into a laser radar coordinate system according to the rotation matrix R and the translation vector t from the depth camera to the laser radar coordinate system, which are obtained by the calibration in the step S21, and the conversion function in the step S22;
and S24, determining the point clouds of the depth camera and the laser radar which belong to the same space range and the point clouds which do not belong to the same space range according to the field angles of the laser radar and the depth camera.
4. The method of claim 3, wherein the method comprises the steps of: said step S3 comprises the sub-steps of,
s31, according to the fused field angle of the laser radar and each depth camera, carrying out wire harness division on the laser radar point cloud and each depth camera point cloud by using the vertical angular resolution of the laser radar according to the concept of a laser radar scanning line;
and S32, converting the depth camera point cloud re-divided in the step S31 into a laser radar format point cloud, fusing the laser radar point cloud again to form a fused point cloud with the same format, and outputting the fused point cloud.
5. The method of claim 4, wherein the method comprises the steps of: the step S31 includes the sub-steps of,
s311, complementing empty positions between laser radar wire harnesses by depth camera point clouds according to the point clouds, determined in the step S24, of the depth camera and the laser radar point clouds, wherein the point clouds belong to the same space range, and recalculating the point clouds belonging to the wire harnesses;
and S312, according to the point cloud determined in the step S24, wherein the point cloud of the depth camera and the point cloud of the laser radar do not belong to the same space range, the original definition of the line beam is reserved for the space only containing the point cloud of the laser radar, the line beam is expanded upwards and downwards at the angular resolution of the laser radar for the space only containing the point cloud of the depth camera, and the point cloud around the line beam is determined as the point cloud of the line beam.
6. The method of claim 5, wherein the method comprises the steps of: in the step S311, for the segmentation of the camera point cloud and the laser radar point cloud line bundle in the same spatial range, the camera point cloud with the angular resolution of plus or minus one-half of the laser radar line bundle is divided into the line bundle point cloud according to the laser radar resolution.
7. The method of claim 5, wherein the method comprises the steps of: in the step S312, for the space only containing the depth camera point cloud, the camera point cloud with the angular resolution of the extended beam plus or minus one half of the lidar angle resolution is divided into the beam point cloud according to the lidar resolution.
8. The method of claim 4, wherein the method comprises the steps of: the step S32 includes the sub-steps of,
s321, after dividing the laser radar and the camera point cloud into the beam again according to the step S31, determining and distributing the beam information of each laser point and each depth camera point according to the divided beam;
s322, increasing point cloud reflection intensity information of the depth camera;
and S323, taking the timestamp issued by the laser radar point cloud as the timestamp of the fused point cloud.
9. The method of claim 8, wherein the method comprises: in the step S322, for the point cloud of the depth camera in the same space range as the point cloud of the laser radar, the reflection intensity is determined according to the reflection intensity of the laser radar point closest to the depth camera; for depth camera point clouds in different space ranges from the laser radar point clouds, the reflection intensity of the laser radar irradiating the ground is unified for distributing the reflection intensity to the depth camera point clouds in the camera point clouds below the laser radar area, and the reflection intensity of the laser radar irradiating the white wall is unified for distributing the reflection intensity information to the depth camera point clouds in the camera point clouds above the laser radar area.
10. The method of claim 1, wherein the method comprises the steps of: said step S4 comprises the sub-steps of,
s41, preprocessing the fused point cloud, removing nonlinear motion distortion of the fused point cloud by using inertial navigation assistance, and performing compensation correction on the point cloud in each frame;
s42, extracting the features of the fused point cloud, and filtering non-feature point data by extracting surface features and line features of the fused point cloud, wherein the surface features comprise ground features and non-ground features;
s43, carrying out frame image matching on the point clouds by an NDT (normalized difference test) matching method to obtain motion pose information between two frame point clouds of the quadruped robot, wherein the initial point cloud image is the first frame of fused point cloud data and is straightAccumulating the point cloud key frames to five frames, gradually eliminating the previous point cloud frames, and determining point cloud data as key frame point cloud data when the pose change of the inertial navigation integral exceeds a certain threshold; the NDT assumes that the point cloud follows normal distribution, and assumes that the point cloud obtained by current scanning is
$X = \{x_1, x_2, \ldots, x_n\}$; the spatial transformation function $T(p, x_k)$ represents using the pose transformation $p$ to move the point $x_k$; the target likelihood function to be maximized is
$$\Psi = \prod_{k=1}^{n} f\big(T(p, x_k)\big),$$
carrying out nonlinear solving on the function to obtain an optimal matching pose, carrying out motion state updating of iterative Kalman filtering, fusing inertial navigation pre-integral information and interframe matching information, and estimating zero offset errors of an accelerometer and a gyroscope in inertial navigation in time;
and S44, updating and storing the point cloud picture after each matching until the establishment of the point cloud picture of the whole environment is completed, and performing voxel filtering processing on the finally generated point cloud map.
11. The method of claim 10, wherein the method comprises: the step S5 includes the sub-steps of,
s51, matching the point cloud subjected to fusion processing in the S2 and the global point cloud map by using a multi-thread NDT matching method to obtain the global pose of the quadruped robot in the environment;
s52, obtaining a smooth quadruped robot pose according to inertial navigation integral prediction and foot type odometer prediction, and performing truth value estimation on a predicted value and an observed value of a state vector according to a Kalman filtering formula, wherein the predicted value is the pose of the inertial navigation integral and the pose of the foot type odometer, and the observed value is the global pose matched with the real-time fusion point cloud in the step S51; and predicting and updating the pose of the quadruped robot after each frame of IMU and foot type odometer data are transmitted, correcting the pose of the quadruped robot by taking the matching pose of the point cloud as an estimation true value, and finally obtaining the pose of the quadruped robot in the convertor station.
CN202210951441.4A 2022-08-09 2022-08-09 Mapping and positioning method fusing point clouds of laser radar and depth camera Pending CN115330866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210951441.4A CN115330866A (en) 2022-08-09 2022-08-09 Mapping and positioning method fusing point clouds of laser radar and depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210951441.4A CN115330866A (en) 2022-08-09 2022-08-09 Mapping and positioning method fusing point clouds of laser radar and depth camera

Publications (1)

Publication Number Publication Date
CN115330866A true CN115330866A (en) 2022-11-11

Family

ID=83921761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210951441.4A Pending CN115330866A (en) 2022-08-09 2022-08-09 Mapping and positioning method fusing point clouds of laser radar and depth camera

Country Status (1)

Country Link
CN (1) CN115330866A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115755901A (en) * 2022-11-14 2023-03-07 杭州蓝芯科技有限公司 Mobile robot obstacle stopping control method and device
CN116358517A (en) * 2023-02-24 2023-06-30 杭州宇树科技有限公司 Height map construction method, system and storage medium for robot
CN116358517B (en) * 2023-02-24 2024-02-23 杭州宇树科技有限公司 Height map construction method, system and storage medium for robot
CN116839570A (en) * 2023-07-13 2023-10-03 安徽农业大学 Crop interline operation navigation method based on sensor fusion target detection
CN116839570B (en) * 2023-07-13 2023-12-01 安徽农业大学 Crop interline operation navigation method based on sensor fusion target detection
CN117928680A (en) * 2024-03-21 2024-04-26 青岛清万水技术有限公司 Automatic positioning method and system for transducer, electronic equipment and storage medium
CN117928680B (en) * 2024-03-21 2024-06-07 青岛清万水技术有限公司 Automatic positioning method and system for transducer, electronic equipment and storage medium
CN117953384A (en) * 2024-03-27 2024-04-30 昆明理工大学 Cross-scene multispectral laser radar point cloud building extraction and vectorization method
CN117953384B (en) * 2024-03-27 2024-06-07 昆明理工大学 Cross-scene multispectral laser radar point cloud building extraction and vectorization method

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN115330866A (en) Mapping and positioning method fusing point clouds of laser radar and depth camera
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN112987065B (en) Multi-sensor-integrated handheld SLAM device and control method thereof
CN111929699A (en) Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system
WO2022127532A1 (en) Method and apparatus for calibrating external parameter of laser radar and imu, and device
WO2023131123A1 (en) External parameter calibration method and apparatus for combined navigation device and laser radar
CN112465732A (en) Registration method of vehicle-mounted laser point cloud and sequence panoramic image
CN113947639B (en) Self-adaptive online estimation calibration system and method based on multi-radar point cloud line characteristics
CN113763548B (en) Vision-laser radar coupling-based lean texture tunnel modeling method and system
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN112767461A (en) Automatic registration method for laser point cloud and sequence panoramic image
CN115451964B (en) Ship scene simultaneous mapping and positioning method based on multi-mode mixing characteristics
CN112799096A (en) Map construction method based on low-cost vehicle-mounted two-dimensional laser radar
CN113763549A (en) Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN115639547A (en) Multi-line laser radar and GNSS-INS combined calibration method, system and medium
CN114413887A (en) Method, equipment and medium for calibrating external parameters of sensor
CN116295279A (en) Unmanned aerial vehicle remote sensing-based building mapping method and unmanned aerial vehicle
CN115728753A (en) External parameter calibration method and device for laser radar and integrated navigation and intelligent vehicle
CN113959437A (en) Measuring method and system for mobile measuring equipment
Zhao et al. Updating a digital geographic database using vehicle-borne laser scanners and line cameras
CN117541655A (en) Method for eliminating radar map building z-axis accumulated error by fusion of visual semantics
CN113739767A (en) Method for producing orthoimage aiming at image acquired by domestic area array swinging imaging system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination