CN115439621A - Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot - Google Patents

Publication number: CN115439621A
Authority
CN
China
Prior art keywords: point cloud, point cloud data, target, map
Legal status: Pending
Application number: CN202210954381.1A
Other languages: Chinese (zh)
Inventor
高珺
巨朝晖
刘再斌
李旭
刘晨光
王炳
薛宇强
Current Assignee: Xian Research Institute Co Ltd of CCTEG
Original Assignee: Xian Research Institute Co Ltd of CCTEG
Application filed by Xian Research Institute Co Ltd of CCTEG
Priority to CN202210954381.1A
Publication of CN115439621A
Legal status: Pending

Classifications

    • G06T17/05 — Geographic models (three-dimensional [3D] modelling)
    • G06T19/003 — Navigation within 3D models or images
    • G06V10/75 — Organisation of the matching processes; proximity measures in feature spaces
    • G06V10/764 — Recognition or understanding using classification, e.g. of video objects
    • G06V10/80 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06V2201/07 — Target detection

Abstract

The invention discloses a three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot, which comprises the following steps: step 1, acquiring a point cloud data set containing selected scene information; step 2, performing point cloud registration on the point cloud data by a Map-to-Map method to obtain the established three-dimensional map; step 3, positioning the inspection route of the inspection robot in real time according to the three-dimensional map; step 4, performing target detection with a fusion algorithm while the inspection robot travels; and step 5, optimally matching the target detection results obtained in step 4, classifying the detection results, and preferentially outputting them as obstacle targets. The invention can perform effective high-precision map construction even when the environment structure is complex; it also removes part of the redundant data, reducing the time complexity of the algorithm and improving mapping efficiency. The fusion algorithm takes time-domain information into account and fuses multi-frame detection results, which can effectively reduce the false detection rate and the missed detection rate caused by occlusion.

Description

Three-dimensional map reconstruction and target detection method for underground coal mine inspection robot
Technical Field
The invention belongs to the technical field of inspection robots, and relates to a three-dimensional map reconstruction and target detection method for an underground inspection robot of a coal mine.
Background
Since 2016, annual purchases of inspection robots have tended to rise exponentially. Most enterprises have realized the necessity of intelligent inspection and of replacing manual labour with robots; as industries become increasingly subdivided and scenario-specific, customers demand ever more customization, and in this rising phase of the trend the demand will inevitably grow year by year. At present, underground coal mines operate many manually driven fuel or electric vehicles for tasks such as material transportation, patrol inspection and personnel transfer, which suffer from low operating efficiency, high human-resource consumption and high operating cost. Meanwhile, traditional manual inspection involves high labour intensity, low working efficiency, low detection quality and poorly guaranteed personal safety; with a heavy inspection workload and the constraints of a harsh environment, neither the quality nor the quantity of inspection work can be guaranteed. Therefore, an unmanned inspection robot is the best choice to replace traditional manual inspection.
The most central problem for an unmanned inspection robot is localization: inaccurate localization leads to large errors in decision and control. Path planning for the inspection robot aims to find a collision-free optimal path from a starting point to an end point; in a controllable structured environment, the inspection robot has a certain autonomy and can sense the surrounding environment, detect obstacles, and plan a collision-free path to navigate to its destination. Document [1] describes the SLAM (Simultaneous Localization and Mapping) path-planning problem for inspection robots. Document [2] compares two SLAM schemes to derive the appropriate choice for different scenarios. Document [3] studies the use of probabilistic methods to optimize map-positioning accuracy and reduce computational complexity. Document [4] solves the direct matrix-solving problem in nonlinear optimization by using sparse pose adjustment. The study in document [5] found that the scan-matching problem in map positioning can be solved with the Gauss-Newton method, but this places high requirements on the sensor. Finally, document [6] analyses the Cartographer algorithm, which reduces accumulated error by applying closed-loop detection to the local subgraphs and the global graph simultaneously.
Meanwhile, obstacle detection by traditional unmanned inspection robots cannot meet the requirements of environment sensing, so multi-sensor fusion methods have been proposed to improve the accuracy, real-time performance and robustness of obstacle detection. One approach trains a convolutional neural network (Faster R-CNN) model on data collected by an actual robot, matches the set of center points at the lower edge of the bounding boxes from image target detection against the radar target-detection point set, and removes target pseudo point pairs with an improved iterative closest point (ICP) matching algorithm, thereby fusing image and three-dimensional point cloud data and effectively improving algorithm precision and efficiency. Another approach uses a laser radar to emit laser beams and receive target echoes, extracts the echo points at the shortest distance for cluster analysis, and judges whether they are obstacles; for image detection, a YOLO network is trained on a data set to generate target boxes, the radar detection boxes and the camera detection boxes are fused, and the percentage overlap of the bounding boxes is used as the criterion for judging an obstacle. This realizes decision-level fusion of laser radar and camera, and improves detection precision for vehicles and pedestrians by about 5%.
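The decision-level fusion described above hinges on the overlap of the radar detection box and the camera detection box. A minimal sketch of such an overlap (intersection-over-union) criterion — the function names, box layout `(x1, y1, x2, y2)`, and 0.5 threshold are illustrative assumptions, not taken from the patent:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_same_obstacle(radar_box, camera_box, threshold=0.5):
    # Declare an obstacle confirmed when the projected radar box and the
    # camera detection box overlap by more than the chosen threshold.
    return iou(radar_box, camera_box) >= threshold
```

With this criterion, a radar box and a camera box that overlap by more than half their combined area are treated as the same physical obstacle.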
In conclusion, how to construct a high-precision map and detect obstacles within it is currently a hot research problem in unmanned inspection. The main problems in the prior art are as follows: the construction of a high-precision map is limited by the sensors and algorithms, making mapping and positioning difficult in structurally complex environments; and traditional obstacle detection acquires large amounts of data with much redundancy, which leads to inaccurate estimation of the position of an obstacle in the real world.
The literature referred to is as follows:
[1] Chen Zhao, Su Weihua, Miao Ning, et al. Implementation of mobile robot SLAM with path planning under the ROS framework [J]. Medical and Health Equipment, 2017, 38(02): 109-113.
[2] Koelreuterin, Zhang Wei, Sun Wei, et al. High-precision positioning algorithm based on two-dimensional code vision and laser radar fusion [J]. 2021, 41(05): 1484-1491.
[3] Liu Liwei, Zhu Shunkang, Li Xiuhua, et al. Map evaluation research on a low-cost mobile robot 2D SLAM algorithm [J]. Computer Simulation, 2021, 38(04): 291-295+320.
[4] Zhao Luo, Guanzhi Wei, Ziyin Yong, et al. SLAM system function optimization design based on single-line laser radar [J]. Chinese Automobile, 2021(02): 4-9+43.
[5] Korean Hua. Dynamic environment navigation study of mobile robots based on single-line laser radar [D]. Harbin Institute of Technology, 2019.
[6] Ji Xing. Research on synchronous positioning and composition algorithms based on three-dimensional laser point cloud segmentation matching [D]. University of Electronic Science and Technology, 2020.
Disclosure of Invention
The invention aims to provide a three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot, so as to solve the technical problems of how to construct a high-precision map in a structurally complex environment and how to accurately estimate obstacles within that map.
In order to achieve the purpose, the invention adopts the technical scheme that:
a three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot comprises the following steps:
step 1, selecting a map construction scene, collecting and processing point cloud data of the selected scene, and acquiring a point cloud data set containing selected scene information;
step 2, point cloud registration is carried out on the point cloud data of the scene obtained in the step 1 by adopting a Map-to-Map method to obtain an established three-dimensional Map; the method specifically comprises the following substeps:
step 21, acquiring continuous n frames of point cloud data from the point cloud data set acquired in the step 1, establishing a local sub-map for the n frames of point cloud data, and taking the local sub-map as the current latest local sub-map;
step 22, acquiring continuous n frames of point cloud data from the remaining point cloud data in the point cloud data set again, and establishing a local subgraph Submap for the n frames of point cloud data;
step 23, performing loop detection in a backend process by using a global mode in a cartographer algorithm, and if a loop occurs, executing step 24; otherwise, abandoning the local subgraph established in the step 22 and returning to the step 22;
step 24, matching the local subgraph established in the step 22 with the current latest local subgraph to obtain a matched local subgraph, and updating the matched local subgraph to the current latest local subgraph; wherein, matching refers to overlapping two local subgraphs;
step 25, returning to execute the step 22 until all the point cloud data in the point cloud data set obtained in the step 1 are obtained, and obtaining the current latest local sub-map as the required three-dimensional map;
step 3, positioning the inspection route of the inspection robot in real time according to the three-dimensional map generated in the step 2;
step 4, carrying out target detection by using a fusion algorithm in the process of the traveling of the inspection robot;
and 5, optimally matching the target detection results obtained in the step 4, realizing classification of the detection results, and preferentially outputting the detection results as the obstacle targets.
Further, in the step 1, laser point clouds are collected by a 16-line laser radar using a preset command, and a bag file containing the selected scene information is generated; the bag file is then subjected to noise reduction and filtering to obtain the processed bag file, namely the point cloud data set containing the selected scene information.
Further, in step 23, the backend refers to the mapping-optimization part of the cartographer algorithm; the occurrence of a loop means that the similarity between the local subgraph established in step 22 and the current latest local subgraph is greater than a set weight (threshold).
Further, in the step 24, the overlaying of the two local subgraphs means discarding redundant point cloud frame data in the two local subgraphs, and splicing the two local subgraphs after discarding the redundant data to obtain a new local subgraph.
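The overlay of step 24 — discarding the point cloud frame data that the two local subgraphs share, then splicing the remainder into a new local subgraph — can be sketched as follows. The voxel-grid test used here to decide which points are "redundant" is an assumption; the patent does not specify how redundancy is detected:

```python
import numpy as np

def overlay_submaps(submap_a, submap_b, voxel=0.05):
    """Splice two local submaps (N x 3 point arrays) into one,
    discarding points of submap_b that fall into voxels already
    occupied by submap_a (treated as the 'redundant' data)."""
    occupied = {tuple(v) for v in np.floor(submap_a / voxel).astype(int)}
    keep = [p for p in submap_b
            if tuple(np.floor(p / voxel).astype(int)) not in occupied]
    if keep:
        return np.vstack([submap_a, np.asarray(keep)])
    return submap_a.copy()
```

Splicing the deduplicated points keeps the merged subgraph compact, which is the stated source of the reduced time complexity and improved mapping efficiency.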
Further, the step 3 includes the following sub-steps:
step 31, adopting an IMU inertial sensor to acquire pose information, and outputting the information as a csv file through a command;
step 32, selecting a coordinate system of the IMU inertial sensor as a positioning operation coordinate system;
and step 33, performing the traveling route positioning operation of the inspection robot by using the csv file generated in the step 31 and the three-dimensional map generated in the step 2 and matching with a positioning part source code of a Cartogrer-SLAM algorithm.
Further, the step 4 comprises the following sub-steps:
step 41, performing noise reduction processing on the point cloud data acquired in the step 1 through statistical filtering, and outputting noise-reduced point cloud information;
42, collecting image information of the scene selected in the step 1 with a Realsense D435i camera, inputting the image information into the CenterNet network of the fusion algorithm for target detection, and outputting the target center-point position and the target type C, the types being those of the KITTI data set;
and 43, inputting the point cloud data output in the step 41 into the CenterPoint network of the fusion algorithm for target detection, outputting the target 3D detection frame, distance and type C, and extracting the center point of the 3D target detection frame.
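The statistical filtering of step 41 can be sketched as a standard statistical outlier removal: compute each point's mean distance to its k nearest neighbours, then drop points whose mean distance exceeds the global mean by more than `std_ratio` standard deviations. The parameter names and values here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def statistical_filter(points, k=3, std_ratio=1.0):
    """Remove statistical outliers from an N x 3 point cloud."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)          # N x N pairwise distances
    dists.sort(axis=1)                             # ascending per row
    mean_knn = dists[:, 1:k + 1].mean(axis=1)      # skip self-distance (0)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]
```

A dense cluster of scan points survives the filter while an isolated stray return, whose neighbour distances are far above the mean, is discarded.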
Further, the step 5 comprises the following substeps:
step 51, pairing the target center-point positions obtained in step 42 with the target 3D detection-frame center-point positions obtained in step 43 one by one as input to the improved KNN algorithm, giving n × n point pairs in total; the target center points obtained in step 42 are denoted {a_i}, i = 1, …, n, the 3D detection-frame center points obtained in step 43 are denoted {b_j}, j = 1, …, n, and n is the number of selected center points;
step 52, calculating the Euclidean distance d_ij = ‖a_i − b_j‖ between the two center points of each point pair selected in step 51;
and step 53, among the Euclidean distances d_ij of all point pairs obtained in step 52, taking for each of the n center points the point pair with the minimum Euclidean distance as the optimal point pair, giving n optimal point pairs; each optimal point pair is fused and taken as a detected obstacle target, giving n obstacle targets.
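Steps 51 to 53 amount to forming all n × n center-point pairs, keeping for each image center point the pair at minimum Euclidean distance, and fusing each optimal pair into one obstacle target. A minimal sketch; fusing by midpoint and the 2D coordinates are illustrative choices, not mandated by the text:

```python
import numpy as np

def match_centers(image_centers, cloud_centers):
    """For each image-detection center point, find the point-cloud
    detection center at minimum Euclidean distance (steps 51-53)."""
    a = np.asarray(image_centers, float)           # n image center points
    b = np.asarray(cloud_centers, float)           # n point-cloud center points
    # n x n matrix of pairwise Euclidean distances (step 52)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    best = d.argmin(axis=1)                        # closest cloud center per row
    # Fuse each optimal pair into one obstacle target (here: midpoint).
    return [(a[i] + b[j]) / 2 for i, j in enumerate(best)]
```

With two image detections at (0, 0) and (10, 10) and two point-cloud detections at (1, 0) and (9, 10), the matcher pairs each image point with its nearest cloud point and outputs the fused centers.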
The beneficial effects of the invention are:
(1) Mapping part: based on the Cartographer algorithm, a Map-to-Map method is proposed to match local sub-map information for loop detection, which improves the matching degree of loop detection, removes redundant information between data frames, and improves map precision and mapping efficiency. The method is well suited to mobile-robot mapping. With the improved algorithm, efficient high-precision map construction is possible even when the environment structure is complex; part of the redundant data is also removed, reducing the time complexity of the algorithm and improving mapping efficiency;
(2) Target detection part: based on multi-sensor fusion, target detection and tracking are carried out on the image and the raw point cloud data respectively, the detection results are optimally matched with an improved KNN algorithm, the unmatched image detection center points are further matched via the projection of the point cloud onto the image, and the fused result is preferentially output. Sensor fusion improves the reliability of the data and the target detection precision; meanwhile, the fusion algorithm takes time-domain information into account and fuses multi-frame detection results, which can effectively reduce the false detection rate and the missed detection rate caused by occlusion.
Drawings
FIG. 1 illustrates a prior art error in back-end loop detection due to a front-end matching error;
FIG. 2 is a comparison of mapping effects (point cloud registration maps) before and after the improvement of the Cartographer algorithm;
FIG. 3 is a comparison of mapping effects before and after the improvement;
FIG. 4 is a comparison of CPU occupancy before and after the improvement on the same server;
FIG. 5 is a rear end closed loop test positioning effect under RVIZ;
FIG. 6 is a schematic view of an inspection robot;
FIG. 7 is a fusion algorithm framework;
FIG. 8 is a block diagram of a target center point matching structure;
FIG. 9 is a diagram showing the connection relationship between the center point set and the image detection center point;
FIG. 10 is an image detection test effect; wherein, (a) is a target shielding scene, (b) is a scene with uneven illumination, and (c) is a scene with a longer distance;
FIG. 11 is point cloud target detection; wherein, (a) is a target shielding scene, (b) is a scene with uneven illumination, and (c) is a scene with a longer distance;
FIG. 12 shows the detection results of the laser radar in different scenarios (detection regression 3D box); wherein, (a) is a target shielding scene, (b) is a scene with uneven illumination, and (c) is a scene with a longer distance;
FIG. 13 shows the result of the point cloud and image detection fusion; wherein, (a) is an image detection result, and (b) is a fusion algorithm detection result.
The invention is further explained below with reference to the drawings and the description of embodiments.
Detailed Description
SLAM mapping and localization
1.1 graph optimization Algorithm
The Cartographer algorithm is a graph-optimized SLAM algorithm: the front-end pose of the graph-optimized SLAM is adjusted through loop detection at the back end, so that the robot pose closest to the true value is finally obtained. The graph-optimization SLAM problem is decomposed into two tasks:
(1) Constructing a graph, wherein the poses of the robot are used as vertexes, and the relationship between the poses is used as an edge (accumulation of front-end-sensor information);
(2) Optimizing a graph, and adjusting the pose vertex of the robot to meet the constraint of an edge (back-end);
1.1.1 Cartographer Algorithm
The Cartographer algorithm is an open-source algorithm proposed by Google in 2016. It reduces the amount of computation, meets the mapping requirements of large spaces, and can optimize large-scale data in real time. The whole algorithm is divided into two parts: Local SLAM (front-end matching) and Global SLAM (back-end closed loop). The invention improves the Local front end, which, in cooperation with the back-end closed-loop stage of the Cartographer algorithm, improves the accuracy of the reconstructed map.
Its disadvantage is that when the environment structure is similar, the collected point cloud data frames show similar repetitions; during back-end closed-loop detection a closed loop then cannot be formed, and map building fails.
Fig. 1 shows that the back-end detection cannot be closed due to the front-end matching error.
1.1.2 Point cloud registration
Therefore, on the basis of the Cartographer algorithm, a new laser radar point cloud registration method, Map-to-Map, is proposed for point cloud registration. After a number of frames of point cloud data are obtained, a Submap (local sub-map) is established; after each local sub-map is established, it is matched with the most recently established sub-map, which avoids the back-end closed-loop detection errors caused by frame matching of point cloud data from similar environment structures. The specific process is shown in fig. 2.
2. Target detection
Joint calibration of the laser radar and the depth camera is required before target detection; joint calibration is the basis of data fusion. Through joint calibration the conversion matrix between the camera coordinate system and the laser radar coordinate system is obtained, the data of the radar and the camera are fused, and the resulting data information is more comprehensive and accurate. Fig. 6 is the joint calibration diagram.
The invention discloses a three-dimensional map reconstruction and target detection method of a coal mine underground inspection robot, which comprises the following steps:
step 1, selecting a map construction scene, collecting and processing point cloud data of the selected scene, and acquiring a point cloud data set containing the selected scene information: laser point clouds are collected by a 16-line laser radar (RoboSense model) using a preset command (rosbag record + topic name), and a bag file containing the selected scene information is generated; the bag file is then noise-reduced and filtered (statistical filtering algorithm) to obtain the processed bag file;
step 2, carrying out point cloud registration on the point cloud data of the scene obtained in the step 1 by adopting a Map-to-Map method to obtain an established three-dimensional Map; as shown in fig. 2, the specific operation is as follows:
step 21, acquiring continuous n frames of point cloud data from the point cloud data set acquired in the step 1, establishing a local sub-map for the n frames of point cloud data, and taking the local sub-map as the current latest local sub-map;
step 22, acquiring continuous n frames of point cloud data again from the remaining point cloud data in the point cloud data set, and establishing a local subgraph Submap for the n frames of point cloud data;
step 23, performing loop detection in a backend process by using a global mode in a cartographer algorithm, and if a loop occurs, executing step 24; otherwise, abandoning the local subgraph established in the step 22 and returning to the step 22;
the back end refers to the mapping-optimization part of the cartographer algorithm; the occurrence of a loop means that the similarity between the local subgraph established in the step 22 and the current latest local subgraph is greater than a set weight (threshold), which can be chosen by the user;
step 24, matching the local subgraph established in the step 22 with the current latest local subgraph to obtain a matched local subgraph, and updating the matched local subgraph to the current latest local subgraph; the matching refers to overlapping the two local subgraphs (namely, discarding redundant point cloud frame data in the two local subgraphs and splicing the two local subgraphs after discarding the redundant data to obtain a new local subgraph);
in the matching process of the two local sub-maps, the pose conversion formula when the point cloud data of the local sub-map is inserted into the current latest local sub-map established in step 22 is as follows:

T_ξ p = [ cos ξ_θ, −sin ξ_θ ; sin ξ_θ, cos ξ_θ ] p + ( ξ_x , ξ_y )^T

wherein ξ = (ξ_x, ξ_y, ξ_θ) represents the pose of each point cloud in the point cloud data set containing scene information, (ξ_x, ξ_y) is the translation of the point cloud in the x and y directions, and ξ_θ is the amount of planar rotation; T_ξ p is the pose of the point cloud in the current local sub-map; p is the probability value of the existence of an obstacle and can be set by the user.
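The pose conversion applies a planar rotation by ξ_θ followed by a translation by (ξ_x, ξ_y). A short numerical sketch; treating p as a 2D point being inserted into the sub-map frame is an interpretation, since the translated text is ambiguous about p:

```python
import math

def transform_point(xi, p):
    """Apply pose xi = (xi_x, xi_y, xi_theta) to a 2D point p:
    T_xi p = R(xi_theta) p + (xi_x, xi_y)."""
    xi_x, xi_y, xi_theta = xi
    c, s = math.cos(xi_theta), math.sin(xi_theta)
    px, py = p
    return (c * px - s * py + xi_x, s * px + c * py + xi_y)
```

For example, a 90-degree rotation takes the point (1, 0) to (0, 1), after which the translation (1, 2) places it at (1, 3) in the sub-map frame.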
And 25, returning to execute the step 22 until all the point cloud data in the point cloud data set obtained in the step 1 are obtained, and obtaining the current latest local sub-image which is the required three-dimensional map.
Constructing the three-dimensional map by Map-to-Map point cloud registration in step 2 avoids the weakness of the existing method, in which each frame containing the selected scene information data is inserted into the established subgraph as soon as it is obtained, so that point cloud data with similar environment structures are frame-matched and the back-end closed-loop detection goes wrong.
Step 3, positioning the routing inspection route of the routing inspection robot in real time according to the three-dimensional map generated in the step 2; the specific operation is as follows:
step 31, connecting an IMU inertial sensor to a computer to acquire pose information, and outputting the information as a csv file through a command;
step 32, selecting a coordinate system of the IMU inertial sensor as a positioning operation coordinate system;
and step 33, positioning the traveling route of the inspection robot using the csv file generated in step 31 and the three-dimensional map generated in step 2, in cooperation with the positioning-part source code of the Cartographer-SLAM algorithm (inputting the absolute path of the file in the source code and modifying the parameters of the configuration file). The result of fig. 5 is obtained (the solid line in the figure is the trajectory of the positioning operation).
Step 4, in the process of advancing of the inspection robot, performing target detection by using a fusion algorithm, wherein the specific operation is as follows:
and step 41, performing noise reduction processing on the point cloud data acquired in the step 1 through statistical filtering, and outputting noise-reduced point cloud information.
Step 42, collecting image information of the scene selected in step 1 with a Realsense D435i camera, inputting the image information into the CenterNet network of the fusion algorithm for target detection, and outputting the target center-point position and the target type C, the types being those of the KITTI data set;
and 43, inputting the point cloud data output in step 41 into the CenterPoint network of the fusion algorithm for target detection, outputting the target 3D detection frame, distance and type C, and extracting the center point of the 3D target detection frame.
Specifically, as shown in fig. 6, the inspection robot is 95 cm high and its chassis is 25 cm high. The laser radar and the camera are both installed on the inspection robot, the camera directly below the laser radar, and the IMU inertial sensor is installed on the chassis of the inspection robot.
And step 5: the image target center-point positions and the point cloud target center-point positions are output separately in step 4. Among these center points there are various detection cases, including targets that are correctly detected and tracked, false detections, and the like. The target detection results obtained in step 4 are optimally matched with the improved KNN algorithm, the detection results are classified, and the preferred results are output as obstacle targets, specifically as follows:
and step 51, respectively selecting the target center point position obtained in the step 42 and the target 3D detection frame center point position obtained in the step 43 to be matched into point pairs one by one to be used as an improved KNN algorithm input, and obtaining n multiplied by n point pairs in total. Wherein the position of the center point of the target obtained in step 42 is represented as
Figure BDA0003790595860000081
The position of the center point of the target 3D detection frame obtained in step 43 is represented as
Figure BDA0003790595860000082
n is the number of the selected central points;
step 52, calculating the Euclidean distance between two central points in each point pair selected in the step 51;
step 53 of selecting the Euclidean distances of all the point pairs obtained in step 52
Figure BDA0003790595860000083
And taking the point pair with the minimum Euclidean distance of the corresponding n point pairs as an optimal point pair to obtain n optimal point pairs, and fusing each optimal point pair, wherein the fusing refers to merging two points in the point pair into one point (optionally taking one point as the central point position of the point pair) to be used as the detected obstacle target to obtain n obstacle targets.
In the process of fusing the center points, the current-frame image target points and the target detection-frame center points in the point cloud are fused: the black points are the set of target detection center points in the point cloud data and the white points are the center points of the image detection results (see fig. 9); the two are fused into the same target center point through the closest-distance matching of step 53, and the connection relationship is shown in fig. 8.
To verify the feasibility and effectiveness of the invention, the following tests were performed:
1. SLAM mapping and localization
To show that the improved Cartographer algorithm builds better maps when environment structures are similar, the experimental site simulated a coal mine underground environment for collecting laser point cloud data. Maps were constructed with the algorithm before and after the improvement, and mapping time efficiency, mapping effect, and mapping accuracy were compared (mapping was run on the collected and processed bag files; mapping efficiency is shown in FIG. 4). The improved algorithm outperforms the original.
(1) Mapping effect comparison:
FIG. 3 compares the mapping effect before and after the improvement:
Before improvement: where environment structures are similar, loop detection errors occur, so map construction accuracy is low;
After improvement: even in complex environments with similar structures, loop detection still succeeds and loop closure is achieved.
(2) Mapping efficiency comparison
CPU occupancy during mapping before and after the algorithm improvement (on the same server) was compared to obtain the mapping efficiency in each case.
The mapping experiment used a four-core processor. As seen from FIG. 4, CPU occupancy before the improvement is higher than after; after the improvement, CPU occupancy is lower and stabilizes as time increases. Under the same experimental environment, the improved Cartographer algorithm is therefore better suited to mobile robot mapping than the original.
(3) Back-end loop closure localization
A new lidar point cloud registration method, Map-to-Map, is proposed: after several frames of point cloud data are obtained, a submap (local subgraph) is built, point cloud registration is performed, and redundant data are removed; the result is passed to the back end for loop closure detection and localization. FIG. 5 shows the localization effect of back-end loop closure detection under RVIZ:
Because the above steps improve point cloud matching, the resulting three-dimensional map achieves a better effect than before and meets the accuracy requirement as input to the localization operation, as can be seen from the trajectory line depicted in RVIZ.
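The Map-to-Map flow (build a submap from n consecutive frames, test it for a loop against the latest submap, merge while discarding redundant data) can be sketched as follows. This is a minimal illustration only: the voxel-overlap similarity test stands in for Cartographer's global loop detection at the back end, and all names, thresholds, and the voxel size are assumptions.

```python
import numpy as np

def build_map(frames, n=5, sim_thresh=0.3, voxel=0.1):
    """Simplified sketch of Map-to-Map registration (steps 21-25).

    frames: list of (k, 3) point cloud frames. Every n consecutive frames
    form a local submap; a new submap is merged into the latest map only
    when a loop is detected. Loop detection is approximated here by
    voxel-overlap similarity, whereas the patent relies on Cartographer's
    global loop detection at the back end.
    """
    def voxelize(pts):
        # Quantize points to integer voxel indices for overlap tests.
        return set(map(tuple, np.floor(pts / voxel).astype(int)))

    submaps = [np.vstack(frames[i:i + n]) for i in range(0, len(frames), n)]
    latest = submaps[0]                      # step 21: current latest submap
    for sub in submaps[1:]:                  # step 22: next local submap
        vl = voxelize(latest)
        vs = voxelize(sub)
        similarity = len(vl & vs) / max(len(vs), 1)
        if similarity > sim_thresh:          # step 23: loop detected
            # Step 24: discard redundant points already covered by the
            # latest submap, then splice the remainder in.
            keep = [p for p in sub
                    if tuple(np.floor(p / voxel).astype(int)) not in vl]
            if keep:
                latest = np.vstack([latest, np.array(keep)])
        # otherwise the new submap is discarded (step 23, no loop)
    return latest                            # step 25: the final 3D map
```

A real implementation would register the two submaps (e.g. by ICP) before splicing; the sketch shows only the submap bookkeeping and redundancy removal described in the text.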
2. Target detection
(1) After training and processing on the KITTI data set, three sets of data from different vehicle driving scenes were selected for testing, with target category and position labeled; the image detection test effect is shown in FIG. 9;
(2) Camera-dependent CenterNet image target detection has significant limitations: in scenes where targets are occluded, and for occluded vehicles under the uneven illumination of FIG. 10, obvious missed detections occur. Because the camera's detection results are affected by illumination conditions, measurement accuracy drops sharply and many targets are missed; the test results show that camera-only target detection can hardly meet the requirements of autonomous mining trucks in open-pit mines.
(3) FIGS. 11(a), (b), and (c) show lidar detection results in different scenes. The detection boxes in FIG. 11(a) show that image detection misses a target that the lidar detects. As seen in FIG. 11(b), the target position detected by the lidar deviates from the actual target position: because point cloud data are discretely distributed, the positions and spacing of data points are irregular in three-dimensional space, points of the same class are hard to select, and the target position is inaccurate. Also in FIG. 11(b), because the point cloud data themselves are irregular, the positions of the red and green detection boxes deviate in different scenes. To obtain more accurate target position information, the image detection results are fused with the point cloud detection results.
(4) Across the test scenes, image detection shows obvious missed detections, while point cloud target detection can effectively locate and box distant and blurred targets, as shown in FIG. 12(b). Compared with image-only detection, the fusion algorithm detects 7 more occluded vehicles; under occlusion, uneven illumination, and similar conditions, the fusion algorithm clearly reduces the missed detection rate.
Conclusions:
(1) Mapping and localization: the Graph-SLAM (graph optimization) mapping algorithm, i.e., the Cartographer algorithm, is studied, and its Scan-to-Map (frame to local subgraph) point cloud registration method is analyzed. When environment structures are similar, loop detection errors reduce map construction accuracy; meanwhile, redundant data between frames lowers mapping efficiency. To address these problems, a new lidar point cloud registration method, Map-to-Map (local subgraph to local subgraph) registration, is proposed on the basis of this algorithm: point cloud data are acquired to construct local subgraphs, and registration is performed between local subgraphs to carry out loop detection. This improves map matching while removing redundant information between data frames, thereby improving map accuracy and mapping efficiency, and is better suited to mobile robot mapping.
(2) Obstacle detection: the invention provides a lidar and camera decision-layer fusion method. Point cloud data are acquired with a 16-line lidar and a Realsense D435 depth camera; the lidar and camera are jointly calibrated, and the point cloud data and image data are projected into the same coordinate system. Combined with a target detection algorithm, obstacles in the image are returned as target center points (with depth, size, and category), and obstacles in the point cloud are returned as a set of target center points. The target center points of the image and point cloud data are fused by a KNN (K-Nearest Neighbor) method to achieve optimal matching of the detection results, and a high-reliability fusion result is finally output.
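As a minimal sketch of the "project into the same coordinate system" step of the joint calibration, the following assumes a pinhole camera model with an extrinsic matrix `T_cam_lidar` and intrinsic matrix `K` already obtained from calibration; the names are illustrative, not the patent's.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project lidar points (N, 3) into pixel coordinates (M, 2).

    T_cam_lidar: 4x4 extrinsic matrix (lidar frame -> camera frame),
                 assumed to come from the joint calibration.
    K: 3x3 camera intrinsic matrix.
    Points behind the camera (z <= 0) are dropped.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]                  # camera frame
    cam = cam[cam[:, 2] > 0]                                # in front of camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                           # perspective divide
```

Once the point cloud box centers are projected, they live in the same coordinate system as the image detection centers and can be matched with the KNN procedure of step 5.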

Claims (7)

1. A three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot is characterized by comprising the following steps:
step 1, selecting a map construction scene, collecting and processing point cloud data of the selected scene, and acquiring a point cloud data set containing selected scene information;
step 2, carrying out point cloud registration on the point cloud data containing the scene information acquired in the step 1 by adopting a Map-to-Map method to obtain an established three-dimensional Map; the method specifically comprises the following substeps:
step 21, acquiring continuous n frames of point cloud data from the point cloud data set acquired in the step 1, establishing a local sub-map for the n frames of point cloud data, and taking the local sub-map as the current latest local sub-map;
step 22, acquiring continuous n frames of point cloud data from the remaining point cloud data in the point cloud data set again, and establishing a local subgraph Submap for the n frames of point cloud data;
step 23, performing loop detection in the backend process by using a global mode in the cartographer algorithm, and if a loop appears, executing step 24; otherwise, abandoning the local subgraph established in the step 22 and returning to the step 22;
step 24, matching the local subgraph established in the step 22 with the current latest local subgraph to obtain a matched local subgraph, and updating the matched local subgraph to the current latest local subgraph; wherein, matching refers to overlapping two local subgraphs;
step 25, returning to execute step 22 until all the point cloud data in the point cloud data set obtained in step 1 are obtained, and obtaining the current latest local sub-map as the required three-dimensional map;
step 3, positioning the routing inspection route of the routing inspection robot in real time according to the three-dimensional map generated in the step 2;
step 4, carrying out target detection by using a fusion algorithm in the process of the traveling of the inspection robot;
and 5, optimally matching the target detection results obtained in the step 4, realizing classification of the detection results, and preferentially outputting the detection results as the obstacle targets.
2. The coal mine underground inspection robot three-dimensional map reconstruction and target detection method according to claim 1, characterized in that in the step 1, a 16-line laser radar is used to collect laser point clouds via a preset command, generating bag files containing the selected scene information; the bag files are then subjected to noise reduction and filtering to obtain the processed bag file, namely the point cloud data set containing the selected scene information.
3. The coal mine underground inspection robot three-dimensional map reconstruction and target detection method according to claim 1, wherein in the step 23, the rear end refers to a cartographer algorithm map construction optimization operation part; the occurrence of the loop means that the similarity between the local sub-graph established in step 22 and the current latest local sub-graph is greater than a certain weight.
4. The coal mine underground inspection robot three-dimensional map reconstruction and target detection method according to claim 1, wherein in the step 24, the overlapping of the two local sub-graphs means discarding redundant point cloud frame data in the two local sub-graphs, and splicing the two local sub-graphs after discarding redundant data to obtain a new local sub-graph.
5. The coal mine underground inspection robot three-dimensional map reconstruction and target detection method according to claim 1, wherein the step 3 comprises the following substeps:
step 31, adopting an IMU inertial sensor to acquire pose information, and outputting the pose information to a csv file through a command;
step 32, selecting a coordinate system of the IMU inertial sensor as a positioning operation coordinate system;
and step 33, performing the traveling route positioning operation of the inspection robot by using the csv file generated in step 31 and the three-dimensional map generated in step 2, in combination with the positioning source code of the Cartographer SLAM algorithm.
6. The coal mine underground inspection robot three-dimensional map reconstruction and target detection method according to claim 1, wherein the step 4 comprises the following substeps:
step 41, performing noise reduction processing on the point cloud data acquired in the step 1 through statistical filtering, and outputting noise-reduced point cloud information;
step 42, collecting image information of the scene selected in step 1 with a Realsense D435i camera, inputting the image information into the CenterNet network of the fusion algorithm for target detection, and outputting the target center point position and the target category C, wherein the categories are those of the KITTI data set;
and step 43, inputting the point cloud data output in step 41 into the CenterPoint network of the fusion algorithm for target detection, outputting the target 3D detection frame, distance, and category C, and extracting the center point of the 3D target detection frame.
7. The coal mine underground inspection robot three-dimensional map reconstruction and target detection method according to claim 1, wherein the step 5 comprises the following substeps:
step 51, respectively selecting the target center point position obtained in step 42 and the target 3D detection frame center point position obtained in step 43 to be paired into point pairs one by one to be used as an improved KNN algorithm input, and obtaining n multiplied by n point pairs in total; wherein the position of the center point of the target obtained in step 42 is represented as
Figure FDA0003790595850000022
The position of the center point of the target 3D detection frame obtained in step 43 is represented as
Figure FDA0003790595850000021
n is the number of the selected central points;
step 52, calculating the Euclidean distance between two central points in each point pair selected in the step 51;
step 53, among the Euclidean distances of all the point pairs obtained in step 52, selecting for each P_yi the point pair with the minimum Euclidean distance among its n corresponding point pairs as an optimal point pair to obtain n optimal point pairs, and fusing each optimal point pair as a detected obstacle target to obtain n obstacle targets.
CN202210954381.1A 2022-08-10 2022-08-10 Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot Pending CN115439621A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210954381.1A CN115439621A (en) 2022-08-10 2022-08-10 Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210954381.1A CN115439621A (en) 2022-08-10 2022-08-10 Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot

Publications (1)

Publication Number Publication Date
CN115439621A true CN115439621A (en) 2022-12-06

Family

ID=84243534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210954381.1A Pending CN115439621A (en) 2022-08-10 2022-08-10 Three-dimensional map reconstruction and target detection method for coal mine underground inspection robot

Country Status (1)

Country Link
CN (1) CN115439621A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115877855A (en) * 2023-03-03 2023-03-31 天津滨电电力工程有限公司 Intelligent power inspection robot adaptive to environmental path planning and inspection method
CN116448115A (en) * 2023-04-07 2023-07-18 连云港杰瑞科创园管理有限公司 Unmanned ship probability distance map construction method based on navigation radar and photoelectricity
CN116448115B (en) * 2023-04-07 2024-03-19 连云港杰瑞科创园管理有限公司 Unmanned ship probability distance map construction method based on navigation radar and photoelectricity


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination