CN116518984B - Vehicle road co-location system and method for underground coal mine auxiliary transportation robot - Google Patents


Info

Publication number
CN116518984B
CN116518984B (application CN202310814856.1A)
Authority
CN
China
Prior art keywords
robot
positioning
coordinate system
auxiliary
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310814856.1A
Other languages
Chinese (zh)
Other versions
CN116518984A (en)
Inventor
李猛钢
张运通
胡琨
周公博
唐超权
胡而已
周坪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology CUMT
Original Assignee
China University of Mining and Technology CUMT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology CUMT filed Critical China University of Mining and Technology CUMT
Priority to CN202310814856.1A priority Critical patent/CN116518984B/en
Publication of CN116518984A publication Critical patent/CN116518984A/en
Application granted granted Critical
Publication of CN116518984B publication Critical patent/CN116518984B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652: Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656: Inertial navigation combined with passive imaging devices, e.g. cameras
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3833: Creation or updating of map data characterised by the source of data
    • G01C21/3841: Data obtained from two or more sources, e.g. probe vehicles
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a vehicle-road cooperative positioning system and method for an underground coal mine auxiliary transportation robot, belonging to the field of underground coal mine positioning. The system comprises a roadway end recognition positioning device, a natural semantic landmark auxiliary positioning device, a robot multi-source information fusion navigation device, and a mine auxiliary transportation monitoring platform. The roadway end recognition positioning device and the natural semantic landmark auxiliary positioning device each communicate with the robot multi-source information fusion navigation device and with the mine auxiliary transportation monitoring platform, transmitting positioning information in real time; the robot multi-source information fusion navigation device communicates with the mine auxiliary transportation monitoring platform to complete positioning and navigation visualization. Based on a vehicle-road cooperation mechanism, the system positions the auxiliary transportation robot in an absolute coordinate system, effectively improving positioning accuracy and enabling unmanned, autonomous operation of coal mine auxiliary transportation robots.

Description

Vehicle road co-location system and method for underground coal mine auxiliary transportation robot
Technical Field
The invention relates to the technical field of underground coal mine positioning, in particular to a vehicle-road co-positioning system and method for an underground coal mine auxiliary transportation robot.
Background
Coal mine auxiliary transportation refers to all underground transportation other than coal itself, mainly including mine personnel, equipment and materials, and gangue; it is an important component of coal production.
In recent years, China has been vigorously advancing the automated and intelligent construction of coal mine auxiliary transportation. However, because GPS signals are blocked underground, common positioning schemes such as GPS combined with inertial navigation cannot be used, and existing approaches such as positioning based on wireless sensor networks suffer from low accuracy, high construction cost and an inability to provide continuous positioning. Moreover, owing to the environment and sensor degradation present in underground coal mines, emerging SLAM (simultaneous localization and mapping) technology cannot meet the long-term stable positioning requirements of auxiliary transportation robots. Underground positioning of the auxiliary transportation robot has therefore become a key problem restricting the intelligent construction and unmanned development of coal mines.
Disclosure of Invention
In order to solve the above problems, the invention provides a vehicle-road cooperative positioning system and method for the underground coal mine auxiliary transportation robot. Based on a vehicle-road cooperation mechanism, real-time information interaction between the auxiliary transportation robot and the roadway end is realized, the positioning accuracy of the auxiliary transportation robot is effectively improved, and unmanned, autonomous transportation by the underground coal mine auxiliary transportation robot is realized.
In order to achieve the above purpose, the invention provides a coal mine underground auxiliary transportation robot vehicle road co-location system, comprising:
the roadway end recognition positioning device, built at the roadway end, which collects RGB color images, depth images and point cloud information of the auxiliary transportation robot and, based on voting-based six-degree-of-freedom pose estimation, completes recognition and positioning of the robot in an absolute coordinate system from the collected images and point clouds;
the natural semantic landmark auxiliary positioning device, mounted on the auxiliary transportation robot, which realizes positioning of the auxiliary transportation robot in the absolute coordinate system from the collected RGB color images, depth images and point cloud information of natural semantic landmarks, using a correspondence-based six-degree-of-freedom pose estimation method;
the robot multi-source information fusion navigation device, carried on the auxiliary transportation robot, which constructs laser, visual and encoder/IMU (inertial measurement unit) dead-reckoning odometers from the various sensors, and realizes real-time multi-source-fusion positioning, mapping and autonomous navigation of the robot in the absolute coordinate system based on the roadway end recognition positioning information, the natural semantic landmark auxiliary positioning information and the odometers;
The mine auxiliary transportation monitoring platform is used for constructing a roadway 3D point cloud model and dynamically updating the six-degree-of-freedom pose of the auxiliary transportation robot to finish monitoring and scheduling of the auxiliary transportation robot;
the roadway end recognition positioning device and the natural semantic landmark auxiliary positioning device each communicate with the mine auxiliary transportation monitoring platform through the underground wired optical fiber ring network and 5G/WiFi6, and also with the robot multi-source information fusion navigation device via 5G/WiFi6 and optical fiber; the robot multi-source information fusion navigation device communicates with the mine auxiliary transportation monitoring platform via 5G/WiFi6.
Preferably, the roadway end recognition positioning device comprises a roadway end sensing unit for shooting and scanning the auxiliary transport robot, a roadway end edge computing unit connected with the roadway end sensing unit and a roadway end communication unit connected with the roadway end edge computing unit;
the roadway end sensing unit comprises an RGB-D camera and a laser radar sensor, and is used for shooting and scanning to obtain RGB color images, depth images and point cloud information of the auxiliary transport robot;
the roadway end edge computing unit comprises a roadway end GPU and a roadway end computing module connected with the roadway end GPU, and is used for processing RGB color images, depth images and point cloud information of the auxiliary transport robot, and positioning the auxiliary transport robot under an absolute coordinate system in real time based on a six-degree-of-freedom pose estimation algorithm of a voting method.
Preferably, the natural semantic landmark auxiliary positioning device comprises a natural semantic landmark, a robot end sensing unit for sensing the natural semantic landmark, a robot end computing unit and a robot end communication unit;
the natural semantic landmarks are pieces of underground equipment whose object coordinate system models have been calibrated, and are used for auxiliary positioning after being recognized by the robot end sensing unit;
the robot end sensing unit comprises an RGB-D camera and a laser radar, and is used for capturing RGB color images and depth images of, and scanning point cloud information of, the natural semantic landmarks;
the robot end computing unit comprises a robot end GPU and a robot end computing module connected with the robot end GPU, and is used for processing the natural semantic road sign RGB color image, the depth image and the point cloud information, and calculating the positioning information of the auxiliary transportation robot under an absolute coordinate system in real time based on a six-degree-of-freedom pose estimation algorithm of the corresponding point.
Preferably, the robot multi-source information fusion navigation device comprises a sensing unit, an operation unit, a communication unit and an execution unit;
the sensing unit comprises an external sensor for acquiring point cloud data, color images, depth images and infrared depth images of the surroundings of the auxiliary transport robot in real time, and an internal sensor for acquiring state information of the auxiliary transport robot in real time; the external sensor comprises a laser radar, a millimeter wave radar, an RGB-D camera and a depth infrared camera, while the internal sensor is an inertial sensor for detecting the acceleration, angular velocity and angular displacement of the auxiliary transport robot;
The operation unit comprises an operation GPU, a positioning and map building calculation module and an autonomous navigation calculation module, wherein the positioning and map building calculation module is connected with the operation GPU and used for processing sensor data of the sensing unit in real time, running a multi-source information fusion SLAM method based on factor map optimization and completing positioning and map building of the auxiliary transportation robot in real time; the autonomous navigation calculation module is used for completing bottom path planning according to the current position and the target point, completing track tracking and autonomous obstacle avoidance according to the kinematic model and the dynamic model, and calculating a corresponding motion control instruction;
and the execution unit comprises a servo motor and an execution controller and is used for executing corresponding actions according to the motion control instruction sent by the operation unit to complete the track tracking of the robot.
Preferably, the robot end communication unit, the roadway end communication unit and the robot multi-source information fusion navigation device all comprise an internal communication module and an external communication module, wherein the internal communication module adopts optical fiber communication and the external communication module adopts 5G communication;
the internal communication module is used for data transmission between the roadway end sensing unit and the roadway end edge computing unit, data transmission between the robot end sensing unit and the robot end computing unit, and data transmission between the natural semantic landmark auxiliary positioning device and the robot multi-source information fusion navigation module, and between the roadway end identification positioning device and the mine auxiliary transportation monitoring platform;
The external communication module is used for data transmission of the roadway end identification positioning device and the robot multi-source information fusion navigation module and transmission of positioning information between the natural semantic landmark auxiliary positioning device and the robot multi-source information fusion navigation module and the mine auxiliary transportation monitoring platform.
Preferably, the mine auxiliary transportation monitoring platform comprises a monitoring unit, an auxiliary transportation scheduling unit and a monitoring end communication unit;
the monitoring unit comprises a video monitoring module, a positioning visualization module and a scene model database. The video monitoring module performs real-time target recognition with an image recognition network on the video streams acquired by the roadway end cameras, monitoring the auxiliary transportation robot in real time; the positioning visualization module aggregates the overall roadway information at the data-stream level, constructs the roadway 3D point cloud model and dynamically updates the six-degree-of-freedom pose of the auxiliary transport robot;
the auxiliary transportation scheduling unit is used for integrating the positioning information and the auxiliary transportation task requirements of the robot multi-source information and completing the monitoring and scheduling of the auxiliary transportation robot;
the monitoring end communication unit adopts optical fiber and WiFi communication and is used for receiving positioning information of the roadway end identification positioning device and the natural semantic landmark auxiliary positioning device and finishing information interaction with the robot multi-source information fusion navigation device.
A vehicle-road cooperative positioning method for the underground coal mine auxiliary transportation robot comprises the following steps:
S1, roadway end camera deployment and natural semantic landmark model calibration;
S2, roadway end recognition and positioning;
S3, natural semantic landmark auxiliary positioning;
S4, multi-source information fusion SLAM, completing real-time positioning and mapping in the absolute coordinate system;
S5, autonomous navigation.
Preferably, the step S1 specifically includes the following steps:
arranging roadway end cameras in underground areas prone to scene and sensor degradation during robot positioning and mapping, extracting and constructing models of underground equipment as natural semantic landmarks, and calibrating the natural semantic landmark models with the object coordinate system calibration method;
the step S2 specifically comprises the following steps:
acquiring RGB color images, depth images and point cloud information of the auxiliary transportation robot through the roadway end camera, computing in real time the positioning information of the robot in the absolute coordinate system with the recognition and positioning method based on voting-based six-degree-of-freedom pose estimation, and transmitting the positioning information to the positioning and mapping calculation module in real time;
the step S3 specifically comprises the following steps:
acquiring RGB color images, depth images and point cloud information of the natural semantic landmarks through the robot end camera, obtaining the position information of the auxiliary transportation robot in the absolute coordinate system with the auxiliary positioning method based on natural semantic landmarks, and transmitting the positioning information to the positioning and mapping calculation module in real time;
The step S4 specifically comprises the following steps:
based on the inter-frame constraints of the radar point cloud information from the sensing unit of the robot multi-source information fusion navigation device, the constraint of the IMU information within each inter-frame interval, the inter-frame constraints of the camera image information, the loop detection constraint between two frames, and the global positioning constraints of the roadway end recognition positioning device and of the natural semantic landmark auxiliary positioning device, residual terms are respectively constructed as factors, and the objective function of the least squares problem is constructed by the factor graph optimization method as follows:

$$\min_{X}\;\sum\|r_L\|^2+\sum\|r_I\|^2+\sum\|r_C\|^2+\sum\|r_{loop}\|^2+\sum\|r_G\|^2$$

where $r_L$, $r_I$, $r_C$, $r_{loop}$ and $r_G$ denote the radar, IMU, camera, loop detection and global positioning residuals, respectively. Solving this least squares problem completes the multi-source information fusion SLAM, yielding real-time positioning and mapping in the absolute coordinate system;
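The structure of this factor-graph least squares problem can be sketched numerically. The toy below is illustrative only (the keyframes, factors and measurement values are hypothetical, not from the patent): it stacks one inter-frame odometry factor and two absolute-positioning factors for two 2D keyframes and solves the resulting problem with SciPy.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 2D toy: state X = positions of two keyframes.
# Factors mirror the structure described above: one inter-frame
# (odometry) factor and two absolute ("roadway-end"/"landmark") factors.
odom_meas = np.array([1.0, 0.0])    # measured frame-to-frame displacement
abs_meas_0 = np.array([0.0, 0.0])   # absolute fix for keyframe 0
abs_meas_1 = np.array([1.1, 0.05])  # absolute fix for keyframe 1

def residuals(x):
    p0, p1 = x[:2], x[2:]
    r_odom = (p1 - p0) - odom_meas   # inter-frame constraint
    r_g0 = p0 - abs_meas_0           # global-positioning factor, frame 0
    r_g1 = p1 - abs_meas_1           # global-positioning factor, frame 1
    return np.concatenate([r_odom, r_g0, r_g1])

sol = least_squares(residuals, x0=np.zeros(4))  # Gauss-Newton-style solve
p0_opt, p1_opt = sol.x[:2], sol.x[2:]
```

With more factor types (IMU, camera, loop closure) the residual vector simply grows; dedicated solvers such as GTSAM or Ceres additionally exploit the sparsity of the factor graph.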
wherein: the inter-frame constraint of the radar point cloud information means that the point cloud is divided into surface feature points and line feature points according to the curvature of the radar point cloud, and the residual term $r_{surf}$ of the surface feature points and the residual term $r_{line}$ of the line feature points between adjacent frames are constructed as:

$$r_{line}=\frac{\left\|(p_i-p_a)\times(p_i-p_b)\right\|}{\left\|p_a-p_b\right\|},\qquad r_{surf}=\left(p_i-p_a\right)\cdot n$$

where $p_i$ is a feature point of the current frame transformed into the coordinate system of the previous frame, $p_a$ and $p_b$ are the nearest line feature points of the previous frame, and $n$ is the unit normal of the plane fitted to the nearest surface feature points of the previous frame;
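The two geometric residuals for line and surface feature points can be written directly; this is a minimal sketch of the standard point-to-line and point-to-plane distances (the function names are the editor's, not from the patent):

```python
import numpy as np

def point_to_line_residual(p, a, b):
    """Distance from feature point p to the line through map points a and b."""
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(a - b)

def point_to_plane_residual(p, a, n):
    """Signed distance from feature point p to the plane through a with normal n
    (n is normalized here, so it need not be unit length on input)."""
    return np.dot(p - a, n / np.linalg.norm(n))
```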
the constraint of the IMU information within an inter-frame interval means that, based on the state transition equation of the IMU, the IMU residual term over the interval between frame k-1 and frame k is constructed as shown below:

$$r_{IMU}=\left[\,r_p,\;r_v,\;r_q,\;\delta b_a,\;\delta b_g\,\right]^{T}$$

where $r_p$, $r_v$, $r_q$, $\delta b_a$ and $\delta b_g$ respectively denote the error values of the translation pre-integration, the velocity pre-integration, the rotation pre-integration, the accelerometer bias and the gyroscope bias;
In the intervalIn, the speed pre-integral residual term formula is as follows:
wherein ,representing the acceleration of gravity>Representing the rotation matrix at time t>Representing accelerometer measurements in IMU coordinate system at time t,/->Represents the acceleration zero offset at time t +.>Representing the calculated acceleration white noise at time t, < >>Representing the speed at time t,/-, and>representing a time interval;
over the interval $[k-1,k]$, the translation pre-integration residual term is as follows:

$$r_p=R_{k-1}^{T}\left(p_k-p_{k-1}-v_{k-1}\,\Delta t+\tfrac{1}{2}g\,\Delta t^{2}\right)-\iint_{t\in[k-1,k]}R_t\left(\hat a_t-b_{a_t}-n_a\right)dt^{2}$$

where $p_t$ denotes the position of the IMU coordinate system in the world coordinate system at time t;
over the interval $[k-1,k]$, the rotation pre-integration residual term is as follows:

$$r_q=\mathrm{Log}\!\left(\Delta R_{k-1,k}^{T}\,R_{k-1}^{T}R_k\right),\qquad \Delta R_{k-1,k}=\prod_{t\in[k-1,k]}\mathrm{Exp}\!\left(\left(\hat\omega_t-b_{g_t}-n_g\right)\delta t\right)$$

where $\hat\omega_t$ is the gyroscope measurement obtained in the IMU coordinate system, $b_{g_t}$ is the gyroscope zero bias in the IMU coordinate system at time t, and $n_g$ is the gyroscope white noise in the IMU coordinate system at time t;
the inter-frame constraint of the camera image information means that a residual term is constructed between the observation of the k-th frame and the observation predicted by the observation model from the current pose, as follows:

$$r_C=z_k-h\!\left(X_k\right)$$

where $z_k$ is the observation data at the current time k, and $h(X_k)$ is the observation obtained from the observation equation based on the current pose $X_k$;
the loop detection constraint is a constraint established between two poses with similar observations. Let the two point cloud frames with similar observations be the source point cloud $P^{s}$ and the target point cloud $P^{t}$. For each point $p_i^{s}$ in $P^{s}$, the closest corresponding point $p_j^{t}$ in $P^{t}$ is found by searching for the minimum Euclidean distance; after the correspondences are found, the residual function is constructed as follows:

$$f\!\left(R_k,t_k\right)=\sum_{i}\left\|\,p_j^{t}-\left(R_k\,p_i^{s}+t_k\right)\right\|^{2}$$

where $R_k$ and $t_k$ are respectively the rotation matrix and the translation vector at time k;
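The correspondence search and alignment residual for the loop detection constraint can be sketched with a brute-force nearest-neighbour search (a real system would use a k-d tree; the function name is illustrative):

```python
import numpy as np

def loop_residual(src, tgt, R, t):
    """For each source point, find the nearest target point (brute force)
    and return the per-point alignment residuals || q - (R p + t) ||."""
    src_t = (R @ src.T).T + t                    # transform source cloud
    d = np.linalg.norm(src_t[:, None, :] - tgt[None, :, :], axis=2)
    nn = d.argmin(axis=1)                        # nearest-neighbour indices
    return np.linalg.norm(tgt[nn] - src_t, axis=1)
```

When the candidate (R, t) is correct, the residuals vanish; the optimizer drives them toward zero together with the other factors.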
the global positioning constraint of the roadway end recognition positioning device takes the pose measurement of the roadway end recognition positioning device as a factor and constructs the residual term as follows:

$$r_G=\left(R_{wa}\,x_t+t_{wa}\right)-z_t$$

where $R_{wa}$ is the rotation component of the transformation from the relative coordinate system established in the SLAM process to the absolute coordinate system of the roadway end recognition positioning device, $t_{wa}$ is the translation component of that transformation, $x_t$ is the coordinate estimated by the laser odometer at time t, and $z_t$ is the pose measured by the roadway end recognition positioning device in the absolute coordinate system.
The global positioning constraint of the natural semantic landmark auxiliary positioning device takes the pose measurement of the natural semantic landmark auxiliary positioning device as a factor and constructs the residual term as follows:

$$r_{G}'=\left(R_{wa}'\,x_t'+t_{wa}'\right)-z_t'$$

where $R_{wa}'$ is the rotation component of the transformation from the relative coordinate system established in the SLAM process to the absolute coordinate system of the natural semantic landmark auxiliary positioning device, $t_{wa}'$ is the translation component of that transformation, $x_t'$ is the coordinate estimated by the SLAM odometer at time t, and $z_t'$ is the pose measured by the natural semantic landmark auxiliary positioning device in the absolute coordinate system;
the step S5 specifically comprises the following steps:
the autonomous navigation calculation module generates a trajectory library offline to simulate the trajectories the auxiliary transportation robot may follow over a future time horizon; from the sensor information of the sensing unit it computes the possibility of collision between the space swept by each trajectory and the obstacles, selects the optimal path in real time, and computes the corresponding motion control commands according to the kinematic and dynamic models of the auxiliary transportation robot;
in step S5, real-time monitoring is performed through the mine auxiliary transportation monitoring platform: the transport status of the robot, and whether any accident has occurred, are observed in real time through the video streams of the roadway end cameras and the six-degree-of-freedom pose of the robot, and 3D visualization of the underground roadway is completed by fusing the SLAM-constructed map with the multi-source information.
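The trajectory-library selection in S5 can be sketched as collision filtering plus goal-distance scoring. The function below is a toy stand-in (the collision radius, cost and names are illustrative assumptions, not the patent's planner):

```python
import numpy as np

def score_trajectories(traj_lib, obstacles, goal, robot_radius=0.5):
    """Pick the index of the collision-free trajectory ending closest to goal.
    traj_lib: list of (N, 2) arrays of future 2D poses; obstacles: (M, 2) points."""
    best, best_cost = None, np.inf
    for i, traj in enumerate(traj_lib):
        d = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=2)
        if d.min() <= robot_radius:      # trajectory sweeps through an obstacle
            continue
        cost = np.linalg.norm(traj[-1] - goal)
        if cost < best_cost:
            best, best_cost = i, cost
    return best
```

A real planner would also weight the cost by the kinematic/dynamic feasibility of each trajectory before emitting the motion command.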
Preferably, the object coordinate system calibration method in step S1 means placing visual tags around the object to be calibrated and acquiring data by moving a camera around it. After the object information is completely acquired, the depth information of each frame is converted into a point cloud using the intrinsic transformation, and each frame of point cloud is projected to the initial frame to realize three-dimensional reconstruction of the natural semantic landmark, yielding the object CAD model. The extrinsic parameters of the initial-frame camera coordinate system are then calibrated with a total station and a calibration board, completing the calibration of the natural semantic landmark CAD model, i.e. the object coordinate system calibration;
calibrating the extrinsic parameters of the initial-frame camera coordinate system with the total station and the calibration board specifically comprises: reading the feature point coordinates of the calibration board with the total station; obtaining a calibration board image and depth image with the robot end camera; extracting corners with the OpenCV corner extraction function; and converting the depth values at the corresponding positions through the intrinsic parameter transformation to obtain the three-dimensional coordinates of the corners in the camera coordinate system. The coordinates of the corners in the world coordinate system are measured with the total station, and the pose transformation between the camera coordinate system and the world coordinate system is obtained by matching the corresponding feature points, completing the extrinsic calibration of the camera coordinate system;
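The back-projection and corner-matching steps can be sketched as pinhole back-projection followed by a Kabsch/SVD rigid fit (a generic technique; in practice the pixel corners would come from cv2.findChessboardCorners, and the variable names here are the editor's):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of a pixel corner and its depth into the camera frame."""
    return np.array([(u - cx) / fx * depth, (v - cy) / fy * depth, depth])

def extrinsics_from_corners(P_cam, P_world):
    """Kabsch/SVD fit of R, t with P_world ~ R @ P_cam + t from matched corners."""
    mu_c, mu_w = P_cam.mean(0), P_world.mean(0)
    H = (P_cam - mu_c).T @ (P_world - mu_w)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, mu_w - R @ mu_c
```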
the positioning method based on voting-based six-degree-of-freedom pose estimation in step S2 is as follows: after the roadway camera is calibrated in the absolute coordinate system with a total station and the robot CAD model is obtained, semantic segmentation is performed on the RGB color images and point cloud information of the auxiliary transportation robot with a pre-trained semantic segmentation network, extracting the mask image and mask point cloud of the auxiliary transportation robot; according to the mask image, the depth values at the corresponding positions of the depth image are converted into point cloud data based on the camera intrinsics. The pose of the auxiliary transportation robot is then estimated from the segmented image and point cloud information with a six-degree-of-freedom pose estimation deep learning network, yielding candidate six-degree-of-freedom poses with confidences; the pose with the highest confidence is optimized, and the positioning information of the robot in the absolute coordinate system is finally obtained from the pose transformation between the robot object coordinate system and the camera coordinate system and the pose transformation between the camera coordinate system and the absolute coordinate system;
The auxiliary positioning method based on the natural semantic signpost in the step S3 comprises the following steps: after the calibration of a natural semantic landmark object coordinate system is realized by using a total station, a natural semantic landmark CAD model is obtained, the characteristic extraction is carried out by calculating a point characteristic histogram of the natural semantic landmark CAD model through offline processing, and then a color image and point cloud of the natural semantic landmark of the current frame are segmented on line by using a pre-trained semantic segmentation network to obtain an image mask and a point cloud mask; according to the mask image obtained by segmentation, converting the depth value of the corresponding position in the depth image of the auxiliary transport robot into point cloud information based on camera internal parameters; extracting features of the natural semantic landmark point cloud information, primarily estimating the pose of the current frame by matching point pairs of the natural semantic landmark point cloud of the current frame and the features of the natural semantic landmark CAD model, performing pose optimization on the primarily estimated pose again by an ICP algorithm to obtain pose transformation of the robot relative to a natural semantic landmark object coordinate system, and finally obtaining the pose of the robot with six degrees of freedom under an absolute coordinate system by the pose transformation.
Preferably, in step S3, when underground illumination fails to reach the set condition, the semantic segmentation network is likely to produce noise, which reduces its segmentation accuracy and causes false recognition, thereby affecting positioning accuracy; therefore, a recognition network is used to preprocess the robot RGB image and the natural semantic landmark RGB image in the environment, and after the bounding box is acquired, semantic segmentation is performed on the cropped picture inside the bounding box, thereby reducing the semantic segmentation error and improving positioning accuracy.
The invention has the following beneficial effects:
1. Under the coal mine, based on a vehicle-road coordination mechanism (vehicle-road coordination adopts technologies such as advanced wireless communication and the new-generation Internet to realize omnidirectional, dynamic, real-time vehicle-vehicle and vehicle-road information interaction, and carries out active vehicle safety control and coordinated road management on the basis of full-time-space dynamic traffic information acquisition and fusion), positioning of the auxiliary transportation robot in an absolute coordinate system is realized through natural semantic landmark auxiliary positioning and roadway-end recognition positioning. The pose of the robot in the absolute coordinate system can be obtained through real-time information fusion between the auxiliary transportation robot and the robot-end laser, visual, inertial and other odometers, which effectively improves the positioning accuracy of the robot, facilitates fusing the robot position and the environment map into a GIS system, and realizes unmanned, autonomous transportation by the coal mine auxiliary transportation robot.
2. Based on the roadway recognition positioning information and the natural-semantic-landmark-based auxiliary positioning information, the multi-source information fusion SLAM based on factor graph optimization has low construction cost, can eliminate the accumulated error generated during the SLAM process, and achieves accurate, stable positioning and underground coal mine map construction.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a block diagram of a coal mine underground auxiliary transport robot vehicle co-location system;
FIG. 2 is a flow chart of a method of a coal mine underground auxiliary transport robot vehicle co-location system of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings, and it should be noted that, while the present embodiment provides a detailed implementation and a specific operation process on the premise of the present technical solution, the protection scope of the present invention is not limited to the present embodiment.
As shown in fig. 1, a coal mine underground auxiliary transportation robot vehicle road co-location system includes:
the roadway end recognition positioning device is built at the roadway end and is used for collecting RGB color images, depth images and point cloud information of the auxiliary transportation robot and completing recognition and positioning of the robot in an absolute coordinate system according to the collected RGB color images, depth images and point cloud information of the auxiliary transportation robot based on six-degree-of-freedom pose estimation of a voting method;
the auxiliary positioning device of the natural semantic roadmap is mounted on the auxiliary transport robot and is used for realizing the positioning of the auxiliary transport robot under an absolute coordinate system based on a six-degree-of-freedom pose estimation method of corresponding points according to the RGB color map, the depth image and the point cloud information of the acquired natural semantic roadmap;
The robot multi-source information fusion navigation device is loaded on the auxiliary transportation robot and is used for respectively constructing a laser, a vision, an encoder/IMU (inertial measurement unit) dead reckoning odometer according to various sensors, and realizing real-time positioning, map building and autonomous navigation of the robot multi-source information fusion under an absolute coordinate system based on roadway end identification positioning information, natural semantic landmark auxiliary positioning information and the odometer;
the mine auxiliary transportation monitoring platform is used for constructing a roadway 3D point cloud model, dynamically updating the six-degree-of-freedom pose of the auxiliary transportation robot and completing monitoring and scheduling of the auxiliary transportation robot;
the roadway end identification positioning device and the natural semantic landmark auxiliary positioning device are respectively communicated with the mine auxiliary transportation monitoring platform through underground wired optical fiber ring networks and 5G/WiFi6, and are also respectively communicated with the robot multi-source information fusion navigation device based on the 5G/WiFi6 and the optical fibers, and the robot multi-source information fusion navigation device is communicated with the mine auxiliary transportation monitoring platform based on the 5G/WiFi 6.
Specifically, the roadway end recognition positioning device comprises a roadway end sensing unit for shooting and scanning the auxiliary transport robot, a roadway end edge computing unit connected with the roadway end sensing unit and a roadway end communication unit connected with the roadway end edge computing unit; the roadway end sensing unit comprises an RGB-D camera and a laser radar sensor, and is used for shooting and scanning to obtain RGB color images, depth images and point cloud information of the auxiliary transport robot; the roadway end edge computing unit comprises a roadway end GPU and a roadway end computing module connected with the roadway end GPU, and is used for processing RGB color images, depth images and point cloud information of the auxiliary transport robot, and positioning the auxiliary transport robot under an absolute coordinate system in real time based on a six-degree-of-freedom pose estimation algorithm of a voting method.
Preferably, the natural semantic landmark auxiliary positioning device comprises a natural semantic landmark, a robot-end sensing unit for sensing the natural semantic landmark, a robot-end computing unit and a robot-end communication unit; the natural semantic landmark is underground equipment whose model object coordinate system has been calibrated, and is used for auxiliary positioning after being recognized by the robot-end sensing unit; the robot-end sensing unit comprises an RGB-D camera and a laser radar for capturing the RGB color image, depth image and point cloud information of the natural semantic landmark; the robot-end computing unit comprises a robot-end GPU and a robot-end computing module connected with the robot-end GPU, and is used for processing the natural semantic landmark RGB color image, depth image and point cloud information and calculating the positioning information of the auxiliary transportation robot in the absolute coordinate system in real time based on a correspondence-point six-degree-of-freedom pose estimation algorithm.
The natural semantic landmark in this embodiment is downhole equipment (downhole natural features or roadway mechanical equipment whose object coordinate system has been calibrated and which are distinctive, fixed in position and not easily damaged, including but not limited to ventilators, air doors, power supply equipment, hydraulic supports and the like) that can be recognized through the robot-end computing and communication units to assist positioning.
Preferably, the robot multi-source information fusion navigation device comprises a sensing unit, an operation unit, a communication unit and an execution unit; the sensing unit comprises external sensors for acquiring point cloud data, color images, depth images and infrared depth images of the surroundings of the auxiliary transportation robot in real time and internal sensors for acquiring the state information of the auxiliary transportation robot in real time, wherein the external sensors comprise a laser radar, a millimeter wave radar, an RGB-D camera and a depth infrared camera, and the internal sensor is an inertial sensor for detecting the acceleration, angular velocity and angular displacement of the auxiliary transportation robot; the operation unit comprises an operation GPU, a positioning and mapping calculation module and an autonomous navigation calculation module, wherein the positioning and mapping calculation module is connected with the operation GPU and used for processing the sensor data of the sensing unit in real time, running the multi-source information fusion SLAM method based on factor graph optimization, and completing the positioning and mapping of the auxiliary transportation robot in real time; the autonomous navigation calculation module is used for completing low-level path planning according to the current position and the target point, completing trajectory tracking and autonomous obstacle avoidance according to the kinematic model and the dynamic model, and calculating the corresponding motion control commands; the execution unit comprises servo motors and an execution controller and is used for executing the corresponding actions according to the motion control commands sent by the operation unit (such as actuating the brake, accelerator, steering mechanism, hydraulic cylinders, electric push rods, pumps, valves and other devices) to complete the trajectory tracking of the robot.
Preferably, the robot end communication unit, the roadway end communication unit and the robot multi-source information fusion navigation device communication unit all comprise an internal communication module and an external communication module, wherein the internal communication module adopts optical fiber communication, and the external communication module adopts 5G communication; the internal communication module is used for data transmission between the roadway end sensing unit and the roadway end edge computing unit, data transmission between the robot end sensing unit and the robot end computing unit, and data transmission between the natural semantic landmark auxiliary positioning device and the robot multi-source information fusion navigation module, and between the roadway end identification positioning device and the mine auxiliary transportation monitoring platform; the external communication module is used for data transmission of the roadway end identification positioning device and the robot multi-source information fusion navigation module and transmission of positioning information between the natural semantic landmark auxiliary positioning device and the robot multi-source information fusion navigation module and the mine auxiliary transportation monitoring platform.
Preferably, the mine auxiliary transportation monitoring platform comprises a monitoring unit, an auxiliary transportation scheduling unit and a monitoring-end communication unit; the monitoring unit comprises a video monitoring module, a positioning visualization module and a scene model database, wherein the video monitoring module is used for performing real-time target recognition, based on an image recognition network, on the video stream acquired by the roadway-end camera and monitoring the auxiliary transportation robot in real time; the positioning visualization module is used for summarizing the overall roadway information at the data flow level according to the monitoring information of the roadway recognition positioning system, the monitoring information of the robot-end operation module of the natural landmark auxiliary positioning system and the robot-end perception-positioning-navigation-control information, constructing the roadway 3D point cloud model and dynamically updating the six-degree-of-freedom pose of the auxiliary transportation robot; the scene model database comprises data such as grid, feature and semantic multi-scale maps generated and uploaded by the positioning and mapping calculation module executing the multi-modal information fusion SLAM and by the fixed-scene updating and edge computing devices, realizing dynamic reconstruction and real-time updating of the working-scene roadway model and equipment models; the auxiliary transportation scheduling unit is used for completing the monitoring and scheduling of the auxiliary transportation robot according to the robot multi-source information fusion positioning information and the requirements of the auxiliary transportation tasks; the monitoring-end communication unit adopts WiFi communication and is used for receiving the positioning information of the roadway-end recognition positioning device and the natural semantic landmark auxiliary positioning device and completing information interaction with the robot multi-source information fusion navigation device.
As shown in fig. 2, a method for a coal mine underground auxiliary transportation robot vehicle road co-location system comprises the following steps:
s1, roadway end camera deployment and natural semantic landmark model calibration;
preferably, the step S1 specifically includes the following steps:
arranging a roadway end camera in an underground area which is easy to generate scene and sensor degradation in the robot positioning and mapping process, extracting and constructing an underground equipment model as a natural semantic landmark, and calibrating the natural semantic landmark model by using an object coordinate system calibration method;
In this embodiment, roadway-end cameras are deployed in underground areas where scene and sensor degradation easily occurs during robot positioning and mapping, and models of downhole fixed objects (downhole natural features or roadway mechanical equipment with distinctive characteristics, fixed positions and low susceptibility to damage, such as roadway junctions, ventilators, air doors and power supply equipment) are extracted and constructed as natural semantic landmarks, which are then calibrated using the object coordinate system calibration method;
Preferably, the object coordinate system calibration method in step S1 refers to: placing visual tags (such as two-dimensional codes; the pose relative to the initial frame is solved from the two-dimensional code tag positions of single frames) around the object to be calibrated; acquiring data by moving the robot-end camera around the object to be calibrated; after the object information is completely acquired, converting the depth information of each frame into a point cloud through the intrinsic-parameter transformation and projecting the point cloud of each frame into the initial frame to realize three-dimensional reconstruction of the natural semantic landmark, obtaining the object CAD model; and calibrating the external parameters of the initial-frame camera coordinate system with a total station and a calibration plate, thereby completing calibration of the natural semantic landmark CAD model, namely the object coordinate system calibration;
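The projection of each frame's point cloud into the initial frame can be sketched as a simple compose-and-concatenate step. An illustrative numpy sketch, assuming each frame's pose relative to the initial frame has already been solved from the visual tags; the function name and data layout are assumptions for illustration.

```python
import numpy as np

def fuse_to_initial_frame(frames):
    """Project each frame's point cloud into the initial-frame
    coordinate system and concatenate, yielding the raw model cloud.

    `frames` is a list of (points_Nx3, R_3x3, t_3) tuples, where
    (R, t) maps that frame's coordinates into the initial frame
    (e.g. recovered from the visual-tag poses)."""
    fused = [pts @ R.T + t for pts, R, t in frames]
    return np.concatenate(fused, axis=0)
```

The fused cloud would then be meshed or downsampled into the landmark CAD model before the total-station extrinsic calibration anchors it to the world frame.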
The method for calibrating the external parameters of the initial-frame camera coordinate system with the total station and the calibration plate specifically comprises: reading the feature point coordinates $P_w$ of the calibration plate with the total station; after acquiring a calibration plate image and a depth image with the robot camera, extracting the corner pixel coordinates $(u, v)$ with an OpenCV corner extraction function, and obtaining the three-dimensional corner coordinates $P_c$ in the camera coordinate system by transforming the depth values at the corresponding positions through the camera intrinsic parameters; measuring the corner coordinates in the world coordinate system with the total station, and obtaining, by matching the corresponding feature points:

$$P_w = R \, P_c + t$$

wherein $R$ is the rotation matrix from the camera coordinate system to the world coordinate system and $t$ is the translation vector from the camera coordinate system to the world coordinate system; $R$ and $t$ are solved by a nonlinear optimization method, yielding the pose transformation between the camera coordinate system and the world coordinate system and completing the external parameter calibration of the camera coordinate system;
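Although the text solves the rotation and translation by nonlinear optimization, the same rigid transform over matched camera-frame and world-frame corner points also has a closed-form SVD (Kabsch) solution, which can serve either as the final answer for noise-free matches or as the initial guess for the nonlinear refinement. An illustrative numpy sketch; the function name is an assumption.

```python
import numpy as np

def solve_rigid_transform(P_cam, P_world):
    """Closed-form (Kabsch/SVD) estimate of R, t such that
    P_world ~= R @ P_cam + t, from matched corner coordinates
    (rows of the two N x 3 arrays correspond)."""
    mu_c, mu_w = P_cam.mean(0), P_world.mean(0)
    H = (P_cam - mu_c).T @ (P_world - mu_w)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_w - R @ mu_c
    return R, t
```

At least three non-collinear corner pairs are needed for a unique solution; a checkerboard provides many more, which also averages out depth noise.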
s2, identifying and positioning roadway ends;
the step S2 specifically comprises the following steps:
acquiring RGB color images, depth images and point cloud information of the auxiliary transportation robot through the roadway-end camera, calculating the positioning information of the robot in the absolute coordinate system in real time according to the recognition positioning method based on voting-based six-degree-of-freedom pose estimation, and transmitting the positioning information to the positioning and mapping calculation module in real time;
The voting-based six-degree-of-freedom pose estimation positioning method in step S2 is as follows: after calibrating the roadway camera in the absolute coordinate system with the total station, a robot CAD model is obtained. In well-lighted areas, semantic segmentation is performed on the RGB color image and point cloud information of the auxiliary transportation robot through a pre-trained DeepLabV3+ network, and the segmented image and mask point cloud of the auxiliary transportation robot are extracted; in poorly lighted areas, the RGB color image of the auxiliary transportation robot is preprocessed through a darknet_ros network, the bounding box is extracted, and semantic segmentation is performed on the image inside the bounding box to obtain the mask image. According to the mask image, the depth values at the corresponding positions in the depth image of the auxiliary transportation robot are converted into point cloud data based on the camera intrinsic parameters. The color information of the segmented image and the geometric information of the point cloud data are encoded (in this embodiment, the segmented image and the partial point cloud are encoded with a convolutional neural network and PointNet), and the encoded information is fused according to the correspondence between point cloud voxels and image pixels to obtain fusion features. According to the fusion features, the pose of the auxiliary transportation robot is estimated with a six-degree-of-freedom pose estimation deep learning network, yielding candidate six-degree-of-freedom poses with confidence scores; the pose with the highest confidence is optimized, and the positioning information of the robot in the absolute coordinate system is finally obtained based on the pose transformation between the robot object coordinate system and the camera coordinate system and the pose transformation between the camera coordinate system and the absolute coordinate system;
S3, positioning the natural road sign in an auxiliary way;
the step S3 specifically comprises the following steps:
RGB color image, depth image and point cloud information of the natural semantic signpost are acquired by a robot end camera, position information of an auxiliary transportation robot under an absolute coordinate system is obtained according to an auxiliary positioning method based on the natural semantic signpost, and the positioning information is transmitted to a positioning and map building calculation module in real time;
In this embodiment, in well-lighted areas, semantic segmentation is performed on the natural semantic landmark RGB image with a pre-trained DeepLabV3+ network to obtain the segmentation mask; in poorly lighted areas, target recognition is performed on the natural semantic landmark RGB color image through a darknet_ros network, the bounding box of the natural semantic landmark RGB image is extracted, and semantic segmentation is performed on the image inside the bounding box to obtain the mask image. The position of the natural semantic landmark in the RGB image is obtained from the segmentation mask, and the depth values at the positions covered by the mask are converted into a 3D point cloud through the intrinsic-parameter transformation, yielding the natural semantic landmark point cloud in the camera coordinate system. Point feature histograms are computed for the current-frame point cloud and the natural semantic landmark CAD model point cloud, and point pairs with similar histograms are matched (in this embodiment, the matched point pairs are registered with the SAC-IA algorithm, which solves the pose transformation between the current-frame point cloud and the CAD model point cloud to achieve coarse registration); the current-frame feature point cloud and the natural semantic landmark CAD model point cloud are then matched with the iterative closest point algorithm, the positioning information of the auxiliary transportation robot in the absolute coordinate system is obtained according to the natural-semantic-landmark-based auxiliary positioning method, and the positioning information is transmitted to the positioning and mapping calculation module in real time;
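The coarse registration step (matching points by descriptor similarity, then solving a rigid transform over the matches) can be illustrated with a simplified numpy stand-in. The actual pipeline uses point feature histograms and SAC-IA from a point cloud library; here those are replaced by brute-force nearest-neighbour descriptor matching and a closed-form Kabsch solve, so this is a structural sketch only, with hypothetical function and argument names.

```python
import numpy as np

def coarse_align(scene_pts, scene_feat, model_pts, model_feat):
    """Coarse registration sketch: match each scene point to the model
    point with the most similar descriptor (stand-in for PFH/SAC-IA),
    then solve R, t with model_pts[match] ~= R @ scene_pts + t."""
    # brute-force nearest neighbour in descriptor space
    d = np.linalg.norm(scene_feat[:, None, :] - model_feat[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    src, dst = scene_pts, model_pts[idx]
    # closed-form Kabsch solve over the matched pairs
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s
```

In practice SAC-IA adds random-sample consensus over the matches to reject descriptor mismatches; the coarse pose returned here is what the subsequent ICP refinement starts from.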
In step S3, when underground illumination fails to reach the set condition, the semantic segmentation network is likely to produce noise, which reduces its segmentation accuracy, causes false recognition, and thus affects positioning accuracy.
The iterative closest point algorithm (Iterative Closest Point, ICP) is specifically: corresponding point pairs between the current-frame point cloud and the CAD model point cloud are solved, an initial pose transformation is obtained from the corresponding point pairs and the coarse point cloud registration, and a rotation-translation matrix is constructed; the current-frame point cloud is transformed into the coordinate system of the CAD model point cloud, namely the natural semantic landmark object coordinate system; the error function between the transformed current-frame point cloud and the CAD model point cloud is evaluated, and if the error function value exceeds the threshold, the operation is iterated until the given error requirement is met.
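A minimal point-to-point ICP loop in numpy, illustrating the iterate-until-error-threshold procedure described above. This is a teaching sketch under simplifying assumptions: brute-force nearest-neighbour search (a k-d tree would be used in practice) and no outlier rejection; the function name is hypothetical.

```python
import numpy as np

def icp(src, dst, iters=50, tol=1e-8):
    """Minimal point-to-point ICP: repeatedly pair each source point
    with its nearest target point, solve the rigid update in closed
    form, and stop when the mean squared error drops below `tol`.
    Returns R, t such that dst ~= src @ R.T + t."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # O(N*M) nearest-neighbour search (fine for a sketch)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        # closed-form rigid update aligning cur to its neighbours
        mu_c, mu_n = cur.mean(0), nn.mean(0)
        H = (cur - mu_c).T @ (nn - mu_n)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = mu_n - dR @ mu_c
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt   # compose with accumulated transform
        if np.mean(np.sum((cur - nn) ** 2, axis=1)) < tol:
            break
    return R, t
```

ICP only converges to the correct pose from a nearby initial guess, which is exactly why the coarse feature-based registration precedes it in the pipeline.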
The natural-semantic-landmark-based auxiliary positioning method in step S3 is as follows: first, the point feature histogram of the natural semantic landmark CAD model is computed offline for feature extraction; next, features are extracted online from the current-frame point cloud, the current-frame pose is preliminarily estimated by matching point pairs with similar features between the current-frame point cloud and the CAD model, and the preliminary pose is then refined with the ICP algorithm to obtain the pose transformation of the robot relative to the natural semantic landmark object coordinate system; finally, the six-degree-of-freedom pose of the robot in the absolute coordinate system is obtained from this pose transformation;
S4, multi-source information fusion SLAM is carried out, and real-time positioning and mapping under an absolute coordinate system are completed;
the step S4 specifically comprises the following steps:
Based on the inter-frame constraint of the radar point cloud information from the sensing unit of the robot multi-source information fusion navigation device, the constraint of the IMU information within the inter-frame period, the inter-frame constraint of the camera image information, the loop closure constraint between two frames, and the global positioning constraints of the roadway-end recognition positioning device and of the natural semantic landmark auxiliary positioning device, residual terms are respectively constructed as factors, and the objective function of the least squares problem is constructed based on the factor graph optimization method as:

$$\min_{\mathcal{X}} \left\{ \sum \lVert r_L \rVert^2 + \sum \lVert r_I \rVert^2 + \sum \lVert r_C \rVert^2 + \sum \lVert r_{loop} \rVert^2 + \sum \lVert r_{G1} \rVert^2 + \sum \lVert r_{G2} \rVert^2 \right\}$$

where $\mathcal{X}$ is the set of robot states and $r_L$, $r_I$, $r_C$, $r_{loop}$, $r_{G1}$, $r_{G2}$ are respectively the radar, IMU, camera, loop closure, roadway-end global positioning and natural semantic landmark global positioning residuals; solving this least squares problem completes the multi-source information fusion SLAM and accomplishes real-time positioning and mapping in the absolute coordinate system;
wherein the inter-frame constraint of the radar point cloud information means that the point cloud is divided into surface feature points and line feature points by computing the curvature of the radar point cloud, and the surface feature point residual $r_{\mathcal{S}}$ and line feature point residual $r_{\mathcal{E}}$ between adjacent frames are constructed as:

$$r_{\mathcal{E}} = \frac{\left| (p_i - p_a) \times (p_i - p_b) \right|}{\left| p_a - p_b \right|}, \qquad r_{\mathcal{S}} = \frac{\left| (p_i - p_a) \cdot \left( (p_a - p_b) \times (p_a - p_c) \right) \right|}{\left| (p_a - p_b) \times (p_a - p_c) \right|}$$

where $p_i$ is a feature point of the current frame and $p_a$, $p_b$, $p_c$ are the corresponding feature points of the previous frame;
The constraint of the IMU information within the inter-frame period means that, based on the IMU state transfer equation, the IMU residual term over the interval between frame $k-1$ and frame $k$ is constructed as:

$$r_I = \left[ \delta p^T, \; \delta v^T, \; \delta \theta^T, \; \delta b_a^T, \; \delta b_g^T \right]^T$$

where $\delta p$, $\delta v$, $\delta \theta$, $\delta b_a$ and $\delta b_g$ respectively denote the error values of the translation pre-integration, velocity pre-integration, rotation pre-integration, acceleration bias and angular-velocity (gyroscope) bias;
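The pre-integrated quantities compared by this residual can be sketched as a simple accumulation loop over the IMU samples between two frames. An illustrative numpy sketch under stated simplifications: gravity and biases are assumed already removed from the accelerometer samples, and plain Euler integration is used (real pre-integration also propagates covariances and bias Jacobians); function names are assumptions.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: rotation matrix for a rotation vector w."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def preintegrate(acc, gyr, dt):
    """Accumulate IMU samples between two frames into the relative
    motion terms (dR, dv, dp) that the inter-frame IMU residual
    compares against the state estimates."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a, w in zip(acc, gyr):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```

Because the accumulation is relative, it does not need to be recomputed when the optimizer moves the frame poses, which is what makes pre-integration cheap inside the factor graph.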
The inter-frame constraint of the camera image information means that a residual term is constructed from the observation information of frame $k$ and the observation predicted by the observation model at the current pose:

$$r_C = z_k - \pi\!\left( T_k \, P \right)$$

where $z_k$ is the feature observation in frame $k$, $P$ is the corresponding landmark point, $T_k$ is the current pose estimate and $\pi(\cdot)$ is the camera projection model;
The loop closure constraint is a constraint relation established between two positions with similar observation information. Let the two frame point clouds with similar observations be the source point cloud $P^s$ and the target point cloud $P^t$; for each point $p_i^s \in P^s$, the closest corresponding point $p_j^t$ in the target point cloud $P^t$ is searched by Euclidean distance, and after the correspondences are found the residual function is constructed as:

$$r_{loop} = \sum_i \left\lVert T \, p_i^s - p_j^t \right\rVert^2$$

where $T$ is the relative pose transformation between the two loop closure frames;
The global positioning constraint of the roadway-end recognition positioning device means that the pose measurement of the roadway-end recognition positioning device is used as a factor and the residual term is constructed as:

$$r_{G1} = \left( R_1 \, p_t + t_1 \right) - \tilde{p}_t$$

where $R_1$ is the rotation component and $t_1$ the translation component of the transformation from the relative coordinate system established during the SLAM process to the absolute coordinate system of the roadway-end recognition positioning device, $p_t$ is the coordinate estimated by the laser odometer at time $t$, and $\tilde{p}_t$ is the position measured by the roadway-end recognition positioning device.
The global positioning constraint of the natural semantic landmark auxiliary positioning device means that the pose measurement of the natural semantic landmark auxiliary positioning device is used as a factor and the residual term is constructed as:

$$r_{G2} = \left( R_2 \, p_t + t_2 \right) - \tilde{p}_t$$

where $R_2$ is the rotation component and $t_2$ the translation component of the transformation from the relative coordinate system established during the SLAM process to the absolute coordinate system of the natural semantic landmark auxiliary positioning device, $p_t$ is the coordinate estimated by the SLAM odometer at time $t$, and $\tilde{p}_t$ is the position measured by the natural semantic landmark auxiliary positioning device.
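The way odometry factors and global positioning factors combine in the least squares problem can be illustrated on a toy 1-D example. Real SLAM states are 6-DoF poses and the problem is nonlinear (solved iteratively with a factor graph library), but the structure, stacking weighted residual rows from each factor and solving, is the same; all names here are illustrative.

```python
import numpy as np

def fuse_positions(odom, anchors, w_odom=1.0, w_abs=10.0):
    """Toy 1-D factor-graph solve. `odom[i]` measures x[i+1] - x[i]
    (odometry factors); `anchors` maps pose index -> absolute position
    measurement (global positioning factors). Each factor contributes
    one weighted row to a linear least-squares system A x = b."""
    n = len(odom) + 1
    rows, rhs = [], []
    for i, d in enumerate(odom):               # odometry factors
        r = np.zeros(n)
        r[i + 1], r[i] = w_odom, -w_odom
        rows.append(r)
        rhs.append(w_odom * d)
    for i, z in anchors.items():               # global positioning factors
        r = np.zeros(n)
        r[i] = w_abs
        rows.append(r)
        rhs.append(w_abs * z)
    x, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return x
```

With biased odometry and two absolute anchors, the solve distributes the accumulated drift along the trajectory instead of letting it pile up at the end, which is precisely the role the roadway-end and landmark global positioning factors play in the full system.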
S5, autonomous navigation.
The step S5 specifically comprises the following steps:
The autonomous navigation calculation module generates a trajectory library offline to simulate the trajectories the auxiliary transportation robot may follow over a future period, evaluates the collision possibility of the space points covered by each trajectory according to the sensor information of the sensing unit, selects the optimal path in real time, and calculates the corresponding motion control commands according to the kinematic model and dynamic model of the auxiliary transportation robot;
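The trajectory-library idea can be sketched as follows: sample candidate arcs from a simple motion model, reject those that pass too close to observed obstacle points, and pick the best survivor. An illustrative numpy sketch with assumed names and a deliberately simple scoring rule (longest safe trajectory); a real planner would also score goal progress and clearance.

```python
import numpy as np

def arc_trajectory(v, w, horizon=2.0, n=20):
    """Sample the (x, y) points a unicycle-model robot would trace
    over `horizon` seconds at linear speed v and yaw rate w."""
    t = np.linspace(0.0, horizon, n)
    if abs(w) < 1e-9:                          # straight-line limit
        return np.stack([v * t, np.zeros_like(t)], axis=1)
    return np.stack([v / w * np.sin(w * t),
                     v / w * (1.0 - np.cos(w * t))], axis=1)

def pick_trajectory(candidates, obstacles, clearance=0.5):
    """Score each candidate trajectory against obstacle points:
    discard any that come within `clearance`, and return the index
    of the longest remaining trajectory (None if all collide)."""
    best, best_len = None, -1.0
    for i, traj in enumerate(candidates):
        d = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=2)
        if d.min() < clearance:
            continue                           # would collide
        length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
        if length > best_len:
            best, best_len = i, length
    return best
```

Because the library is generated offline, the online cost per cycle is only the collision check and scoring, which is what makes this approach viable on an embedded compute unit.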
During the motion of the auxiliary transportation robot, the laser radar and the IMU respectively acquire the surrounding environment point cloud and the motion state information, and the IMU is used to complete motion compensation of the laser radar point cloud. Because matching directly on the raw point cloud is too costly to guarantee real-time performance, a feature extraction method is adopted: a smaller set of edge points and plane points can comprehensively reflect the environmental features and avoids using the full point cloud in computation, and before feature points are selected for each scan subregion, noise points are eliminated as far as possible to ensure the accuracy and stability of the selected feature points. The points that best express geometric features such as edges and planes in the environment are screened out by computing curvature values, the pose is updated in real time based on the ICP algorithm by matching frames at adjacent moments, and the construction of the laser odometer is completed. Taking the laser odometer position information, IMU position information, roadway-end recognition positioning information, natural-semantic-landmark-based positioning information and loop closure detection information as factors, the positioning and mapping result of the auxiliary transportation robot is optimized with the factor-graph-optimization-based method, and the multi-source information fusion SLAM system is constructed. Path planning, path tracking and autonomous obstacle avoidance are completed according to the current position and the target point, realizing autonomous navigation of the auxiliary transportation robot and completing the auxiliary transportation task;
In step S5, real-time monitoring is performed through the mine auxiliary transportation monitoring platform: the transportation status of the robot and whether an accident has occurred are observed in real time through the video streams of the roadway end cameras and the six-degree-of-freedom pose of the robot, and 3D visualization of the underground roadway is completed by fusing the map constructed by the multi-source information fusion SLAM.
Therefore, the vehicle-road co-location system and method for the underground coal mine auxiliary transportation robot realize real-time information interaction between the auxiliary transportation robot and the roadway end based on the vehicle-road cooperation mechanism, effectively improve the positioning accuracy of the auxiliary transportation robot, and realize unmanned, autonomous auxiliary transportation in the underground coal mine.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical solution of the invention may still be modified or equivalently substituted, and such modifications do not depart from the spirit and scope of the technical solution of the invention.

Claims (8)

1. A vehicle-road co-location system for an underground coal mine auxiliary transportation robot, characterized by comprising:
The roadway end recognition positioning device is built at the roadway end and is used for collecting RGB color images, depth images and point cloud information of the auxiliary transportation robot and completing recognition and positioning of the robot in an absolute coordinate system according to the collected RGB color images, depth images and point cloud information of the auxiliary transportation robot based on six-degree-of-freedom pose estimation of a voting method;
the natural semantic landmark auxiliary positioning device is mounted on the auxiliary transportation robot and is used for realizing positioning of the auxiliary transportation robot in an absolute coordinate system, using a correspondence-based six-degree-of-freedom pose estimation method, according to the acquired RGB color images, depth images and point cloud information of the natural semantic landmarks;
the robot multi-source information fusion navigation device is mounted on the auxiliary transportation robot and is used for constructing laser, visual, and encoder/IMU (inertial measurement unit) dead-reckoning odometers from the various sensors, and for realizing real-time multi-source-fusion positioning, mapping and autonomous navigation of the robot in an absolute coordinate system based on the roadway end identification positioning information, the natural semantic landmark auxiliary positioning information and the odometers;
the mine auxiliary transportation monitoring platform is used for constructing a roadway 3D point cloud model and dynamically updating the six-degree-of-freedom pose of the auxiliary transportation robot to finish monitoring and scheduling of the auxiliary transportation robot;
The roadway end identification positioning device and the natural semantic landmark auxiliary positioning device are respectively communicated with the mine auxiliary transportation monitoring platform through underground wired optical fiber ring networks and 5G/WiFi6, and are also respectively communicated with the robot multi-source information fusion navigation device based on the 5G/WiFi6 and the optical fibers, and the robot multi-source information fusion navigation device is communicated with the mine auxiliary transportation monitoring platform based on the 5G/WiFi 6;
the vehicle-road co-location method for the underground coal mine auxiliary transportation robot is applied to the above vehicle-road co-location system for the underground coal mine auxiliary transportation robot;
the method comprises the following steps:
s1, roadway end camera deployment and natural semantic landmark model calibration;
s2, identifying and positioning roadway ends;
s3, auxiliary positioning based on natural semantic landmarks;
s4, multi-source information fusion SLAM is carried out, and real-time positioning and mapping under an absolute coordinate system are completed;
s5, autonomous navigation;
the step S1 specifically comprises the following steps:
arranging roadway end cameras in underground areas prone to scene and sensor degradation during robot positioning and mapping, extracting and constructing models of underground equipment as natural semantic landmarks, and calibrating the natural semantic landmark models using an object coordinate system calibration method;
The step S2 specifically comprises the following steps:
acquiring RGB color images, depth images and point cloud information of the auxiliary transportation robot through the roadway end camera, calculating in real time the positioning information of the robot in the absolute coordinate system according to the recognition positioning method based on voting-based six-degree-of-freedom pose estimation, and transmitting the positioning information to the positioning and mapping calculation module in real time;
the step S3 specifically comprises the following steps:
RGB color images, depth images and point cloud information of the natural semantic landmarks are acquired by the robot end camera, the position information of the auxiliary transportation robot in the absolute coordinate system is obtained according to the auxiliary positioning method based on natural semantic landmarks, and the positioning information is transmitted to the positioning and mapping calculation module in real time;
the step S4 specifically comprises the following steps:
based on the inter-frame constraint of the radar point cloud information of the sensing unit of the robot multi-source information fusion navigation device, the constraint of the IMU information within the inter-frame time period, the inter-frame constraint of the camera image information, the loop-closure constraint between two frames, and the global positioning constraints of the roadway end identification positioning device and of the natural semantic landmark auxiliary positioning device, residual terms are respectively constructed as factors, and the objective function of the least squares problem is constructed based on the factor graph optimization method as:

$$X^{*}=\arg\min_{X}\Big\{\sum\big(\|r_{s}\|^{2}+\|r_{l}\|^{2}\big)+\sum\|r_{I}\|^{2}+\sum\|r_{C}\|^{2}+\sum\|r_{loop}\|^{2}+\sum\|r_{G}\|^{2}\Big\}$$

where $r_{s}$ and $r_{l}$ are the surface- and line-feature residuals of the radar point cloud, $r_{I}$ the IMU residual, $r_{C}$ the camera inter-frame residual, $r_{loop}$ the loop-closure residual, and $r_{G}$ the global positioning residuals of the two positioning devices; solving this least squares problem completes the multi-source information fusion SLAM and the real-time positioning and mapping in the absolute coordinate system;
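A minimal numeric sketch of how such a factor-graph least squares problem is assembled and solved (1-D poses, unit information matrices and a plain Gauss-Newton loop are all simplifying assumptions; a real system would use a solver such as GTSAM or Ceres):

```python
import numpy as np

def solve_pose_graph(odom, priors, n_poses, iters=10):
    """Factor-graph least squares on 1-D poses.

    odom:   list of (i, j, d) relative factors with residual x_j - x_i - d
            (standing in for odometry / inter-frame constraints)
    priors: list of (i, p) absolute factors with residual x_i - p
            (standing in for roadway-end / landmark global positioning)
    Minimises the sum of squared residuals by Gauss-Newton; the problem is
    linear here, so one step already converges.
    """
    x = np.zeros(n_poses)
    for _ in range(iters):
        H = np.zeros((n_poses, n_poses))  # Hessian approximation J^T J
        b = np.zeros(n_poses)             # gradient term -J^T r
        for i, j, d in odom:
            r = x[j] - x[i] - d
            # Jacobian rows: dr/dx_i = -1, dr/dx_j = +1
            H[i, i] += 1; H[j, j] += 1; H[i, j] -= 1; H[j, i] -= 1
            b[i] += r; b[j] -= r
        for i, p in priors:
            r = x[i] - p
            H[i, i] += 1
            b[i] -= r
        x += np.linalg.solve(H + 1e-9 * np.eye(n_poses), b)
    return x
```

Each factor contributes only to the blocks of `H` and `b` touched by its variables, which is exactly the sparsity a factor-graph solver exploits.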
wherein: the inter-frame constraint of the radar point cloud information means that the point cloud is divided into surface feature points and line feature points by computing the curvature of the radar point cloud, and the residual term $r_{s}$ of the surface feature points and the residual term $r_{l}$ of the line feature points between adjacent frames are constructed as constraints:

$$r_{l}=\frac{\big\|(\tilde{p}_{i}-p_{a})\times(\tilde{p}_{i}-p_{b})\big\|}{\|p_{a}-p_{b}\|},\qquad r_{s}=\frac{\big|(\tilde{p}_{i}-p_{a})\cdot\big((p_{a}-p_{b})\times(p_{a}-p_{c})\big)\big|}{\big\|(p_{a}-p_{b})\times(p_{a}-p_{c})\big\|}$$

where $\tilde{p}_{i}$ is a feature point of the current frame transformed into the previous frame, $p_{a}$ and $p_{b}$ are the nearest line feature points forming the corresponding line, and $p_{a}$, $p_{b}$, $p_{c}$ are the nearest surface feature points forming the corresponding plane;
the constraint of the IMU information within the inter-frame time period means that, based on the state transition equation of the IMU, the IMU residual term between the times of frame $k-1$ and frame $k$ is constructed as

$$r_{I}=\big[r_{p},\;r_{v},\;r_{R},\;r_{b_{a}},\;r_{b_{g}}\big]^{T}$$

where $r_{p}$, $r_{v}$, $r_{R}$, $r_{b_{a}}$ and $r_{b_{g}}$ denote the error values of the translation pre-integration, velocity pre-integration, rotation pre-integration, acceleration bias and gyroscope (angular velocity) bias respectively;
in the interval $[k-1,k]$, the velocity pre-integration residual term is

$$r_{v}=R_{k-1}^{T}\big(v_{k}-v_{k-1}+g\,\Delta t\big)-\sum_{t=k-1}^{k}R_{t}\big(a_{t}-b_{a_{t}}-n_{a_{t}}\big)\Delta t$$

where $g$ is the gravitational acceleration, $R_{t}$ the rotation matrix at time $t$, $a_{t}$ the accelerometer measurement in the IMU coordinate system at time $t$, $b_{a_{t}}$ the acceleration zero bias at time $t$, $n_{a_{t}}$ the accelerometer white noise at time $t$, $v_{t}$ the velocity at time $t$, and $\Delta t$ the time interval;
in the interval $[k-1,k]$, the translation pre-integration residual term is

$$r_{p}=R_{k-1}^{T}\Big(p_{k}-p_{k-1}-v_{k-1}\Delta t+\tfrac{1}{2}g\,\Delta t^{2}\Big)-\sum_{t=k-1}^{k}\Big(v_{t}\Delta t+\tfrac{1}{2}R_{t}\big(a_{t}-b_{a_{t}}-n_{a_{t}}\big)\Delta t^{2}\Big)$$

where $p_{t}$ denotes the position of the IMU coordinate system in the world coordinate system at time $t$;
in the interval $[k-1,k]$, the rotation pre-integration residual term is

$$r_{R}=\mathrm{Log}\Big(\Big(\prod_{t=k-1}^{k}\mathrm{Exp}\big((\omega_{t}-b_{g_{t}}-n_{g_{t}})\Delta t\big)\Big)^{T}R_{k-1}^{T}R_{k}\Big)$$

where $\omega_{t}$ is the gyroscope measurement in the IMU coordinate system, $b_{g_{t}}$ the gyroscope zero bias in the IMU coordinate system at time $t$, and $n_{g_{t}}$ the gyroscope white noise in the IMU coordinate system at time $t$;
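The pre-integration quantities that these IMU residual terms compare against can be accumulated as in the following sketch (the discretization, variable names and the omission of noise terms are assumptions for illustration):

```python
import numpy as np

def preintegrate(acc_meas, gyro_meas, dt, ba, bg):
    """Toy IMU pre-integration over one frame interval.

    Accumulates the rotation (via the Rodrigues / axis-angle exponential),
    velocity and position deltas in the frame of the first pose, after
    removing the bias estimates ba (accelerometer) and bg (gyroscope).
    Noise terms are omitted for clarity.
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for a, w in zip(acc_meas, gyro_meas):
        a = np.asarray(a, float) - ba
        w = np.asarray(w, float) - bg
        # position first (uses the velocity before this sample)
        dp = dp + dv * dt + 0.5 * dR @ a * dt ** 2
        dv = dv + dR @ a * dt
        # exponential map of w*dt (Rodrigues formula)
        theta = np.linalg.norm(w) * dt
        if theta > 1e-12:
            k = w / np.linalg.norm(w)
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])
            dR = dR @ (np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K)
    return dR, dv, dp
```

Constant acceleration with zero rotation reproduces the classical `v = a t`, `p = a t^2 / 2` deltas, and a constant yaw rate integrates to the expected rotation.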
the inter-frame constraint of the camera image information means that a residual term is constructed from the observation of the $k$-th frame and the observation predicted by the observation model at the current pose:

$$r_{C}=z_{k}-h(X_{k})$$

where $z_{k}$ is the observation data at the current time $k$, and $h(X_{k})$ is the observation obtained from the observation equation at the current pose $X_{k}$;
the loop-closure constraint is a constraint relation established between two poses with similar observations. Let the two point-cloud frames with similar observations be the source cloud $P^{s}$ and the target cloud $P^{t}$; for each point $p_{i}\in P^{s}$, the corresponding point $q_{i}\in P^{t}$ with the smallest Euclidean distance is found by a nearest-neighbour search. After the correspondences are found, the residual function is constructed as

$$r_{loop}=\sum_{i}\big\|R_{k}\,p_{i}+t_{k}-q_{i}\big\|^{2}$$

where $R_{k}$ and $t_{k}$ are the rotation matrix and the translation vector at time $k$;
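A brute-force sketch of the correspondence search and per-point residuals behind such a loop-closure constraint (a k-d tree would replace the linear nearest-neighbour scan in practice; names are illustrative):

```python
import numpy as np

def loop_residuals(source, target, R, t):
    """For each source point p, map it with the current estimate (R, t) and
    find its nearest target point q; return the Euclidean residuals
    ||R p + t - q|| for all correspondences."""
    source = np.asarray(source, float)
    target = np.asarray(target, float)
    residuals = []
    for p in source:
        q = R @ p + t                              # transform into target frame
        d = np.linalg.norm(target - q, axis=1)     # distances to all targets
        residuals.append(d.min())                  # nearest-neighbour residual
    return np.array(residuals)
```

When the estimated transform is correct the residuals vanish; an optimizer (e.g. ICP iterations) drives them toward zero.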
the global positioning constraint of the roadway end identification positioning device means that the pose measurement of the roadway end identification positioning device is taken as a factor, and the residual term is constructed as

$$r_{G}=R\,\hat{p}_{t}+t-z_{t}$$

where $R$ is the rotation component and $t$ the translation component of the transformation from the relative coordinate system established in the SLAM process to the absolute coordinate system of the roadway end identification positioning device, $\hat{p}_{t}$ is the coordinate estimated by the laser odometry at time $t$, and $z_{t}$ is the position measured by the roadway end identification positioning device at time $t$;
the global positioning constraint of the natural semantic landmark auxiliary positioning device means that the pose measurement of the natural semantic landmark auxiliary positioning device is taken as a factor, and the residual term is constructed as

$$r_{G}'=R'\,\hat{p}_{t}'+t'-z_{t}'$$

where $R'$ is the rotation component and $t'$ the translation component of the transformation from the relative coordinate system established in the SLAM process to the absolute coordinate system of the natural semantic landmark auxiliary positioning device, $\hat{p}_{t}'$ is the coordinate estimated by the SLAM odometry at time $t$, and $z_{t}'$ is the position measured by the device at time $t$;
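As a small illustrative sketch (variable names assumed), a residual of this global-positioning form maps the odometry estimate into the absolute frame and differences it with the device measurement:

```python
import numpy as np

def global_position_residual(R_abs_rel, t_abs_rel, p_odom, p_meas):
    """Residual between the odometry position p_odom (in the relative/SLAM
    frame), mapped into the absolute frame by the extrinsic (R_abs_rel,
    t_abs_rel), and the absolute position p_meas reported by the roadway-end
    or landmark positioning device."""
    return R_abs_rel @ np.asarray(p_odom, float) + t_abs_rel - np.asarray(p_meas, float)
```

In the factor graph this residual anchors the drifting relative trajectory to the absolute roadway coordinate system whenever a device measurement is available.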
the step S5 specifically comprises the following steps:
the autonomous navigation calculation module generates a trajectory library offline to simulate the trajectories the auxiliary transportation robot may follow over a future time window, evaluates, from the sensor information of the sensing unit, the likelihood of collision at the spatial points covered by each trajectory, selects the optimal path in real time, and calculates the corresponding motion control commands according to the kinematic model and the dynamic model of the auxiliary transportation robot;
In step S5, real-time monitoring is performed through the mine auxiliary transportation monitoring platform: the transportation status of the robot and whether an accident has occurred are observed in real time through the video streams of the roadway end cameras and the six-degree-of-freedom pose of the robot, and 3D visualization of the underground roadway is completed by fusing the map constructed by the multi-source information fusion SLAM.
2. The coal mine underground auxiliary transportation robot road co-location system according to claim 1, wherein: the roadway end recognition positioning device comprises a roadway end sensing unit for shooting and scanning the auxiliary transport robot, a roadway end edge computing unit connected with the roadway end sensing unit and a roadway end communication unit;
the roadway end sensing unit comprises an RGB-D camera and a laser radar, and is used for shooting and scanning to obtain RGB color images, depth images and point cloud information of the auxiliary transportation robot;
the roadway end edge computing unit comprises a roadway end GPU and a roadway end computing module connected with the roadway end GPU, and is used for processing RGB color images, depth images and point cloud information of the auxiliary transport robot, and positioning the auxiliary transport robot under an absolute coordinate system in real time based on a six-degree-of-freedom pose estimation algorithm of a voting method.
3. The coal mine underground auxiliary transportation robot road co-location system according to claim 2, wherein: the natural semantic landmark auxiliary positioning device comprises a natural semantic landmark, a robot end sensing unit, a robot end computing unit and a robot end communication unit, wherein the robot end sensing unit is used for sensing the natural semantic landmark;
the natural semantic landmark is underground equipment whose model has been calibrated in an object coordinate system, and is used for auxiliary positioning after being identified by the robot end sensing unit;
the robot end sensing unit comprises an RGB-D camera and a laser radar, and is used for capturing RGB color images and depth images and for scanning point cloud information of the natural semantic landmarks;
the robot end computing unit comprises a robot end GPU and a robot end computing module connected with the robot end GPU, and is used for processing the natural semantic road sign RGB color image, the depth image and the point cloud information, and calculating the positioning information of the auxiliary transportation robot under an absolute coordinate system in real time based on a six-degree-of-freedom pose estimation algorithm of the corresponding point.
4. The coal mine underground auxiliary transportation robot road co-location system according to claim 1, wherein: the robot multi-source information fusion navigation device comprises a sensing unit, an operation unit, a communication unit and an execution unit;
The sensing unit comprises external sensors for acquiring point cloud data, color images, depth images and infrared depth images of the surroundings of the auxiliary transportation robot in real time, and an internal sensor for acquiring the state information of the auxiliary transportation robot in real time; the external sensors comprise a laser radar, a millimeter wave radar, an RGB-D camera and a depth infrared camera, and the internal sensor is an inertial sensor for detecting the acceleration, angular velocity and angular displacement of the auxiliary transportation robot;
the operation unit comprises an operation GPU, a positioning and map building calculation module and an autonomous navigation calculation module, wherein the positioning and map building calculation module is connected with the operation GPU and used for processing sensor data of the sensing unit in real time, running a multi-source information fusion SLAM method based on factor map optimization and completing positioning and map building of the auxiliary transportation robot in real time; the autonomous navigation calculation module is used for completing bottom path planning according to the current position and the target point, completing track tracking and autonomous obstacle avoidance according to the kinematic model and the dynamic model, and calculating a corresponding motion control instruction;
and the execution unit comprises a servo motor and an execution controller and is used for executing corresponding actions according to the motion control instruction sent by the operation unit to complete the track tracking of the robot.
5. A coal mine underground auxiliary transportation robot roadway co-location system as recited in claim 3, wherein: the robot end communication unit, the roadway end communication unit and the robot multi-source information fusion navigation device communication unit all comprise an internal communication module and an external communication module, wherein the internal communication module adopts optical fiber communication, and the external communication module adopts 5G communication;
the internal communication module is used for data transmission between the roadway end sensing unit and the roadway end edge computing unit, data transmission between the robot end sensing unit and the robot end computing unit, and data transmission between the natural semantic landmark auxiliary positioning device and the robot multi-source information fusion navigation module, and between the roadway end identification positioning device and the mine auxiliary transportation monitoring platform;
the external communication module is used for data transmission of the roadway end identification positioning device and the robot multi-source information fusion navigation module and transmission of positioning information between the natural semantic landmark auxiliary positioning device and the robot multi-source information fusion navigation module and the mine auxiliary transportation monitoring platform.
6. The coal mine underground auxiliary transportation robot road co-location system according to claim 1, wherein: the mine auxiliary transportation monitoring platform comprises a monitoring unit, an auxiliary transportation scheduling unit and a monitoring end communication unit;
The monitoring unit comprises a video monitoring module, a positioning visualization module and a scene model database; the video monitoring module is used for real-time target recognition, based on an image recognition network, on the video streams acquired by the roadway end cameras, and for real-time monitoring of the auxiliary transportation robot; the positioning visualization module is used for aggregating the overall roadway information at the data-stream level, constructing the roadway 3D point cloud model and dynamically updating the six-degree-of-freedom pose of the auxiliary transportation robot;
the auxiliary transportation scheduling unit is used for completing the monitoring and scheduling of the auxiliary transportation robot according to the multi-source information fusion positioning information of the robot and the requirements of the auxiliary transportation tasks;
the monitoring end communication unit adopts optical fiber and WiFi communication and is used for receiving positioning information of the roadway end identification positioning device and the natural semantic landmark auxiliary positioning device and finishing information interaction with the robot multi-source information fusion navigation device.
7. The coal mine underground auxiliary transportation robot road co-location system according to claim 1, wherein:
the object coordinate system calibration method in the step S1 is that visual labels are placed around an object to be calibrated, data acquisition is carried out by surrounding the object to be calibrated through a camera, after object information is completely acquired, depth information of each frame is converted into point clouds through internal parameter transformation, the point clouds of each frame are projected to an initial frame to realize three-dimensional reconstruction of a natural semantic landmark, an object CAD model is obtained, and then external parameters of the initial frame camera coordinate system are calibrated through a total station and a calibration plate, so that calibration of the natural semantic landmark CAD model, namely, calibration of the object coordinate system is completed;
The method for calibrating the external parameters of the initial-frame camera coordinate system through the total station and the calibration plate specifically comprises: reading the feature point coordinates of the calibration plate with the total station; obtaining a calibration plate image and a depth image with the robot end camera; extracting the corner points with the OpenCV corner-extraction function, and obtaining the three-dimensional coordinates of the corners in the camera coordinate system by applying the camera intrinsics to the depth values at the corresponding positions; measuring the coordinates of the corner points in the world coordinate system with the total station, and obtaining the pose transformation between the camera coordinate system and the world coordinate system by matching the corresponding feature points, thereby completing the external parameter calibration of the camera coordinate system;
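The pose transformation between the camera coordinate system and the world coordinate system from matched corner points can be computed with the SVD-based Kabsch/Umeyama alignment, sketched below (the function name and the use of NumPy are assumptions; with known intrinsics, OpenCV's `solvePnP` is an alternative route):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) aligning point set P (camera frame) to Q
    (world frame, e.g. total-station corner measurements): Q ~= R P + t.
    Requires at least 3 non-collinear point pairs."""
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Given the corner coordinates in the camera frame and their total-station measurements in the world frame, `kabsch` returns the camera extrinsics directly.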
the positioning method of the six-degree-of-freedom pose estimation based on the voting method in the step S2 is as follows: after calibration of a roadway camera under an absolute coordinate system is achieved by using a total station, a robot CAD model is obtained, semantic segmentation is carried out on RGB color images and point cloud information of the auxiliary transportation robot through a pre-trained semantic segmentation network, mask images and mask point clouds of the auxiliary transportation robot are extracted, depth values of corresponding positions in depth images of the auxiliary transportation robot are converted into point cloud data based on camera internal parameters according to the mask images; estimating the pose of the auxiliary transportation robot by utilizing a six-degree-of-freedom pose estimation deep learning network according to the segmented image and the point cloud information to obtain the six-degree-of-freedom pose of the auxiliary transportation robot with confidence, optimizing the pose with highest confidence, and finally obtaining the positioning information of the robot under an absolute coordinate system based on pose transformation of a robot object coordinate system and a camera coordinate system and pose transformation of the camera coordinate system and the absolute coordinate system;
The auxiliary positioning method based on the natural semantic signpost in the step S3 comprises the following steps: after the calibration of a natural semantic landmark object coordinate system is realized by using a total station, a natural semantic landmark CAD model is obtained, the characteristic extraction is carried out by calculating a point characteristic histogram of the natural semantic landmark CAD model through offline processing, and then a color image and point cloud of the natural semantic landmark of the current frame are segmented on line by using a pre-trained semantic segmentation network to obtain an image mask and a point cloud mask; according to the mask image obtained by segmentation, converting the depth value of the corresponding position in the depth image of the auxiliary transport robot into point cloud information based on camera internal parameters; extracting features of the natural semantic landmark point cloud information, primarily estimating the pose of the current frame by matching point pairs of the natural semantic landmark point cloud of the current frame and the features of the natural semantic landmark CAD model, performing pose optimization on the primarily estimated pose again by an ICP algorithm to obtain pose transformation of the robot relative to a natural semantic landmark object coordinate system, and finally obtaining the pose of the robot with six degrees of freedom under an absolute coordinate system by the pose transformation.
8. The coal mine underground auxiliary transportation robot road co-location system according to claim 7, wherein: in step S3, when the illumination in the underground coal mine fails to reach the condition set for the semantic segmentation network, noise is likely to be generated, reducing the segmentation accuracy of the semantic segmentation network and causing false recognition, thereby affecting the positioning accuracy.
CN202310814856.1A 2023-07-05 2023-07-05 Vehicle road co-location system and method for underground coal mine auxiliary transportation robot Active CN116518984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310814856.1A CN116518984B (en) 2023-07-05 2023-07-05 Vehicle road co-location system and method for underground coal mine auxiliary transportation robot

Publications (2)

Publication Number Publication Date
CN116518984A CN116518984A (en) 2023-08-01
CN116518984B true CN116518984B (en) 2023-09-08

Family

ID=87390762

Country Status (1)

Country Link
CN (1) CN116518984B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116698018B (en) * 2023-08-08 2023-10-13 山西戴德测控技术股份有限公司 Navigation positioning auxiliary device and coal mine tunnel navigation positioning system
CN117308900B (en) * 2023-11-30 2024-02-09 中国矿业大学 Underground transport vehicle movement measurement system and carrying traffic state simulation and monitoring method

Citations (7)

Publication number Priority date Publication date Assignee Title
CN111735445A (en) * 2020-06-23 2020-10-02 煤炭科学研究总院 Monocular vision and IMU (inertial measurement Unit) integrated coal mine tunnel inspection robot system and navigation method
CN113485325A (en) * 2021-06-16 2021-10-08 重庆工程职业技术学院 SLAM mapping and autonomous navigation method for underground coal mine water pump house inspection robot
CN115100622A (en) * 2021-12-29 2022-09-23 中国矿业大学 Method for detecting travelable area and automatically avoiding obstacle of unmanned transportation equipment in deep limited space
CN115256414A (en) * 2022-07-21 2022-11-01 中国矿业大学 Mining drilling robot and coupling operation method of mining drilling robot and geological and roadway models
CN115454057A (en) * 2022-08-24 2022-12-09 中国矿业大学 Digital twin intelligent control modeling system and method for coal mine robot group
WO2022257801A1 (en) * 2021-06-09 2022-12-15 山东大学 Slam-based mobile robot mine scene reconstruction method and system
CN115540858A (en) * 2022-09-13 2022-12-30 天地(常州)自动化股份有限公司 High-precision underground vehicle positioning method and system thereof

Non-Patent Citations (1)

Title
Li Menggang et al., "LiDAR/IMU tightly-coupled SLAM method for coal mine mobile robots," Industry and Mine Automation, vol. 48, no. 12, pp. 68-78 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant