CN116858219A - Multi-sensor fusion-based pipe robot map building and navigation method - Google Patents

Multi-sensor fusion-based pipe robot map building and navigation method

Info

Publication number
CN116858219A
Authority
CN
China
Prior art keywords
map
robot
pipeline
magnetic
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310544354.1A
Other languages
Chinese (zh)
Inventor
黄民
雷子禾
唐凯
郎需强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University filed Critical Beijing Information Science and Technology University
Priority to CN202310544354.1A priority Critical patent/CN116858219A/en
Publication of CN116858219A publication Critical patent/CN116858219A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a multi-sensor fusion-based pipeline robot map building and navigation method, which comprises a multi-joint magnetic adsorption pipeline robot, a positioning map building module and an autonomous navigation module. The multi-joint magnetic adsorption pipeline robot comprises a multi-joint magnetic adsorption pipeline robot motion chassis, a monocular camera, a single-line laser radar, an inertial measurement unit and a magnetic induction probe, and the motion chassis comprises a shell, a driving motor, a control board and a vehicle-mounted battery. In the method, the positioning map building module obtains the pose of the robot and builds a navigation map; the autonomous navigation module performs path planning according to the navigation map generated by the positioning map building module and the position of the target point in the environment, and drives the multi-joint magnetic adsorption pipeline robot to move autonomously to the target point by fusing multi-sensor data. The method can realize high-precision map construction inside a steel ventilation duct with the multi-joint magnetic adsorption pipeline robot, improve relocalization efficiency during autonomous navigation, and complete autonomous navigation.

Description

Multi-sensor fusion-based pipe robot map building and navigation method
Technical Field
The invention relates to the technical field of robots, in particular to a multi-sensor fusion-based pipeline robot map building and navigation method.
Background
With the rapid economic development of China, the number of high-rise buildings in cities keeps increasing, and the popularization of central air conditioning and fresh-air systems has made ventilation ducts widely used in large buildings. To ensure the air quality inside a building, the ventilation ducts need to be inspected and cleaned regularly, but special in-duct terrains such as steps, grooves, circular arcs and three-way bends make manual operation inefficient; a pipeline robot can help solve these problems.
To operate autonomously, a pipeline robot needs to perceive the surrounding environment and its own position. Positioning and mapping technology allows the robot to explore position information in an unknown environment, provide a motion trajectory and describe the surrounding scene. Because a ventilation duct is highly self-similar and its pipe wall is highly reflective, a single-line laser radar is prone to mismatching in this environment, the acquired information carries no environmental semantic information, effective obstacle avoidance during navigation cannot be performed, and the relocalization efficiency and accuracy of the robot during navigation are low; the visual feature points that a monocular camera can track in the pipeline environment are sparse and lack depth information, and the accumulated error grows over time, so the positioning accuracy degrades greatly.
Therefore, a multi-sensor fusion mapping and navigation solution is needed to solve the mapping and navigation problems of the pipeline robot in the ventilation pipeline.
Disclosure of Invention
In order to overcome the defects, the invention provides a multi-sensor fusion pipeline robot map building and navigation method, which is used for solving the problem of limitation of sensors caused by special structures and materials of ventilation pipelines.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the magnetic adsorption pipeline robot comprises a magnetic adsorption pipeline robot motion chassis, a monocular camera, a single-line laser radar, an inertial measurement unit and a magnetic induction probe, wherein the magnetic adsorption pipeline robot motion chassis comprises a shell, a driving motor, a driving wheel, an auxiliary wheel, a control board and a vehicle-mounted battery.
As a further scheme of the invention: The driving wheel is mounted on the driving motor and contacts the wall of the ventilation duct to provide power and adsorption force. The driving wheel comprises a driving wheel hub and permanent magnets; the permanent magnets are arranged in grooves on the left and right sides of the driving wheel hub, two permanent magnets are stacked vertically in a special groove of the driving wheel hub, and one permanent magnet is arranged at each of the other positions. The auxiliary wheel contacts the wall of the ventilation duct to provide additional adsorption force and comprises an auxiliary wheel hub and permanent magnets; the permanent magnets are arranged in grooves around the circumference of the auxiliary wheel hub, two permanent magnets are stacked vertically in a special groove of the auxiliary wheel hub, and one permanent magnet is arranged at each of the other positions. When the magnetic adsorption pipeline robot travels, the driving wheel and the auxiliary wheel magnetize the wall of the steel ventilation duct, and a magnetic mark is left wherever the positions fitted with two permanent magnets in the special grooves of the driving wheel and the auxiliary wheel pass; a magnetic mark is a region of the pipe wall whose magnetic induction intensity is higher than that of the surrounding area.
As a further scheme of the invention: the motion chassis of the magnetic adsorption pipeline robot is used for carrying sensor equipment to realize stable operation of the magnetic adsorption pipeline robot, the monocular camera is used for acquiring color information in a pipeline environment, the single-line laser radar is used for acquiring geometric depth information of the environment, the inertial measurement unit is used for acquiring motion information of the magnetic adsorption pipeline robot, and the magnetic induction probe is used for acquiring magnetic field intensity information of a steel ventilating pipeline.
As still further aspects of the invention: The positioning map building module is used for sensing and positioning of the magnetic adsorption pipeline robot in an unknown steel ventilation duct environment and for building a grid map for navigation, and comprises a magnetic positioning map building module, a laser positioning map building module, a visual positioning map building module and a map alignment algorithm.
As still further aspects of the invention: The magnetic positioning map building module is built around the magnetic induction probe; the magnetic induction probe is used to acquire the magnetic paths and magnetic marks left on the steel pipe wall by the driving wheel and the auxiliary wheel, and these are collected to complete the construction of the global magnetic map.
As still further aspects of the invention: The laser positioning mapping module is based on the single-line laser radar and the inertial measurement unit; it acquires the data of both sensors using the Cartographer algorithm, downsamples each frame of laser radar data through a filter, removes point clouds whose position change is too small or whose time interval is too short, then performs local map construction to obtain the pose of the magnetic adsorption pipeline robot, and fuses the inertial measurement unit data to optimize the pose estimate, completing the construction of the global two-dimensional grid map.
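As an illustration of the filtering step above, the following is a minimal sketch of a motion filter that keeps a laser frame only when the robot has moved, turned, or waited long enough since the last accepted frame. The thresholds and the pose representation are assumptions made for the sketch, not values specified by the invention.

```python
import math

class MotionFilter:
    """Keep a lidar frame only if the robot moved, turned, or waited long enough
    since the last accepted frame; otherwise the frame is discarded, as in the
    downsampling step described above. Thresholds are assumed values."""

    def __init__(self, min_distance_m=0.05, min_angle_rad=math.radians(2.0), min_dt_s=0.2):
        self.min_distance = min_distance_m
        self.min_angle = min_angle_rad
        self.min_dt = min_dt_s
        self.last = None  # (timestamp, x, y, yaw) of the last accepted frame

    def accept(self, t, x, y, yaw):
        if self.last is None:
            self.last = (t, x, y, yaw)
            return True
        lt, lx, ly, lyaw = self.last
        moved = math.hypot(x - lx, y - ly) >= self.min_distance
        dyaw = math.atan2(math.sin(yaw - lyaw), math.cos(yaw - lyaw))  # wrapped yaw change
        turned = abs(dyaw) >= self.min_angle
        waited = (t - lt) >= self.min_dt
        if moved or turned or waited:
            self.last = (t, x, y, yaw)
            return True
        return False
```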
As still further aspects of the invention: The visual positioning and mapping module is built around the monocular camera; it acquires monocular camera data using the ORB-SLAM2 algorithm, performs inter-frame matching of ORB feature points to localize the magnetic adsorption pipeline robot and obtain its pose, constructs a three-dimensional point cloud map and retains bag-of-words information.
As still further aspects of the invention: The map alignment algorithm comprises two parts: single-line laser radar and monocular camera calibration, and a pose replacement algorithm. The single-line laser radar and monocular camera calibration establishes the positional relationship between the sensors, and the pose replacement algorithm replaces the pose of the magnetic adsorption pipeline robot generated by the visual positioning and mapping module with the pose of the magnetic adsorption pipeline robot from the laser positioning mapping module at the adjacent time, so that the two-dimensional grid map and the three-dimensional point cloud map are aligned.
Preferably, the single-line laser radar and monocular camera calibration method is as follows. When the monocular camera acquires an image of the calibration plate, the calibration-plate plane is parameterized in the camera coordinate system $c$ as $\pi^c = [n^c, d] \in \mathbb{R}^4$, where $n^c \in \mathbb{R}^3$ is the unit normal of the plane and $d$ is the distance from the origin of the camera coordinate system to the plane. A three-dimensional point on the plane, written in the camera coordinate system as $P^c \in \mathbb{R}^3$, satisfies the plane constraint $n^{c\top} P^c + d = 0$. Let $R_{cl}, t_{cl}$ be the rotation and translation from the laser coordinate system $l$ to the camera coordinate system $c$; when a laser point $P^l$ expressed in the laser coordinate system falls on the calibration plate, the constraint of points on the plane gives the extrinsic-parameter equation $n^{c\top}(R_{cl} P^l + t_{cl}) + d = 0$. The transformation of a laser point from the laser coordinate system to the camera coordinate system is $P^c = R_{cl} P^l + t_{cl}$. The plane swept by the two-dimensional laser is taken as the $xy$ plane, i.e. $z = 0$, so that $P^l = [x, y, 0]^\top$ and the coordinate transformation can be written as
$$P^c = \begin{bmatrix} r_1 & r_2 & t_{cl} \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},$$
where $r_1$ and $r_2$ are the first two columns of $R_{cl}$. $H$ is estimated as a new unknown quantity, and $R_{cl}, t_{cl}$ are recovered from the solved $H$: the desired rotation matrix $\hat{R}_{cl}$ is estimated by minimizing the Frobenius norm $\lVert \hat{R}_{cl} - [\, r_1 \;\; r_2 \;\; r_1 \times r_2 \,] \rVert_F$ under the orthogonality constraint $\hat{R}_{cl}^\top \hat{R}_{cl} = I$. After this initial value of the extrinsic parameters is computed, the $N$ groups of acquired data are jointly optimized to obtain the optimal estimate. The $i$-th laser frame has $N_i$ laser points $P^l_{ij}$ falling on the calibration plate, and the calibration-plate plane corresponding to the $i$-th frame is $\pi_i = [n_i, d_i]$. The joint optimization is
$$\min_{R_{cl},\, t_{cl}} \; \sum_{i=1}^{N} \sum_{j=1}^{N_i} \bigl( n_i^{\top} (R_{cl} P^l_{ij} + t_{cl}) + d_i \bigr)^2 .$$
preferably, the pose replacement algorithm is to replace the pose of the robot with the current time stamp acquired by the ORB-SLAM2 algorithm in the visual positioning mapping module by using a position "P" generated by the cartograph algorithm in the visual positioning mapping module close to the time stamp in a linear interpolation mode. The pose a of the magnetic adsorption pipeline robot is expressed as (p, q), wherein p is in the form of (x, y, z) translation vectors, and q is a quaternion. For time point t 1 、t 2 、t 3 The corresponding pose is a 1 、a 2 、a 3 Can be expressed as (p) 1 ,q 1 ),(p 2 ,q 2 ),(p 3 ,q 3 ). Use a 1 And a 2 Calculating a 3 Firstly, calculating interpolation coefficient k of the interpolation coefficient as follows:the interpolation theory of translation vector and unit quaternion can be represented by a 1 、a 2 Obtaining the pose a 3 :p 3 =p 1 +k*(p 2 -p 1 )。
As still further aspects of the invention: The autonomous navigation module comprises a path planning module, a special topography recognition module in the pipeline and an autonomous obstacle surmounting module, so that the magnetic adsorption pipeline robot can finally move autonomously from the current position to the target position in the steel ventilation duct environment.
As still further aspects of the invention: The path planning module performs global matching according to the current visual information and the bag-of-words information to relocalize the pipeline robot and calibrate its current position; it then uses the A* algorithm to search a path from the current position to the target point and generate a global path, removes redundant turning points from the generated path to optimize it, performs local path planning with the dynamic window method when the magnetic adsorption pipeline robot interacts with the steel ventilation duct environment to achieve dynamic obstacle avoidance, and achieves path following by acquiring magnetic induction probe data to confirm that the current path is the path traversed during mapping.
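For illustration, the sketch below pairs a minimal grid A* search with a line-of-sight pass that removes redundant turning points, matching the two planning steps just described. The grid encoding (0 = free, 1 = occupied), the 4-connected moves and the Manhattan heuristic are assumptions of the sketch rather than details specified by the invention.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal grid A*: 4-connected moves, unit step cost, Manhattan heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    g_best, came_from, closed = {start: 0}, {start: None}, set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                 # rebuild the path start -> goal
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] != 0 or nxt in closed:
                continue
            ng = g_best[node] + 1
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                          # no path between start and goal

def line_free(grid, a, b):
    """Bresenham walk from cell a to cell b; True if every traversed cell is free."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 >= x0 else -1), (1 if y1 >= y0 else -1)
    err, x, y = dx - dy, x0, y0
    while True:
        if grid[x][y] != 0:
            return False
        if (x, y) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x += sx
        if e2 < dx:
            err += dx; y += sy

def prune_turning_points(grid, path):
    """Drop waypoints that lie on a straight, collision-free segment between their
    neighbours, keeping only the turning points that are actually needed."""
    if not path or len(path) <= 2:
        return list(path) if path else path
    pruned, anchor = [path[0]], 0
    for i in range(2, len(path)):
        if not line_free(grid, path[anchor], path[i]):
            pruned.append(path[i - 1])
            anchor = i - 1
    pruned.append(path[-1])
    return pruned
```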
As still further aspects of the invention: The special topography recognition module in the pipeline performs feature recognition on the image information acquired by the monocular camera using the YOLOv5 algorithm, recognizing the types of ascending sections, descending sections, left turns, right turns and obstacles in the steel ventilation duct.
As still further aspects of the invention: The autonomous obstacle surmounting module enables the magnetic adsorption pipeline robot, after acquiring the current ventilation duct special topography type output by the special topography recognition module in the pipeline, to autonomously select a suitable gait and complete autonomous obstacle surmounting.
A multi-sensor fusion pipeline robot map building and navigation method comprises the following steps:
step one: the single-line laser radar and the monocular camera are calibrated to align the single-line laser radar data with the monocular camera data, and the rotation matrix between the single-line laser radar and the monocular camera is solved by searching three-dimensional points of the single-line laser radar and the corresponding two-dimensional points detected by the monocular camera;
step two: when the magnetic adsorption pipeline robot is placed in an unknown steel ventilation duct environment and moves in the duct, the driving wheel and the auxiliary wheel magnetize the steel duct wall; the magnetic positioning mapping module collects the magnetic paths and magnetic marks left on the steel duct wall by the driving wheel and the auxiliary wheel, acquired by the magnetic induction probe, outputs the actual motion path of the magnetic adsorption pipeline robot, and constructs a global magnetic map; the laser positioning mapping module acquires data of the single-line laser radar and the inertial measurement unit, downsamples each frame of laser radar data through a filter, removes point clouds whose position change is too small or whose time interval is too short, then performs local map construction to obtain the robot pose, and fuses the inertial measurement unit data to optimize the pose estimate generated by the laser positioning mapping module; the visual positioning and mapping module acquires monocular camera data, performs inter-frame matching on the ORB feature points to localize the magnetic adsorption pipeline robot and obtain its pose, and retains bag-of-words information;
step three: the pose of the magnetic adsorption pipeline robot at the current time stamp acquired in the visual positioning mapping module is replaced, by linear interpolation, with the pose generated in the laser positioning mapping module at the nearest time stamps, so that the finally generated three-dimensional point cloud map and two-dimensional grid map are aligned;
step four: after the three-dimensional point cloud map and the two-dimensional grid map for navigation are obtained, global matching against the three-dimensional point cloud map is performed according to the current visual information and the bag-of-words information to complete relocalization and calibrate the current position; the path planning module searches a path from the current position to the target point to generate a global path, removes redundant turning points from the generated path and optimizes it; the multi-joint magnetic adsorption pipeline robot performs local path planning with the dynamic window method when interacting with the environment to achieve dynamic obstacle avoidance, and achieves path following by acquiring magnetic induction probe data to confirm that the current path is the path traversed during mapping;
step five: the special topography recognition module in the pipeline performs feature recognition on the image information acquired by the monocular camera to obtain the special topography information of the ventilation duct; after acquiring the current duct special topography type output by the special topography recognition module in the pipeline, the magnetic adsorption pipeline robot autonomously selects a suitable gait to complete autonomous obstacle surmounting.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a multi-sensor fusion pipeline robot map building and navigation method, which is characterized in that a monocular camera, a single-line laser radar, an inertial measurement unit and a magnetic induction probe are carried, a magnetic path and a magnetic mark of a magnetic adsorption wheel on a steel ventilating duct are acquired by using the magnetic induction probe, the actual motion path of a multi-joint magnetic adsorption pipeline robot is obtained, a relatively mature and high-precision cartograph algorithm is used for combining an ORB-SLAM2 algorithm, so that the map building of the multi-joint magnetic adsorption pipeline robot on the environment of an unknown steel ventilating duct is realized, and the efficiency and the accuracy of the multi-joint magnetic adsorption pipeline robot in autonomous navigation are effectively improved. In the autonomous navigation process, the multi-joint magnetic adsorption pipeline robot relocates according to the three-dimensional point cloud map of the ventilating pipeline generated by the visual positioning and mapping module, and the initial position is calibrated; automatically generating a global path for navigation according to the two-dimensional grid map of the ventilating duct, the initial position and the target point position generated by the laser positioning mapping module; in the navigation process, a dynamic window method is used for carrying out dynamic obstacle avoidance, path following is realized by acquiring magnetic induction probe data, a walking path is determined when a current path is a map building, and a target detection algorithm is used for identifying special topography of the ventilating duct, so that the autonomous obstacle crossing of the pipeline robot is realized. By the method, accurate mapping and autonomous navigation of the multi-joint magnetic adsorption pipeline robot in the pipeline are realized.
Drawings
FIG. 1 is a schematic diagram of a magnetic attraction pipeline robot in a pipeline robot mapping and navigation method based on multi-sensor fusion according to an embodiment of the disclosure;
FIG. 2 is a flow chart of main steps of a multi-sensor fusion-based pipe robot mapping and navigation method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of the main steps of a multi-sensor fusion-based pipeline robot mapping method in accordance with an embodiment of the present disclosure;
FIG. 4 is a flow chart of the main steps of a multi-sensor fusion based pipe robot navigation method according to an embodiment of the present disclosure;
Reference numerals in the figures: single-line laser radar 1, monocular camera 2, shell 3, magnetic induction probe 4, driving wheel 5, auxiliary wheel 6.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
Referring to fig. 2, fig. 2 is a schematic flow chart of main steps of a method for constructing and navigating a pipeline robot based on multi-sensor fusion, and the method includes: a pipeline robot mapping method based on multi-sensor fusion; a pipeline robot navigation method based on multi-sensor fusion.
The monocular camera 2 is used for acquiring color information in the pipeline environment, the single-line laser radar 1 is used for acquiring geometric depth information of the environment, the inertial measurement unit is used for acquiring motion information of the multi-joint magnetic adsorption pipeline robot, and the magnetic induction probe 4 is used for acquiring magnetic field intensity information of the steel ventilation duct magnetized by the driving wheel 5 and the auxiliary wheel 6. The driving wheel 5 is mounted on the driving motor and contacts the duct wall to provide power and adsorption force; it comprises a driving wheel hub and permanent magnets, the permanent magnets being arranged in grooves on the left and right sides of the driving wheel hub, with two permanent magnets stacked vertically in a special groove of the driving wheel hub and one permanent magnet at each of the other positions. The auxiliary wheel 6 contacts the duct wall to provide additional adsorption force; it comprises an auxiliary wheel hub and permanent magnets, the permanent magnets being arranged in grooves around the circumference of the auxiliary wheel hub, with two permanent magnets stacked vertically in a special groove of the auxiliary wheel hub and one permanent magnet at each of the other positions. When the magnetic adsorption pipeline robot travels, the driving wheel 5 and the auxiliary wheel 6 magnetize the wall of the steel ventilation duct, and magnetization information is left wherever the positions fitted with two permanent magnets in the special grooves of the driving wheel 5 and the auxiliary wheel 6 pass; the magnetization information comprises magnetic paths and magnetic marks, a magnetic mark being a region whose magnetic induction intensity is higher than that of the surrounding area.
The multi-sensor fusion-based pipeline robot mapping method is used for positioning the magnetic adsorption pipeline robot in an unknown steel ventilation duct environment and constructing a navigation map. It comprises a magnetic positioning map building module, a laser positioning map building module, a visual positioning map building module and a map alignment algorithm, wherein the navigation map comprises a two-dimensional grid map and a three-dimensional point cloud map.
The magnetic positioning map building module acquires the magnetic paths and magnetic marks left by the driving wheel 5 and the auxiliary wheel 6 on the steel ventilation duct using the magnetic induction probe 4 to obtain the actual motion path of the magnetic adsorption pipeline robot; the laser positioning map building module is used for positioning the magnetic adsorption pipeline robot based on the single-line laser radar 1 and the inertial measurement unit and constructing a two-dimensional grid map; the visual positioning map building module is used for positioning the magnetic adsorption pipeline robot based on the monocular camera 2 and constructing a three-dimensional point cloud map; and the map alignment algorithm is used for replacing the pose of the visual positioning map building module with the pose of the laser positioning map building module to align the two-dimensional grid map and the three-dimensional point cloud map.
Referring to fig. 3, fig. 3 is a flow chart of the main steps of the multi-sensor fusion-based pipeline robot mapping method. In this method, the magnetic positioning mapping module uses the magnetic induction probe 4 to collect the magnetic paths and magnetic marks left by the driving wheel 5 and the auxiliary wheel 6 on the steel ventilation duct and outputs the magnetic positioning result and the magnetic map; the visual positioning mapping module subscribes to the color image information of the monocular camera 2 and outputs the visual positioning pose and the visual bag-of-words information; the laser positioning mapping module subscribes to the single-line laser radar 1 and inertial measurement unit information and outputs the laser positioning pose and the grid map; according to the visual positioning pose and the laser positioning pose, the pose of the visual positioning mapping module is replaced, the pose of the laser positioning mapping module being taken as the reference, and the two-dimensional grid map and the three-dimensional point cloud map are aligned according to the robot pose; during the mapping process of the magnetic adsorption pipeline robot, the maps are updated according to the robot pose, the two-dimensional grid map, the three-dimensional point cloud map and the map alignment result, and real-time map information is output.
The magnetic positioning map building module is formed based on the magnetic induction probe 4, and the magnetic induction probe 4 is used for obtaining magnetization information left on the steel pipe wall by the driving wheel 5 and the auxiliary wheel 6, wherein the magnetization information comprises a magnetic path and a magnetic mark, and the magnetic path and the magnetic mark are collected to complete the construction of a global magnetic map.
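A minimal sketch of how the magnetic marks might be separated from the ordinary magnetic path in the probe readings is given below: samples whose induction clearly exceeds a local moving-average baseline are treated as marks. The sample layout (x, y, B), the window size and the threshold ratio are assumptions for illustration only.

```python
import numpy as np

def build_magnetic_map(samples, window=25, ratio=1.5):
    """Split probe readings into the magnetic path (every magnetized point) and
    the magnetic marks (points whose induction exceeds the local baseline).
    `samples` is a list of (x, y, B) tuples along the traversed path (assumed format)."""
    if not samples:
        return [], []
    b = np.array([s[2] for s in samples], dtype=float)
    baseline = np.convolve(b, np.ones(window) / window, mode="same")  # local mean induction
    marks = [samples[i][:2] for i in range(len(samples)) if b[i] > ratio * baseline[i]]
    path = [s[:2] for s in samples]
    return path, marks  # together they form the global magnetic map
```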
The laser positioning map building module acquires the two sensor data streams of the laser radar and the inertial measurement unit using the Cartographer algorithm, downsamples each frame of laser radar data through a filter, and removes point clouds whose position change is too small or whose time interval is too short. If a frame of laser radar data passes the filter, it is used to update a submap; local map building is then performed to obtain the pose of the magnetic adsorption pipeline robot, and the inertial measurement unit data are fused to optimize the pose estimate. When a submap is completed and no longer receives new scan data, its data are added to the global map building to form the global constraints that participate in back-end loop detection.
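The submap construction can be pictured with a generic log-odds occupancy grid update, sketched below. This is a simplification for illustration and not Cartographer's actual probability-grid update; the grid size, hit/miss increments and clamping limits are assumed values.

```python
import numpy as np

class OccupancyGrid:
    """Generic log-odds occupancy update: cells along a beam receive a miss
    increment, beam endpoints receive a hit increment, both clamped."""

    def __init__(self, size=400, l_hit=0.85, l_miss=-0.4, l_min=-4.0, l_max=4.0):
        self.logodds = np.zeros((size, size))
        self.l_hit, self.l_miss = l_hit, l_miss
        self.l_min, self.l_max = l_min, l_max

    def update(self, hit_cells, free_cells):
        for (i, j) in free_cells:
            self.logodds[i, j] = max(self.l_min, self.logodds[i, j] + self.l_miss)
        for (i, j) in hit_cells:
            self.logodds[i, j] = min(self.l_max, self.logodds[i, j] + self.l_hit)

    def probability(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))  # per-cell occupancy probability
```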
The visual positioning and mapping module acquires monocular camera 2 data using the ORB-SLAM2 algorithm. It selects suitable frames as keyframes through the covisibility relationship between frames, updates keyframes and local map points, removes mismatches according to the pose, and stores keyframes and map points as the basis for performing relocalization or selecting keyframes, writing keyframes into the keyframe list. New keyframes are used to optimize the poses of local map points and keyframes and to finish screening and adding map points; redundant keyframes are finally deleted, and accumulated error is reduced through the correction provided by loop detection.
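The inter-frame ORB feature matching that this front end relies on can be illustrated with OpenCV, as in the sketch below. This is a stand-in for ORB-SLAM2's own matcher rather than its implementation; the feature count and ratio-test threshold are assumed values.

```python
import cv2

def match_orb(prev_gray, curr_gray, n_features=1000, ratio=0.75):
    """Detect ORB features in two consecutive grayscale frames and keep the
    matches that pass Lowe's ratio test; returns matched pixel coordinate pairs."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append((kp1[pair[0].queryIdx].pt, kp2[pair[0].trainIdx].pt))
    return good
```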
The map alignment algorithm comprises two parts: the calibration of the single-line laser radar 1 and the monocular camera 2, and the pose replacement algorithm.
The single-line laser radar 1 and the monocular camera 2 are calibrated to establish the positional relationship between the sensors, and the pose replacement algorithm replaces the pose of the magnetic adsorption pipeline robot generated by the visual positioning and mapping module with the pose of the magnetic adsorption pipeline robot from the laser positioning mapping module at the adjacent time, so that the two-dimensional grid map and the three-dimensional point cloud map are aligned.
The calibration method of the single-line laser radar 1 and the monocular camera 2 is as follows. When the monocular camera 2 acquires an image of the calibration plate, the calibration-plate plane is parameterized in the camera coordinate system $c$ as $\pi^c = [n^c, d] \in \mathbb{R}^4$, where $n^c \in \mathbb{R}^3$ is the unit normal of the plane and $d$ is the distance from the origin of the camera coordinate system to the plane. A three-dimensional point on the plane, written in the camera coordinate system as $P^c \in \mathbb{R}^3$, satisfies the plane constraint $n^{c\top} P^c + d = 0$. Let $R_{cl}, t_{cl}$ be the rotation and translation from the laser coordinate system $l$ to the camera coordinate system $c$; when a laser point $P^l$ expressed in the laser coordinate system falls on the calibration plate, the constraint of points on the plane gives the extrinsic-parameter equation $n^{c\top}(R_{cl} P^l + t_{cl}) + d = 0$. The transformation of a laser point from the laser coordinate system to the camera coordinate system is $P^c = R_{cl} P^l + t_{cl}$. The plane swept by the two-dimensional laser is taken as the $xy$ plane, i.e. $z = 0$, so that $P^l = [x, y, 0]^\top$ and the coordinate transformation can be written as
$$P^c = \begin{bmatrix} r_1 & r_2 & t_{cl} \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},$$
where $r_1$ and $r_2$ are the first two columns of $R_{cl}$. $H$ is estimated as a new unknown quantity, and $R_{cl}, t_{cl}$ are recovered from the solved $H$: the desired rotation matrix $\hat{R}_{cl}$ is estimated by minimizing the Frobenius norm $\lVert \hat{R}_{cl} - [\, r_1 \;\; r_2 \;\; r_1 \times r_2 \,] \rVert_F$ under the orthogonality constraint $\hat{R}_{cl}^\top \hat{R}_{cl} = I$. After this initial value of the extrinsic parameters is computed, the $N$ groups of acquired data are jointly optimized to obtain the optimal estimate. The $i$-th laser frame has $N_i$ laser points $P^l_{ij}$ falling on the calibration plate, and the calibration-plate plane corresponding to the $i$-th frame is $\pi_i = [n_i, d_i]$. The joint optimization is
$$\min_{R_{cl},\, t_{cl}} \; \sum_{i=1}^{N} \sum_{j=1}^{N_i} \bigl( n_i^{\top} (R_{cl} P^l_{ij} + t_{cl}) + d_i \bigr)^2 .$$
the pose replacement algorithm is to replace the pose of the magnetic adsorption pipeline robot with the current time stamp, which is acquired by the ORB-SLAM2 algorithm in the visual positioning mapping module, by using the position 'P' generated by the Cartograph algorithm in the visual positioning mapping module close to the time stamp in a linear interpolation mode. The pose a of the magnetic adsorption pipeline robot is expressed as (p, q), wherein p is in the form of (x, y, z) translation vectors, and q is a quaternion. For time point t 1 、t 2 、t 3 The corresponding pose is a 1 、a 2 、a 3 Can be expressed as (p) 1 ,q 1 ),(p 2 ,q 2 ),(p 3 ,q 3 ). Use a 1 And a 2 Calculating a 3 Firstly, calculating interpolation coefficient k of the interpolation coefficient as follows:the interpolation theory of translation vector and unit quaternion can be represented by a 1 、a 2 Obtaining the pose a 3 :p 3 =p 1 +k*(p 2 -p 1 )。
The multi-sensor fusion-based pipeline robot navigation method comprises a path planning module, a special topography recognition module in the pipeline and an autonomous obstacle surmounting module.
Referring to fig. 4, fig. 4 is a flow chart of the main steps of the multi-sensor fusion-based pipeline robot navigation method. In the navigation flow, the magnetic adsorption pipeline robot performs global matching according to the visual information and the bag-of-words information to relocalize and obtain its current position; the path planning module plans a global path from the current position to the target position based on the two-dimensional grid map; during autonomous navigation, the local path of the pipeline robot is dynamically adjusted according to obstacles; path following is achieved by acquiring the data of the magnetic induction probe 4 to confirm that the current path is the path traversed during mapping; the special topography of the pipeline is recognized with the YOLOv5 algorithm, and the magnetic adsorption pipeline robot autonomously selects a suitable gait according to the topography type to complete obstacle surmounting, so that the magnetic adsorption pipeline robot can finally navigate autonomously from the current position to the target position in the ventilation duct environment.
The path planning module performs global matching according to the current visual information and the bag-of-words information output by the visual positioning and mapping module, relocalizes the magnetic adsorption pipeline robot in the environment and calibrates its current position. It then uses the A* algorithm to search a path from the current position to the target point and generate a global path, removes redundant turning points from the generated path to optimize it, performs local path planning with the dynamic window method when the magnetic adsorption pipeline robot interacts with the environment to achieve dynamic obstacle avoidance, and achieves path following by acquiring the data of the magnetic induction probe 4 to confirm that the current path is the path traversed during mapping.
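The local planning with the dynamic window method can be pictured with the minimal single-step sketch below: admissible (v, w) pairs are sampled inside the dynamic window, rolled forward over a short horizon, and scored by goal heading, obstacle clearance and speed. The velocity limits, weights, sampling resolution and robot radius are assumed values, not parameters of the invention.

```python
import math

def dwa_step(pose, v, w, goal, obstacles, dt=0.1, horizon=1.5,
             v_max=0.3, w_max=1.0, a_max=0.5, aw_max=2.0,
             w_goal=1.0, w_clear=0.6, w_speed=0.2, robot_radius=0.12):
    """One dynamic-window step: sample velocities reachable within dt, forward
    simulate each candidate, and return the best-scoring collision-free command."""
    x, y, th = pose
    best_cmd, best_score = (0.0, 0.0), -float("inf")
    v_lo, v_hi = max(0.0, v - a_max * dt), min(v_max, v + a_max * dt)
    w_lo, w_hi = max(-w_max, w - aw_max * dt), min(w_max, w + aw_max * dt)
    for i in range(7):                      # linear velocity samples
        for j in range(11):                 # angular velocity samples
            cv = v_lo + (v_hi - v_lo) * i / 6
            cw = w_lo + (w_hi - w_lo) * j / 10
            px, py, pth = x, y, th
            clearance, t = float("inf"), 0.0
            while t < horizon:              # roll the candidate forward
                px += cv * math.cos(pth) * dt
                py += cv * math.sin(pth) * dt
                pth += cw * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(px - ox, py - oy))
                t += dt
            if clearance < robot_radius:
                continue                    # candidate trajectory collides
            ang = math.atan2(goal[1] - py, goal[0] - px)
            heading = -abs(math.atan2(math.sin(ang - pth), math.cos(ang - pth)))
            score = w_goal * heading + w_clear * min(clearance, 1.0) + w_speed * cv
            if score > best_score:
                best_score, best_cmd = score, (cv, cw)
    return best_cmd                         # (linear, angular) velocity command
```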
The special topography recognition module in the pipeline performs feature recognition on the image information acquired by the monocular camera 2 using the YOLOv5 algorithm, so that the semantic information of an object and its position in the image can be recognized; the semantic information of the object comprises the types of ascending sections, descending sections, left turns, right turns and obstacles in the ventilation duct.
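This recognition step can be sketched with the public YOLOv5 hub interface, as below. The weight file name and the class labels are assumed placeholders for illustration; they are not specified above.

```python
import numpy as np
import torch

# Load a YOLOv5 model trained on the duct terrain classes; "duct_terrain.pt" is
# an assumed placeholder for the trained weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="duct_terrain.pt")

def classify_terrain(bgr_image, conf_threshold=0.5):
    """Run one monocular frame through the detector and return
    (label, confidence, box) tuples; labels such as 'ascend', 'descend',
    'turn_left', 'turn_right', 'obstacle' are assumed class names."""
    rgb = np.ascontiguousarray(bgr_image[..., ::-1])   # YOLOv5 hub models expect RGB
    results = model(rgb)
    detections = results.pandas().xyxy[0]
    detections = detections[detections["confidence"] >= conf_threshold]
    return [(row["name"], float(row["confidence"]),
             (row["xmin"], row["ymin"], row["xmax"], row["ymax"]))
            for _, row in detections.iterrows()]
```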
The autonomous obstacle surmounting module enables the magnetic adsorption pipeline robot, after acquiring the current ventilation duct special topography type output by the special topography recognition module in the pipeline, to autonomously select a suitable gait and complete autonomous obstacle surmounting.
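The gait selection itself can be reduced to a lookup from the recognized topography class to a gait routine, as in the sketch below. The gait names are placeholders, since the joint trajectories themselves are not enumerated here.

```python
# Hypothetical topography-to-gait dispatch; all gait names are placeholders.
GAIT_FOR_TERRAIN = {
    "ascend": "climb_step_gait",
    "descend": "descend_step_gait",
    "turn_left": "articulated_left_turn",
    "turn_right": "articulated_right_turn",
    "obstacle": "lift_front_joint_gait",
}

def select_gait(terrain_label):
    """Map the recognized duct topography class to a gait routine name."""
    return GAIT_FOR_TERRAIN.get(terrain_label, "default_crawl")
```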
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations, including to the components, can be made to these embodiments without departing from the principles and spirit of the invention, and such variations still fall within the scope of the invention.

Claims (8)

1. A multi-sensor fusion-based pipeline robot map building and navigation method, comprising a multi-joint magnetic adsorption pipeline robot, a positioning map building module and an autonomous navigation module, wherein the multi-joint magnetic adsorption pipeline robot comprises a multi-joint magnetic adsorption pipeline robot motion chassis, a monocular camera (2), a single-line laser radar (1), an inertial measurement unit and a magnetic induction probe (4), and the multi-joint magnetic adsorption pipeline robot motion chassis comprises a shell (3), a driving motor, a driving wheel (5), an auxiliary wheel (6), a control board and a vehicle-mounted battery.
2. The multi-sensor fusion-based pipe robot mapping and navigation method of claim 1, wherein: the single-line laser radar (1) is used for acquiring geometric depth information of the environment, the inertial measurement unit is used for acquiring motion information of the multi-joint magnetic adsorption pipeline robot, and the magnetic induction probe (4) is used for acquiring magnetic field intensity information of the steel ventilation duct magnetized by the driving wheel (5) and the auxiliary wheel (6); the driving wheel (5) is mounted on the driving motor and contacts the duct wall to provide power and adsorption force, and comprises a driving wheel hub and permanent magnets, the permanent magnets being arranged in grooves on the left and right sides of the driving wheel hub, with two permanent magnets stacked vertically in a special groove of the driving wheel hub and one permanent magnet at each of the other positions; the auxiliary wheel (6) contacts the duct wall to provide additional adsorption force, and comprises an auxiliary wheel hub and permanent magnets, the permanent magnets being arranged in grooves around the circumference of the auxiliary wheel hub, with two permanent magnets stacked vertically in a special groove of the auxiliary wheel hub and one permanent magnet at each of the other positions; when the multi-joint magnetic adsorption pipeline robot travels, the driving wheel (5) and the auxiliary wheel (6) magnetize the wall of the steel ventilation duct, and a magnetic mark is left wherever the positions fitted with two permanent magnets in the special grooves of the driving wheel (5) and the auxiliary wheel (6) pass, a magnetic mark being a region of the duct wall whose magnetic induction intensity is higher than that of the surrounding area.
3. The multi-sensor fusion-based pipe robot mapping and navigation method of claim 1, wherein: the positioning map building module comprises a magnetic positioning map building module, a laser positioning map building module, a visual positioning map building module and a map alignment algorithm; the magnetic positioning map building module is built around the magnetic induction probe (4) and uses the magnetic induction probe (4) to acquire the magnetic paths and magnetic marks left on the steel pipe wall by the driving wheel (5) and the auxiliary wheel (6), collecting them to build a global magnetic map; the laser positioning map building module is used for positioning the multi-joint magnetic adsorption pipeline robot in an unknown ventilation duct environment and building a two-dimensional grid map for navigation; the visual positioning map building module is used for building a three-dimensional point cloud map and bag-of-words information for navigation relocalization of the multi-joint magnetic adsorption pipeline robot in the unknown ventilation duct environment; and the map alignment algorithm is used for aligning the two-dimensional grid map output by the laser positioning map building module with the three-dimensional point cloud map output by the visual positioning map building module.
4. The multi-sensor fusion-based pipe robot mapping and navigation method of claim 1, wherein: the autonomous navigation module comprises a path planning module, a special topography recognition module in the pipeline and an autonomous obstacle crossing module; the path planning module is used for generating a global path of the pipeline robot from a starting point to a target point, performing local path planning according to local obstacles during autonomous navigation of the pipeline robot to achieve autonomous obstacle avoidance, and achieving path following by acquiring magnetic induction probe (4) data to confirm that the current path is the path traversed during mapping; the special topography recognition module in the pipeline is used for recognizing the special topography types existing in the pipeline; and the autonomous obstacle crossing module is used for enabling the pipeline robot to cross obstacles autonomously according to the special topography type.
5. The multi-sensor fusion-based pipe robot mapping and navigation method of claim 3, wherein: the map alignment algorithm comprises a single-line laser radar (1) and monocular camera (2) calibration and a pose replacement algorithm; the single-line laser radar (1) and monocular camera (2) calibration establishes the positional correspondence between the single-line laser radar (1) and the monocular camera (2), and the pose replacement algorithm replaces the pose of the robot generated by the visual positioning mapping module with the pose of the robot from the laser positioning mapping module at the adjacent time, so that the two-dimensional grid map and the three-dimensional point cloud map are aligned.
6. The multi-sensor fusion-based pipe robot mapping and navigation method of claim 5, wherein: the calibration method of the single-line laser radar (1) and the monocular camera (2) solves the rotation matrix between the single-line laser radar (1) and the monocular camera (2) by searching three-dimensional points of the single-line laser radar (1) and the corresponding two-dimensional points detected by the monocular camera (2).
7. The multi-sensor fusion-based pipe robot mapping and navigation method of claim 5, wherein: the pose replacement algorithm replaces the pose of the robot at the current time stamp, acquired by the ORB-SLAM2 algorithm in the visual positioning mapping module, with a pose obtained by linear interpolation of the poses generated by the Cartographer algorithm in the laser positioning mapping module at the nearest time stamps; a pose $a$ of the mobile robot is expressed as $(p, q)$, where $p$ is a translation vector of the form $(x, y, z)$ and $q$ is a unit quaternion; for time points $t_1$, $t_2$, $t_3$ the corresponding poses $a_1$, $a_2$, $a_3$ can be expressed as $(p_1, q_1)$, $(p_2, q_2)$, $(p_3, q_3)$; using $a_1$ and $a_2$ to compute $a_3$, the interpolation coefficient is first computed as $k = (t_3 - t_1)/(t_2 - t_1)$, and by the interpolation theory of translation vectors and unit quaternions the pose $a_3$ is obtained from $a_1$ and $a_2$ as $p_3 = p_1 + k\,(p_2 - p_1)$, with $q_3$ given by spherical linear interpolation between $q_1$ and $q_2$ at parameter $k$.
8. The multi-sensor fusion-based pipe robot mapping and navigation method of claim 5, wherein the method comprises the following steps:
step one: the single-line laser radar (1) and the monocular camera (2) are calibrated to align the data of the single-line laser radar (1) and the monocular camera (2), and the rotation matrix between the single-line laser radar (1) and the monocular camera (2) is solved by searching three-dimensional points of the single-line laser radar (1) and the corresponding two-dimensional points detected by the monocular camera (2);
step two: when the multi-joint magnetic adsorption pipeline robot is placed in an unknown steel ventilation duct environment, the magnetic adsorption wheels magnetize the steel duct wall; the magnetic positioning mapping module acquires the magnetic paths and magnetic marks left by the driving wheel (5) and the auxiliary wheel (6) on the steel duct wall, obtains the actual motion path of the multi-joint magnetic adsorption pipeline robot, and constructs a global magnetic map; the laser positioning mapping module acquires data of the single-line laser radar (1) and the inertial measurement unit, downsamples each frame of laser radar data through a filter, removes point clouds whose position change is too small or whose time interval is too short, then performs local map construction to obtain the robot pose, and fuses the inertial measurement unit data to optimize the pose estimate generated by the laser positioning mapping module; the visual positioning mapping module acquires monocular camera (2) data, performs inter-frame matching on ORB feature points to localize the multi-joint magnetic adsorption pipeline robot, and retains bag-of-words information;
step three: the pose of the multi-joint magnetic adsorption pipeline robot at the current time stamp acquired in the visual positioning mapping module is replaced, by linear interpolation, with the pose generated in the laser positioning mapping module at the nearest time stamps, so that the finally generated three-dimensional point cloud map and two-dimensional grid map are aligned;
step four: after the three-dimensional point cloud map and the two-dimensional grid map for navigation are obtained, global matching against the three-dimensional point cloud map is performed according to the current visual information and the bag-of-words information to complete relocalization and calibrate the current position; the path planning module searches a path from the current position to the target point to generate a global path, removes redundant turning points from the generated path and optimizes it; the multi-joint magnetic adsorption pipeline robot performs local path planning with the dynamic window method when interacting with the environment to achieve dynamic obstacle avoidance, and achieves path following by acquiring magnetic induction probe (4) data to confirm that the current path is the path traversed during mapping;
step five: the special topography recognition module in the pipeline performs feature recognition on the image information acquired by the monocular camera (2) to obtain the special topography information of the ventilation duct; after acquiring the current duct special topography type output by the special topography recognition module in the pipeline, the multi-joint magnetic adsorption pipeline robot autonomously selects a suitable gait to complete autonomous obstacle crossing.
CN202310544354.1A 2023-05-16 2023-05-16 Multi-sensor fusion-based pipe robot map building and navigation method Pending CN116858219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310544354.1A CN116858219A (en) 2023-05-16 2023-05-16 Multi-sensor fusion-based pipe robot map building and navigation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310544354.1A CN116858219A (en) 2023-05-16 2023-05-16 Multi-sensor fusion-based pipe robot map building and navigation method

Publications (1)

Publication Number Publication Date
CN116858219A true CN116858219A (en) 2023-10-10

Family

ID=88234704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310544354.1A Pending CN116858219A (en) 2023-05-16 2023-05-16 Multi-sensor fusion-based pipe robot map building and navigation method

Country Status (1)

Country Link
CN (1) CN116858219A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117433592A (en) * 2023-12-21 2024-01-23 阿塔米智能装备(北京)有限公司 Petroleum pipeline robot
CN117433592B (en) * 2023-12-21 2024-02-20 阿塔米智能装备(北京)有限公司 Petroleum pipeline robot


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination