CN117760407A - Multi-positioning sensor fused robot positioning and navigation system and method - Google Patents

Multi-positioning sensor fused robot positioning and navigation system and method

Info

Publication number
CN117760407A
Authority
CN
China
Prior art keywords
positioning
robot
module
data
navigating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311770428.XA
Other languages
Chinese (zh)
Inventor
单长旺
周华良
苏战涛
卢璐
鲍科著
苏廷
刘毅
代莹
黄进
王高明
吕浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nari Technology Co Ltd
NARI Nanjing Control System Co Ltd
Original Assignee
Nari Technology Co Ltd
NARI Nanjing Control System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nari Technology Co Ltd, NARI Nanjing Control System Co Ltd filed Critical Nari Technology Co Ltd
Priority to CN202311770428.XA
Publication of CN117760407A
Legal status: Pending

Abstract

The invention discloses a multi-positioning-sensor fused robot positioning and navigation system and method. The system comprises a data processing module, a positioning module, a motion module, an identification module, a perception module, a wireless communication module, and a signal control module; the data processing module is connected with the positioning module, the identification module, the perception module, the wireless communication module, and the signal control module, and the positioning module is connected with the motion module; the positioning module comprises a Beidou RTK positioning module, a laser navigation positioning module, an inertial navigation positioning module, and a driver positioning module. The positioning and navigation system fuses multiple positioning sensors: an extended Kalman filtering algorithm fuses the multiple positioning data streams, combines the positioning strengths of each sensor, and outputs the fused positioning information, effectively improving the stability and accuracy of robot navigation. The robot can navigate and position itself in large scenes with improved positioning accuracy and precision and greater stability, realizing automatic and reliable inspection by the substation robot.

Description

Multi-positioning sensor fused robot positioning and navigation system and method
Technical Field
The invention relates to a robot positioning and navigation system and method, and in particular to a multi-positioning-sensor fused robot positioning and navigation system and method.
Background
The substation is an important component of the power system. Substation inspection is a basic link in the operation, maintenance, and repair of the power grid, an important part of the substation's daily operation and maintenance work, and plays a vital role in the safe and stable operation of the substation. Substation inspection means patrolling the equipment in the substation to grasp its running condition and to discover equipment defects and potential safety hazards in time so that they can be promptly eliminated, preventing electrical safety accidents in the substation; it is an effective measure for guaranteeing the safe and reliable running of substation equipment.
The substation inspection robot realizes intelligent inspection of substation equipment and real-time recording and intelligent analysis of inspection data based on navigation and positioning, video identification, infrared temperature measurement, and other technologies. Applied to indoor and outdoor inspection of the substation, the intelligent inspection robot can travel freely on the substation's various roads in a trackless navigation mode. Combined with infrared temperature measurement, image recognition, noise acquisition, and related technologies, it performs routine detection of the primary equipment in the substation and transmits the acquired images, video, temperature, humidity, air pressure, and other data to a remote backend in real time, realizing real-time remote monitoring. Compared with manual inspection, robot inspection follows a more standardized inspection regime; the collected data are more reliable and accurate and the feedback more timely; the robot can detect latent equipment hazards and can work normally in various kinds of bad weather. The robot saves manpower, effectively improves the inspection efficiency of the substation, and also protects the personal safety of practitioners.
Traditional substation inspection robots mostly adopt a single positioning mode, such as magnetic navigation, laser navigation, satellite positioning, UWB positioning, or WiFi positioning. When the substation scene is large and the environment complex, with many types of highly similar equipment, a single positioning mode is prone to yaw, lost navigation, and similar failures from which the robot cannot recover by itself, so it cannot meet the technical requirements of substation inspection robot operation.
Disclosure of Invention
The invention aims to provide a multi-positioning-sensor fused robot positioning and navigation system and method that realize real-time, stable, automatic positioning and navigation of a robot in a substation by fusing four kinds of positioning data: Beidou RTK positioning, 3D laser navigation positioning, inertial navigation positioning, and driver positioning.
The technical scheme is as follows: the system of the invention comprises a data processing module, a positioning module, a motion module, an identification module, a perception module, a wireless communication module, and a signal control module; the data processing module is connected with the positioning module, the identification module, the perception module, the wireless communication module, and the signal control module, and the positioning module is connected with the motion module; the positioning module comprises a Beidou RTK positioning module, a laser navigation positioning module, an inertial navigation positioning module, and a driver positioning module.
A multi-positioning-sensor fused robot positioning and navigation method comprises the following steps: building a multi-sensor fused substation inspection robot platform; calibrating the parameters of the three-dimensional lidar, Beidou RTK positioning terminal, inertial navigation module, and driver on the robot carrier; constructing a three-dimensional point cloud map of the operating environment and loading it into the navigation system; performing positioning and navigation control of the robot; and performing automatic control of robot positioning and navigation.
Calibrating the parameters of the three-dimensional lidar, Beidou RTK positioning terminal, inertial navigation module, and driver on the robot carrier specifically comprises: calibrating the base coordinates of the robot body, constructing the robot's base coordinate system with the robot mounting base as reference; sequentially defining the coordinates of the Beidou RTK positioning terminal, 3D lidar, high-precision inertial navigation, and driver; and constructing the transformation relations among the coordinate systems.
Constructing the three-dimensional point cloud map of the operating environment and loading it into the navigation system specifically comprises: acquiring the robot's three-dimensional coordinates through the Beidou RTK positioning terminal; synchronizing the laser positioning data with the Beidou RTK positioning data; starting the robot's laser mapping program with the starting point as the coordinate origin and controlling the robot's motion to construct a global map of the operating environment; and generating a point cloud map from the recorded three-dimensional point cloud information.
Starting the robot's laser mapping program with the starting point as the coordinate origin and controlling the robot's motion to construct a global map of the operating environment specifically comprises: taking the Beidou RTK positioning data as pose constraints and the inertial navigation and encoder as relative constraints to obtain the robot's motion data; receiving and storing three-dimensional laser point cloud data in real time and recording the robot's motion trajectory and related information; processing each frame of point cloud data in real time to remove distortion and duplicate point clouds; and performing closed-loop detection according to the multi-sensor constraints to complete point cloud registration and splicing.
Positioning and navigation control of the robot specifically comprises: configuring multiple positioning sensors in a redundant, complementary combination; uniformly calibrating and fusing the received multi-type, multi-scale sensor data through an intelligent optimization algorithm, performing targeted error correction and fusion-weight adjustment; constructing a multi-sensor data fusion positioning algorithm based on an extended Kalman filtering framework, with the three proprioceptive sensors (IMU, odometer, and Beidou RTK) as the main body and the lidar point cloud data as the correction means; and correcting the errors of the proprioceptive sensors in real time in a tightly coupled mode while iteratively updating the fused optimal pose estimate in a loosely coupled mode.
Automatic control of robot positioning and navigation specifically comprises: loading the three-dimensional point cloud map of the substation environment into the navigation system; setting the pose data of the robot's inspection targets and recording all inspection targets and road traffic information; and, according to the loaded inspection map and inspection target data, having the robot compute its parking position from the inspection target data, automatically plan its travel path, and use the navigation algorithm to travel to the parking point and complete inspection of the target.
Performing the automatic control of robot positioning and navigation comprises the following steps:
and judging the initial position of the robot: absolute position information of the robot is obtained through the Beidou RTK positioning terminal and is matched with the map position;
through the positioning of the laser radar, the robot rotates for a circle to determine surrounding environment information, and the accurate position of the robot on the map is determined;
completing path planning on the three-dimensional point cloud map according to pose information of the inspection target point;
and accurately navigating the robot to the inspection point through a multi-sensor fusion algorithm.
The multi-sensor fusion algorithm fuses the data of each sensor using an extended Kalman filtering algorithm, combines the positioning strengths of each sensor, and outputs the fused positioning information.
The multi-sensor fusion algorithm specifically comprises:
performing a first extended-Kalman-filter fusion of the robot's wheel odometer information and IMU information;
performing a second extended-Kalman-filter fusion of the robot's odometer and IMU data using the Beidou RTK's absolute pose information;
using the positioning data fused from the Beidou RTK, odometer, and IMU, together with a Monte Carlo particle-filter positioning strategy based on the point cloud map, to optimize the importance sampling and resampling process of the MCL algorithm;
sampling the particles in Monte Carlo localization from a normal distribution in the prior map according to the fused posterior pose data and variance information, and taking the fused positioning information as the prior pose for Monte Carlo localization;
and updating the particle weights according to the fused data.
The beneficial effects are that: the multi-positioning-sensor fused positioning and navigation system of the invention uses an extended Kalman filtering algorithm to fuse four kinds of positioning data, namely Beidou RTK positioning, 3D laser navigation positioning, inertial navigation positioning, and driver positioning, combines the positioning strengths of each sensor, and outputs the fused positioning information, effectively improving the stability and accuracy of robot navigation. The invention can navigate and position in large complex or open scenes, improve positioning accuracy and precision, improve the robot's stability, and realize automatic, stable, and reliable inspection by the substation robot.
Drawings
FIG. 1 is a schematic diagram of a robot architecture according to the present invention;
FIG. 2 is a flow chart of a fusion algorithm of the present invention;
FIG. 3 is a map construction flow chart of the present invention;
fig. 4 is a motion navigation flow chart of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the multi-positioning-sensor fused robot positioning and navigation system of the invention comprises a data processing module, a positioning module, a motion module, an identification module, a perception module, a wireless communication module, a signal control module, and the like; the data processing module is connected with the positioning module, the identification module, the perception module, the wireless communication module, and the signal control module, and the positioning module is connected with the motion module. The positioning module comprises a Beidou RTK positioning module, a 3D laser navigation positioning module, an inertial navigation positioning module, and a driver positioning module.
The Beidou RTK positioning module comprises a Beidou positioning terminal, an RTK differential-data receiving unit, and a differential-data calculation unit. The Beidou positioning terminal computes positioning data by receiving positioning data from multiple Beidou satellites; the RTK differential-data receiving unit receives the differential data of the Beidou RTK reference station, including carrier-phase observations, reference-station coordinates, and other information; and from the incoming Beidou satellite data and the RTK reference-station differential data, the differential-data calculation unit computes the carrier-phase correction of the Beidou RTK positioning terminal, realizing high-precision positioning of the robot-mounted Beidou RTK.
The 3D laser navigation positioning module comprises a lidar with 16 or more lines and a data processing unit. The lidar senses the environment around the robot by emitting laser beams and converts the surroundings into three-dimensional laser point cloud data; by processing and computing on the point clouds, it realizes three-dimensional point cloud map construction of the environment and laser positioning and navigation.
The inertial navigation positioning module comprises a high-precision inertial navigation unit and a data processing unit. The high-precision inertial navigation system is a nine-axis system consisting of a three-axis gyroscope, a three-axis accelerometer, and a three-axis magnetometer. First, angular velocity is measured by the gyroscope, acceleration by the accelerometer, and geomagnetic field strength and direction by the magnetometer. Then the attitude angle is obtained by integrating the gyroscope's angular velocity and refined by fusing in the accelerometer and magnetometer measurements, yielding a more accurate attitude angle. Finally, the motion state is estimated using the gyroscope and accelerometer measurements.
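The attitude pipeline described above can be illustrated with a minimal complementary-filter sketch: the gyroscope is integrated for short-term accuracy, while the accelerometer (roll/pitch) and the tilt-compensated magnetometer (yaw) provide long-term references. The patent does not specify the fusion filter, so the blending gain `alpha`, the function names, and the angle conventions here are illustrative assumptions.

```python
import numpy as np

def attitude_step(rpy, gyro, accel, mag, dt, alpha=0.98):
    """One complementary-filter step (illustrative, not the patented filter):
    integrate the gyro, then blend in accelerometer/magnetometer references."""
    # 1) Propagate attitude by integrating the measured angular velocity.
    rpy_gyro = rpy + gyro * dt

    # 2) Roll/pitch reference from the gravity direction.
    ax, ay, az = accel
    roll_acc = np.arctan2(ay, az)
    pitch_acc = np.arctan2(-ax, np.hypot(ay, az))

    # 3) Yaw reference from the tilt-compensated magnetometer.
    mx, my, mz = mag
    cr, sr = np.cos(roll_acc), np.sin(roll_acc)
    cp, sp = np.cos(pitch_acc), np.sin(pitch_acc)
    yaw_mag = np.arctan2(-(my * cr - mz * sr),
                         mx * cp + my * sp * sr + mz * sp * cr)

    # 4) Blend: trust the gyro short-term, the references long-term.
    #    (Angle wrap-around is ignored here for brevity.)
    ref = np.array([roll_acc, pitch_acc, yaw_mag])
    return alpha * rpy_gyro + (1.0 - alpha) * ref
```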
The driver positioning module comprises a chassis driver and a data processing unit. The driver acquires the motion information (odom data) of the chassis mechanism and controls the chassis motion; the motion data of the robot chassis are obtained through the driver's control. The data processing unit records the robot's motion data over a period of time and computes the robot's position, attitude, velocity, angular velocity, and other data for use in robot navigation.
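As a rough illustration of how such a data processing unit can dead-reckon the chassis pose from the driver's odom data, here is a standard constant-velocity arc integration; the patent does not give this formula, so the planar chassis model and the names below are assumptions.

```python
import numpy as np

def integrate_odometry(pose, v, omega, dt):
    """Dead-reckon the planar chassis pose (x, y, theta) from the
    driver-reported linear velocity v and angular velocity omega."""
    x, y, theta = pose
    if abs(omega) < 1e-9:                    # straight-line segment
        x += v * dt * np.cos(theta)
        y += v * dt * np.sin(theta)
    else:                                    # circular-arc segment
        r = v / omega
        x += r * (np.sin(theta + omega * dt) - np.sin(theta))
        y -= r * (np.cos(theta + omega * dt) - np.cos(theta))
        theta += omega * dt
    return np.array([x, y, theta])
```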
The motion module comprises a steering mechanism, a supporting platform, motors, wheels, and the like, and realizes accurate motion-control response of the robot.
Data processing module: comprises edge-computing hardware, a data receiving unit, a navigation algorithm unit, and the like. Each sensor is connected to the edge-computing hardware; the positioning data of each positioning sensor are obtained through the data receiving unit and fused by the navigation algorithm unit, realizing the multi-sensor fused positioning and navigation functions.
Identification module: comprises a visible-light camera, an infrared camera, a voiceprint acquisition unit, and a recognition algorithm unit, which acquire on-site visible-light pictures, infrared pictures, and voiceprint data, realizing intelligent recognition of substation equipment.
Perception module: comprises a collision sensor, a fall sensor, a vision sensor, and a data processing unit, which sense obstacles, subsidence, and other abnormal conditions in the robot's path during motion, ensuring that the robot stops in time when it encounters an obstacle or a subsided road section and can then avoid it.
Wireless communication module: comprises a wireless communication unit, a network communication unit, and the like, realizing information interaction between the robot and the robot host.
Signal control module: comprises a lamp control unit, an on-site alarm unit, a power management unit, and the like, realizing control of the robot's lights, sound, charging, power supply, and related functions.
As shown in fig. 2, the multi-positioning-sensor fused robot positioning and navigation method comprises the following steps:
(1) Constructing the multi-sensor fused substation inspection robot platform, which provides the basic software and hardware operating environment. The robot is designed with a four-wheel omnidirectional mechanism, allowing it to move flexibly and reliably in the substation operating environment, and mainly comprises the robot body, a motion chassis, motion motors, a wheel odometer, inertial navigation, a lidar, a Beidou positioning terminal, a collision sensor, a fall sensor, a vision sensor, a data processing module, and other necessary electrical and mechanical components.
The robot's data processing module is based on a Linux operating system with ROS deployed; its algorithm processing system controls the robot and performs target detection and identification. The driver controls the motors to realize autonomous movement of the robot. The collision sensor and the vision sensor detect obstacles encountered during the robot's motion, and their types, providing data for subsequent obstacle avoidance. The fall sensor and the vision sensor detect subsided road sections encountered during the robot's motion, providing data for the robot to avoid them.
(2) Calibrating the parameters of the three-dimensional lidar, Beidou RTK positioning terminal, inertial navigation module, and driver on the robot carrier, specifically:
(21) Calibrate the base coordinates of the robot body: with the robot mounting base as reference, construct the robot's base coordinate system (x_base, y_base, z_base).
(22) Sequentially define the coordinates of the Beidou RTK positioning terminal, 3D lidar, high-precision inertial navigation, and driver. Determined from the specific mounting position of each device on the robot body, these are the Beidou RTK positioning terminal coordinates (x_bd, y_bd, z_bd), the 3D lidar coordinates (x_laser, y_laser, z_laser), the high-precision inertial navigation coordinates (x_imu, y_imu, z_imu), and the driver coordinates (x_odom, y_odom, z_odom).
(23) Construct the transformation relations among the coordinate systems. The positioning data of the Beidou RTK positioning terminal are referenced to the WGS-84 (World Geodetic System 1984) coordinate system; to realize absolute positioning of the robot, the transformation relation corresponding to each sensor is constructed with the RTK positioning terminal as reference, calibrating the robot's sensor coordinates (illustrated by the sketch below).
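A minimal sketch of how such transformation relations can be represented and chained with homogeneous matrices follows; the mounting offsets are made-up example values, not calibration results from the patent.

```python
import numpy as np

def make_transform(rpy, t):
    """Build a 4x4 homogeneous transform from roll/pitch/yaw and a translation."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T

# Illustrative mounting offsets in the base frame (assumed values).
T_base_laser = make_transform((0, 0, 0), (0.20, 0.00, 0.45))  # lidar
T_base_rtk   = make_transform((0, 0, 0), (0.00, 0.00, 0.60))  # RTK antenna

# Express a lidar point in the RTK reference frame via the base frame.
p_laser = np.array([1.0, 2.0, 0.5, 1.0])                      # homogeneous point
p_rtk = np.linalg.inv(T_base_rtk) @ T_base_laser @ p_laser
```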
(3) Constructing the three-dimensional point cloud map of the operating environment and loading it into the navigation system; as shown in fig. 3, this specifically comprises:
(31) Acquire the robot's three-dimensional coordinates in the WGS-84 (World Geodetic System 1984) coordinate system through the Beidou RTK positioning terminal;
(32) Synchronize the laser positioning data with the Beidou RTK positioning data, realizing the conversion from the laser positioning coordinate system to the WGS-84 coordinate system (an illustrative conversion sketch follows);
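Synchronizing laser coordinates with WGS-84 fixes typically goes through a local tangent (ENU) frame anchored at the mapping start point. The sketch below uses the standard WGS-84 ellipsoid constants; the helper names are assumptions, and the patent itself does not spell out this conversion.

```python
import numpy as np

A = 6378137.0                 # WGS-84 semi-major axis (m)
E2 = 6.69437999014e-3         # WGS-84 first eccentricity squared

def lla_to_ecef(lat, lon, h):
    """Geodetic latitude/longitude (deg) and height (m) to ECEF coordinates."""
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1 - E2) + h) * np.sin(lat)])

def lla_to_enu(lat, lon, h, ref):
    """Convert a WGS-84 fix to local ENU coordinates about the anchor
    `ref` = (lat0, lon0, h0), e.g. the mapping start point."""
    lat0, lon0, h0 = ref
    d = lla_to_ecef(lat, lon, h) - lla_to_ecef(lat0, lon0, h0)
    la, lo = np.radians(lat0), np.radians(lon0)
    R = np.array([
        [-np.sin(lo),               np.cos(lo),              0.0],
        [-np.sin(la) * np.cos(lo), -np.sin(la) * np.sin(lo), np.cos(la)],
        [ np.cos(la) * np.cos(lo),  np.cos(la) * np.sin(lo), np.sin(la)],
    ])
    return R @ d
```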
(33) Start the robot's laser SLAM mapping program with the starting point as the coordinate origin and control the robot's motion to construct a global map of the operating environment, specifically:
(331) Take the Beidou RTK positioning data as pose constraints and the inertial navigation and encoder as relative constraints to obtain the robot's motion data;
(332) Receive and store the three-dimensional laser point cloud data in real time and record the robot's motion trajectory and related information;
(333) Process each frame of point cloud data in real time to remove distortion and duplicate point clouds (see the de-duplication sketch after this list);
(334) Perform closed-loop detection according to the multi-sensor constraints to complete point cloud registration and splicing.
(34) Generate the point cloud map from the recorded three-dimensional point cloud information.
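One common way to remove the duplicate point clouds mentioned in step (333) when the map is assembled is a voxel-grid filter that collapses co-located returns to their centroid. The patent does not name its filter; this numpy sketch with an assumed 5 cm voxel size is purely illustrative.

```python
import numpy as np

def voxel_dedup(points, voxel=0.05):
    """Collapse points (N x 3 array) falling in the same voxel to their
    centroid, removing duplicated returns accumulated across frames."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv).astype(float)
    out = np.empty((counts.size, 3))
    for dim in range(3):                       # centroid per voxel, per axis
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out
```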
(4) Positioning and navigation control of the robot: by analyzing the positioning modes available in a complex environment and the corresponding sensor working principles, multiple positioning sensors are configured in a redundant, complementary combination. The received multi-type, multi-scale sensor data are uniformly calibrated and fused through an intelligent optimization algorithm, with targeted error correction and fusion-weight adjustment. Based on an extended Kalman filtering framework, a multi-sensor data fusion positioning algorithm for complex environments is constructed, with the three proprioceptive sensors (IMU, odometer, and Beidou RTK) as the main body and the lidar point cloud data as the correction means. The errors of the proprioceptive sensors are corrected in real time in a tightly coupled mode, while the fused optimal pose estimate is iteratively updated in a loosely coupled mode.
First, define the system state vector:

x_k = (x, y, z, pitch, roll, yaw)^T    (1-1)

Constrain the belief function of the system state to a Gaussian distribution:

bel(x_k) = N(x_k; μ_k, Σ_k)    (1-2)

where μ_k is the mean and Σ_k is the covariance.

Assume the noise variables ω_k and n_k are also Gaussian:

ω_k ~ N(0, Q_k)    (1-3)

n_k ~ N(0, R_k)    (1-4)

Consider the extended Kalman filter framework, whose prediction step is

x̂_k = F_k x_{k-1} + B_k u_k + ω_k,    P̂_k = F_k P_{k-1} F_k^T + Q_k    (1-5)

where F_k is defined from the Jacobian of the robot's drive motion model and B_k is the control matrix acting on the control input u_k.

Define the sensor measurement vector z_k, its covariance matrix R_k, and the observation matrix H_k, and compute the Kalman gain:

K_k = P̂_k H_k^T (H_k P̂_k H_k^T + R_k)^{-1}    (1-6)

Predict and update the state:

x_k = x̂_k + K_k (z_k − H_k x̂_k),    P_k = (I − K_k H_k) P̂_k    (1-7)
forming a collection from M particlesRepresenting confidence bel (x t ). And sampling the pose of the robot by using an encoder, inertial navigation and Beidou fusion positioning data model, and determining the weight of each particle by using a range finder measurement model.
The motion model fused from the encoder, inertial navigation, and Beidou data estimates each particle's pose at the next instant from the pose data. The model decomposes the relative motion between poses at adjacent instants into three elementary motions: an initial rotation δ_rot1, a translation δ_trans, and a second rotation δ_rot2, computed from the odometer readings as:

δ_rot1 = atan2(ȳ' − ȳ, x̄' − x̄) − θ̄
δ_trans = √((x̄' − x̄)² + (ȳ' − ȳ)²)
δ_rot2 = θ̄' − θ̄ − δ_rot1

where (x̄', ȳ', θ̄')^T and (x̄, ȳ, θ̄)^T are the odometry poses at the current and the previous instant respectively. With δ̂_rot1, δ̂_trans, δ̂_rot2 denoting the initial rotation, the translation, and the second rotation after noise is added, each particle pose (x, y, θ)^T is propagated to the current-instant estimate (x', y', θ')^T:

x' = x + δ̂_trans cos(θ + δ̂_rot1)
y' = y + δ̂_trans sin(θ + δ̂_rot1)
θ' = θ + δ̂_rot1 + δ̂_rot2
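The noisy rot1/trans/rot2 decomposition is the standard odometry motion model; a direct sampling implementation is sketched below. The noise coefficients `alphas` are tuning parameters assumed for illustration.

```python
import numpy as np

def sample_motion_model(particle, odom_prev, odom_now, alphas, rng):
    """Propagate one particle (x, y, theta) with the noisy
    rotation-translation-rotation decomposition of the odometry."""
    x, y, th = odom_prev
    xp, yp, thp = odom_now
    rot1 = np.arctan2(yp - y, xp - x) - th
    trans = np.hypot(xp - x, yp - y)
    rot2 = thp - th - rot1

    a1, a2, a3, a4 = alphas                  # noise coefficients (assumed)
    rot1_h = rot1 - rng.normal(0.0, np.sqrt(a1 * rot1**2 + a2 * trans**2))
    trans_h = trans - rng.normal(0.0, np.sqrt(a3 * trans**2 + a4 * (rot1**2 + rot2**2)))
    rot2_h = rot2 - rng.normal(0.0, np.sqrt(a1 * rot2**2 + a2 * trans**2))

    px, py, pth = particle
    return np.array([px + trans_h * np.cos(pth + rot1_h),
                     py + trans_h * np.sin(pth + rot1_h),
                     pth + rot1_h + rot2_h])
```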
The range-finder measurement model computes, for each particle in the particle swarm, the probability of obtaining the measurement z_t, and updates the particle's weight according to that probability. The probability distribution function is:

p(z_t | x_t, m) = z_hit · p_hit(z_t | x_t, m) + z_short · p_short(z_t | x_t, m) + z_max · p_max(z_t | x_t, m) + z_rand · p_rand(z_t | x_t, m)

where z_hit, z_short, z_max, z_rand are the four mixing parameters of the model and p_hit, p_short, p_max, p_rand are the probability densities of the corresponding noise types (measurement noise, unexpected objects, max-range failures, and random clutter).
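Per beam, the four-component mixture can be evaluated as below; `z_star` is the expected range obtained by ray-casting the particle pose into the map, and the parameter dictionary layout is an assumption (the truncated-exponential normalizer of p_short is omitted for brevity).

```python
import numpy as np

def beam_probability(z, z_star, params):
    """Mixture likelihood of one range reading z given the expected
    range z_star: z_hit·p_hit + z_short·p_short + z_max·p_max + z_rand·p_rand."""
    z_hit, z_short, z_max_w, z_rand = params["mix"]   # mixing weights, sum to 1
    sigma, lam, zmax = params["sigma"], params["lambda"], params["zmax"]

    p_hit = np.exp(-0.5 * ((z - z_star) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p_short = lam * np.exp(-lam * z) if z <= z_star else 0.0
    p_max = 1.0 if z >= zmax else 0.0                 # max-range point mass
    p_rand = 1.0 / zmax if z < zmax else 0.0          # uniform clutter

    return z_hit * p_hit + z_short * p_short + z_max_w * p_max + z_rand * p_rand
```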
Resampling is then performed according to the particle weights; the number of particles is unchanged after sampling, and the particle weights are reset to a uniform average value. The particles are then clustered, the weight of each cluster is computed, the cluster with the largest weight is selected, and the mean pose of that cluster is taken as the final robot pose output by the positioning algorithm.
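The weight-preserving resampling step is commonly realized with low-variance (systematic) resampling, as in this sketch; the cluster-and-average pose extraction described above would then run on the returned particle set.

```python
import numpy as np

def low_variance_resample(particles, weights, rng):
    """Resample M particles with one random offset; the particle count
    is preserved and all weights are reset to the uniform average 1/M."""
    m = len(particles)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                   # normalize weights
    positions = (rng.random() + np.arange(m)) / m  # M evenly spaced probes
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(m, 1.0 / m)
```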
(5) Automatic control of robot positioning and navigation is performed, realizing the robot's autonomous motion navigation function: load the three-dimensional point cloud map of the substation environment into the navigation system; set the pose data of the robot's inspection targets, recording all inspection targets and road traffic information; according to the loaded inspection map and inspection target data, the robot computes its parking position from the inspection target data, automatically plans its travel path, and uses the navigation algorithm to travel to the parking point and complete inspection of the target. This specifically comprises:
(5.1) Determine the initial position of the robot: obtain the robot's absolute position information through the Beidou RTK positioning terminal and match it against the map;
(5.2) Determine the surrounding environment information through lidar positioning while the robot rotates one full circle, fixing the robot's accurate position on the map;
(5.3) Complete path planning on the three-dimensional point cloud map according to the pose information of the inspection target point, e.g. with a grid planner as sketched below;
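The patent does not name a specific planner for step (5.3); as one illustration, a minimal A* search over a 2-D occupancy grid projected from the point cloud map could look like this (grid layout, connectivity, and costs are all assumptions).

```python
import heapq

def astar(grid, start, goal):
    """A* on an occupancy grid (0 = free, 1 = occupied), 4-connected,
    unit step cost, Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came, seen = {}, set()
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in seen:
            continue
        seen.add(cur)
        came[cur] = parent
        if cur == goal:                      # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and nxt not in seen):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                              # no path found
```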
(5.4) Accurately navigate the robot to the inspection point through the multi-sensor fusion algorithm. The multi-sensor fused positioning and navigation algorithm uses an extended Kalman filtering algorithm to fuse the data of each sensor, combines the positioning strengths of each sensor, and outputs the fused positioning information. Specifically:
(1) A first extended-Kalman-filter fusion is performed on the robot's wheel odometer information and IMU information. The inertial navigation and the encoder offer low noise and high short-term measurement accuracy, and the high-precision angle information of the inertial navigation compensates for the odometer's tendency to drift in angle during operation. Exploiting their different error characteristics improves the accuracy of the robot's pose and of its predicted value.
(2) A second extended-Kalman-filter fusion is performed on the robot's odometer and IMU data using the Beidou RTK's absolute pose information. The data of the Beidou positioning terminal are absolute coordinates referenced to the WGS-84 (World Geodetic System 1984) coordinate system; the driver's odometry is relative coordinates referenced to the odom0 (initial odometry) coordinate frame; the IMU data are relative coordinates referenced to the imu0 (initial IMU) coordinate frame. The Beidou positioning terminal carried by the inspection robot can raise the positioning precision to the centimeter or even millimeter level, meeting the robot's positioning-precision requirements during inspection. The obtained positioning data unify the coordinate system of the odometer and IMU data and let them correct each other, ensuring that the position information along the robot's whole fused trajectory is expressed in an absolute coordinate system and eliminating accumulated error. A sketch of this two-stage cascade follows.
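The two-stage cascade can be sketched by reusing the `EKFFusion` class above: a local filter fuses odometry and IMU, and a global filter anchors that estimate with RTK fixes expressed in the map frame (see `lla_to_enu` earlier). The callback names, rates, and the loose-coupling hand-off below are assumptions for illustration.

```python
import numpy as np

ekf_local = EKFFusion(x0=np.zeros(6), P0=np.eye(6) * 0.1)    # odom + IMU
ekf_global = EKFFusion(x0=np.zeros(6), P0=np.eye(6) * 0.1)   # + Beidou RTK

def on_odom_imu(f, F, Q, u, z_imu, H_imu, R_imu):
    # First fusion: odometry drives the prediction, IMU corrects heading.
    ekf_local.predict(f, F, Q, u)
    local_pose = ekf_local.update(z_imu, H_imu, R_imu)
    # Loosely coupled hand-off: the locally fused pose becomes the
    # second stage's prediction.
    ekf_global.predict(lambda x, u_: local_pose.copy(), np.eye(6), Q)
    return local_pose

def on_rtk(z_enu, H_rtk, R_rtk):
    # Second fusion: the absolute RTK position bounds accumulated drift.
    return ekf_global.update(z_enu, H_rtk, R_rtk)
```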
(3) The positioning data fused from the Beidou RTK, odometer, and IMU are combined with a Monte Carlo particle-filter positioning strategy based on the point cloud map to optimize the importance sampling and resampling process of the MCL algorithm.
(4) The particles in Monte Carlo localization can be sampled from a normal distribution in the prior map according to the fused posterior pose data and variance information; further, the fused positioning information can serve as the prior pose for Monte Carlo localization, e.g. as sketched below.
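Seeding MCL from the fused EKF posterior is then a single draw from the fused Gaussian; the planar (x, y, yaw) state below is an assumption for illustration.

```python
import numpy as np

def init_particles_from_fusion(mean_pose, cov, m, rng):
    """Draw M particles around the fused posterior (x, y, yaw) so that
    Monte Carlo localization starts from the fused prior pose."""
    samples = rng.multivariate_normal(mean_pose, cov, size=m)
    weights = np.full(m, 1.0 / m)
    return samples, weights

# Example usage (illustrative values):
# rng = np.random.default_rng(0)
# parts, w = init_particles_from_fusion(np.zeros(3), np.diag([0.1, 0.1, 0.05]), 500, rng)
```

Here `mean_pose` and `cov` would come from the second EKF fusion stage.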
(5) The particle weights are updated according to the fused data, making the resulting positioning more accurate.

Claims (10)

1. A multi-positioning-sensor fused robot positioning and navigation system, characterized by comprising a data processing module, a positioning module, a motion module, an identification module, a perception module, a wireless communication module, and a signal control module; the data processing module is connected with the positioning module, the identification module, the perception module, the wireless communication module, and the signal control module, and the positioning module is connected with the motion module; the positioning module comprises a Beidou RTK positioning module, a laser navigation positioning module, an inertial navigation positioning module, and a driver positioning module.
2. A positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system of claim 1, comprising: building a multi-sensor fused substation inspection robot platform; calibrating the parameters of the three-dimensional lidar, Beidou RTK positioning terminal, inertial navigation module, and driver on the robot carrier; constructing a three-dimensional point cloud map of the operating environment and loading it into the navigation system; performing positioning and navigation control of the robot; and performing automatic control of robot positioning and navigation.
3. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 2, wherein calibrating the parameters of the three-dimensional lidar, Beidou RTK positioning terminal, inertial navigation module, and driver on the robot carrier specifically comprises: calibrating the base coordinates of the robot body, constructing the robot's base coordinate system with the robot mounting base as reference; sequentially defining the coordinates of the Beidou RTK positioning terminal, 3D lidar, high-precision inertial navigation, and driver; and constructing the transformation relations among the coordinate systems.
4. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 3, wherein constructing the three-dimensional point cloud map of the operating environment and loading it into the navigation system specifically comprises: acquiring the robot's three-dimensional coordinates through the Beidou RTK positioning terminal; synchronizing the laser positioning data with the Beidou RTK positioning data; starting the robot's laser mapping program with the starting point as the coordinate origin and controlling the robot's motion to construct a global map of the operating environment; and generating a point cloud map from the recorded three-dimensional point cloud information.
5. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 4, wherein starting the robot's laser mapping program with the starting point as the coordinate origin and controlling the robot's motion to construct a global map of the operating environment specifically comprises: taking the Beidou RTK positioning data as pose constraints and the inertial navigation and encoder as relative constraints to obtain the robot's motion data; receiving and storing three-dimensional laser point cloud data in real time and recording the robot's motion trajectory and related information; processing each frame of point cloud data in real time to remove distortion and duplicate point clouds; and performing closed-loop detection according to the multi-sensor constraints to complete point cloud registration and splicing.
6. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 2, wherein the positioning and navigation control of the robot specifically comprises: configuring multiple positioning sensors in a redundant, complementary combination; uniformly calibrating and fusing the received multi-type, multi-scale sensor data through an intelligent optimization algorithm, performing targeted error correction and fusion-weight adjustment; constructing a multi-sensor data fusion positioning algorithm based on an extended Kalman filtering framework, with the three proprioceptive sensors (IMU, odometer, and Beidou RTK) as the main body and the lidar point cloud data as the correction means; and correcting the errors of the proprioceptive sensors in real time in a tightly coupled mode while iteratively updating the fused optimal pose estimate in a loosely coupled mode.
7. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 2, wherein the automatic control of robot positioning and navigation specifically comprises: loading the three-dimensional point cloud map of the substation environment into the navigation system; setting the pose data of the robot's inspection targets and recording all inspection targets and road traffic information; and, according to the loaded inspection map and inspection target data, having the robot compute its parking position from the inspection target data, automatically plan its travel path, and use the navigation algorithm to travel to the parking point and complete inspection of the target.
8. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 7, wherein performing the automatic control of robot positioning and navigation comprises:
judging the initial position of the robot: obtaining the robot's absolute position information through the Beidou RTK positioning terminal and matching it against the map;
determining the surrounding environment information through lidar positioning while the robot rotates one full circle, fixing the robot's accurate position on the map;
completing path planning on the three-dimensional point cloud map according to the pose information of the inspection target point;
and accurately navigating the robot to the inspection point through the multi-sensor fusion algorithm.
9. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 8, wherein the multi-sensor fusion algorithm fuses the data of each sensor using an extended Kalman filtering algorithm, combines the positioning strengths of each sensor, and outputs the fused positioning information.
10. The positioning and navigation method of the multi-positioning-sensor fused robot positioning and navigation system according to claim 9, wherein the multi-sensor fusion algorithm specifically comprises:
performing a first extended-Kalman-filter fusion of the robot's wheel odometer information and IMU information;
performing a second extended-Kalman-filter fusion of the robot's odometer and IMU data using the Beidou RTK's absolute pose information;
using the positioning data fused from the Beidou RTK, odometer, and IMU, together with a Monte Carlo particle-filter positioning strategy based on the point cloud map, to optimize the importance sampling and resampling process of the MCL algorithm;
sampling the particles in Monte Carlo localization from a normal distribution in the prior map according to the fused posterior pose data and variance information, and taking the fused positioning information as the prior pose for Monte Carlo localization;
and updating the particle weights according to the fused data.

Publications (1)

Publication Number    Publication Date
CN117760407A    2024-03-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination