Disclosure of Invention
The invention mainly solves the problem of poor reliability of robot positioning methods in the prior art by providing a multi-sensor fused robot positioning method, realizing real-time robot positioning with low cost, high reliability, and normal operation both indoors and outdoors.
The technical problem of the invention is mainly solved by the following technical scheme: a multi-sensor fused robot positioning method comprises the following steps:
s1: establishing an offline two-dimensional grid map through data information of an inertia measurement unit, an odometer and a single-line laser radar;
s2: establishing a transformation relation between an offline two-dimensional grid map coordinate system and a global coordinate system;
s3: setting initialization state information of the robot, and acquiring output information of a single-line laser radar, an optical flow sensor and a differential GPS;
s4: calculating to obtain the speed information and the attitude information of the motion of the robot according to the output information of the optical flow sensor;
s5: matching the data of the single-line laser radar with an offline two-dimensional grid map to obtain the current position information and the current attitude information of the robot;
s6: converting the robot position information in the global coordinate system obtained by the differential GPS into the offline two-dimensional grid map coordinate system through the transformation relation calibrated in step S2;
s7: predicting the position information, attitude information and speed information of the current robot by adopting an omnidirectional motion model;
s8: based on the predicted robot state information of step S7, adopting extended Kalman filtering to fuse the observation values obtained by the positioning sensors, obtaining the optimal estimate of the current robot state information, and taking this estimate as the final current positioning information of the robot.
Preferably, in step S4, the specific method for obtaining the speed information and the attitude information of the robot motion is as follows:
s41: acquiring frame data of the image at different moments by using an optical flow sensor vertically installed downwards to obtain the moving speed of the pixel;
s42: the moving speed of the pixels is converted into speed information and attitude information of the motion of the robot.
Preferably, the specific implementation method of step S42 is:
based on the transformation relation between the offline two-dimensional grid map coordinate system and the optical flow sensor coordinate system and the transformation relation between the pixel coordinate system and the optical flow sensor coordinate system, obtaining:

[Δx_m, Δy_m]^T = T_cm^(−1) · [z·Δu/f_x, z·Δv/f_y]^T

wherein T_cm is the transformation matrix between the offline two-dimensional grid map coordinate system and the optical flow sensor coordinate system; Δx_m and Δy_m are the variations of the robot in the offline two-dimensional grid map coordinate system; f_x and f_y are the values obtained by dividing the focal length f of the camera by dx and dy respectively, dx and dy being the actual physical size of a pixel on the photosensitive chip; Δu and Δv are the variations in the pixel coordinate system; (u_0, v_0) is the origin of the pixel coordinate system; z is the depth of the observed plane in the optical flow sensor coordinate system (the mounting height of the downward-facing sensor). Dividing the variation in the offline two-dimensional grid map coordinate system by the corresponding elapsed time gives the speed information of the robot motion.
Preferably, in step S5, the positioning matching of the single-line laser radar data against the offline two-dimensional grid map is implemented by an adaptive Monte Carlo localization method.
A multi-sensor fused robot positioning system, comprising: a map building module and a real-time positioning module. The map building module generates an offline two-dimensional grid map from the inertia measurement data, the mileage data and the two-dimensional point cloud data, adopting closed-loop constraints to correct the pose and map information during data acquisition for the offline two-dimensional grid map. The real-time positioning module is used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS.
Preferably, the real-time positioning module includes: an initialization positioning unit, used for acquiring initialization pose information after the robot is started; a data receiving and preprocessing unit, used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS, processing the data obtained by the optical flow sensor to calculate the speed information and attitude information of the robot motion, and converting the robot position information in the global coordinate system obtained by the differential GPS into the map coordinate system; a robot pose predicted value calculation unit, used for predicting the position information, attitude information and speed information of the current robot by adopting the omnidirectional motion model; a robot pose observation value calculation unit, used for matching the data of the single-line laser radar with the offline two-dimensional grid map in real time to obtain the current position information and attitude information of the robot; and an extended Kalman filtering unit, used for fusing the output information of the above sensors by extended Kalman filtering based on the predicted robot state information, obtaining an estimate of the current robot state information that serves as the final current positioning information of the robot.
A multi-sensor fused robot positioning device, comprising: an inertia measurement unit for acquiring inertia measurement data of the robot; an odometer for acquiring the mileage data of the robot; a single-line laser radar for acquiring two-dimensional point cloud data of the environment around the robot platform; an optical flow sensor for calculating the optical flow change of the pixels to obtain displacement change data of the robot in two directions on a plane; and a differential GPS for acquiring the position of the robot in the global coordinate system.
The invention has the following beneficial effects. By arranging positioning sensors comprising an inertia measurement unit, an odometer, a single-line laser radar, an optical flow sensor and a differential GPS, and performing comprehensive data acquisition on the robot, the method and system achieve low cost while maintaining positioning accuracy, including low hardware cost and low storage and computation cost; high reliability, still providing stable and accurate positioning output when a single sensor fails; and a wide application range, operating normally both indoors and outdoors.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Embodiment: a multi-sensor fused robot positioning method, as shown in FIG. 1, comprises the following steps:
s1: establishing an off-line two-dimensional grid map through data information of an inertia measurement unit, a speedometer and a single-line laser radar; the specific implementation is as follows: the method comprises the steps of obtaining inertia measurement data of the robot through an inertia measurement unit, obtaining mileage data of the robot through a milemeter, obtaining two-dimensional point cloud data of the surrounding environment of a robot platform through a single-line laser radar, and generating an off-line two-dimensional grid map. In the process of collecting the map, a closed-loop data collection mode is adopted, and the closed-loop data collection mode is used for correcting the pose and the map information by adopting closed-loop constraint in the process of off-line two-dimensional grid map; the generated off-line two-dimensional grid map comprises map post-processing operations such as grid processing, spatial domain filtering, noise point removal and the like.
S2: establishing a transformation relation between an offline two-dimensional grid map coordinate system and a global coordinate system; the transformation matrix of the two coordinate systems is obtained by calibration
(m denotes an off-line two-dimensional grid map coordinate system, w denotes a global coordinate system).
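One way to obtain such a calibration, sketched here as an illustration (the patent only states that T_mw comes from calibration), is to collect a few anchor points whose coordinates are known in both frames and solve the planar rigid transform in closed form. The point pairs and the Umeyama-style 2-D solution below are assumptions, not the patent's procedure.

```python
import math

def calibrate_transform(pts_w, pts_m):
    """Return (theta, tx, ty) mapping global-frame (w) points into the map frame (m)."""
    n = len(pts_w)
    cwx = sum(p[0] for p in pts_w) / n   # centroid of global-frame points
    cwy = sum(p[1] for p in pts_w) / n
    cmx = sum(p[0] for p in pts_m) / n   # centroid of map-frame points
    cmy = sum(p[1] for p in pts_m) / n
    # cross-covariance terms of the centred point sets
    sxx = sxy = syx = syy = 0.0
    for (wx, wy), (mx, my) in zip(pts_w, pts_m):
        ax, ay = wx - cwx, wy - cwy
        bx, by = mx - cmx, my - cmy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)   # least-squares rotation
    tx = cmx - (cwx * math.cos(theta) - cwy * math.sin(theta))
    ty = cmy - (cwx * math.sin(theta) + cwy * math.cos(theta))
    return theta, tx, ty

def to_map(p, theta, tx, ty):
    """Apply the calibrated transform to one global-frame point."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta) + tx,
            x * math.sin(theta) + y * math.cos(theta) + ty)
```

With at least two non-coincident anchor pairs the rotation and translation are fully determined; step S6 then applies `to_map` to every differential GPS fix.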
S3: setting initialization state information of the robot, and acquiring output information of a single-line laser radar, an optical flow sensor and a differential GPS; the initialized state information of the robot can be directly obtained when the robot is started, and the output frequencies of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS are respectively 100hz, 10hz, 60hz and 1hz.
S4: calculating to obtain the speed information and the attitude information of the motion of the robot according to the output information of the optical flow sensor; the specific method for acquiring the current position information and the attitude information of the robot comprises the following steps:
s41: acquiring frame data of the image at different moments by using an optical flow sensor vertically installed downwards to obtain the moving speed of the pixel;
s42: converting the moving speed of the pixels into speed information and posture information of the motion of the robot;
the specific implementation method of step S42 is:
the transformation formula from the pixel coordinate system to the optical flow sensor coordinate system is as follows:
where u and v are coordinates in a pixel coordinate system,
and &>
Is the value obtained by dividing the focal length f of the camera by dx and dy, which are the actual physical size of the pixel on the photosensitive chip (+)>
,/>
,/>
) Coordinates in the optical flow sensor coordinate system.
Converting equation (1) to obtain:
the differentiation operation on equation (2) is:
setting a transformation matrix from a map coordinate system to an optical flow sensor coordinate system as
And then, the variation of the robot in the map coordinate system is as follows:
based on the formula (4), the variation of the position posture of the optical flow sensor in the map coordinate system can be obtained,
and &>
Is the variation of the robot under an offline two-dimensional grid map coordinate system>
、/>
Is the change amount under the pixel coordinate system>
、/>
The variation is divided by the corresponding variation time to obtain the motion speed information as the origin in the pixel coordinate system.
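The pixel-to-map conversion of equations (1)–(4) can be sketched as a single function. The focal length `f`, pixel pitch `dx`/`dy`, mounting height `z` and the sensor yaw in the map frame are illustrative assumed values, not parameters given in the patent, and the planar transform is reduced to a pure yaw rotation for brevity.

```python
import math

def flow_to_map_velocity(du, dv, dt, f=4e-3, dx=6e-6, dy=6e-6,
                         z=0.05, yaw=0.0):
    """Convert a pixel displacement (du, dv) over dt into a map-frame velocity."""
    fx, fy = f / dx, f / dy          # focal length divided by pixel physical size
    # equation (3): metric displacement in the sensor frame
    dxc = z * du / fx
    dyc = z * dv / fy
    # equation (4): rotate the displacement into the map frame (yaw only)
    dxm = dxc * math.cos(yaw) - dyc * math.sin(yaw)
    dym = dxc * math.sin(yaw) + dyc * math.cos(yaw)
    # divide by the elapsed time to obtain the velocity
    return dxm / dt, dym / dt
```

For example, a 100-pixel shift in 0.1 s with these assumed optics corresponds to a forward velocity of 0.075 m/s, which step S8 then fuses as one observation value.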
S5: matching the data of the single-line laser radar with an offline two-dimensional grid map to obtain the current position information and the current attitude information of the robot; and the robot positioning matching based on the single-line laser radar data under the offline two-dimensional grid map is realized by adopting a self-adaptive Monte Carlo positioning method, or the positioning matching of the robot is carried out by adopting a positioning method of KLD sampling.
S6: and converting the position information of the robot under the global coordinate system obtained by the differential GPS in real time into an off-line two-dimensional grid map coordinate system through the conversion relation calibrated in the step S2.
S7: and predicting the predicted values of the position information, the attitude information and the speed information of the current robot by adopting the omnidirectional motion model.
S8: based on the predicted value of the robot state information in the step S7, obtaining the optimal estimation value of the current state information of the robot by adopting an observation value obtained by fusing an extended Kalman filter with a positioning sensor, and taking the estimation value as the final current positioning information of the robot; the observation values here include measurement values directly or indirectly from an inertial measurement unit, a odometer, a single line lidar, an optical flow sensor, a differential GPS.
As shown in FIG. 2, the present invention further provides a multi-sensor fused robot positioning system, which includes a map building module 1 and a real-time positioning module 2.
The map building module acquires inertia measurement data of the robot through the inertia measurement unit, mileage data through the odometer, and two-dimensional point cloud data of the environment around the robot platform through the single-line laser radar, and generates the offline two-dimensional grid map. During map data acquisition, a closed-loop acquisition route is adopted, so that closed-loop constraints correct the pose and the map information while building the offline two-dimensional grid map.
The real-time positioning module acquires the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS. It processes the data obtained by the optical flow sensor in real time to calculate the speed information and attitude information of the robot motion; matches the data of the single-line laser radar with the offline two-dimensional grid map in real time to obtain the current position information and attitude information of the robot; converts, through the calibrated transformation relation, the robot position information in the global coordinate system obtained by the differential GPS into the map coordinate system in real time; predicts the position information, attitude information and speed information of the current robot with the omnidirectional motion model; and, based on the predicted robot state information, fuses the observation values obtained by the sensors with extended Kalman filtering to obtain the optimal estimate of the current robot state information, which is taken as the final current positioning information of the robot.
The positioning module comprises an initialization positioning unit 3, a data receiving and preprocessing unit 4, a robot pose predicted value calculating unit 5, a robot pose observed value calculating unit 6 and an extended Kalman filtering unit 7.
The initialization positioning unit is used for acquiring initialization pose information after the robot is started.
The data receiving and preprocessing unit is used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS, processing the data obtained by the optical flow sensor in real time to calculate the speed information and attitude information of the robot motion, and converting the robot position information in the global coordinate system obtained by the differential GPS into the map coordinate system in real time.
The robot pose predicted value calculation unit is used for predicting the position information, attitude information and speed information of the current robot by adopting the omnidirectional motion model.
And the robot pose observation value calculation unit is used for matching the data of the single-line laser radar with the off-line two-dimensional grid map in real time to obtain the current position information and the current attitude information of the robot.
And the extended Kalman filtering unit is used for fusing the observed values obtained by the sensors by adopting extended Kalman filtering based on the predicted value of the state information of the robot to obtain the optimal estimation value of the current state information of the robot, and taking the estimation value as the final current positioning information of the robot.
The invention also provides a multi-sensor fused robot positioning device, which comprises an inertia measurement unit, an odometer, a single-line laser radar, an optical flow sensor and a differential GPS. The inertia measurement unit is used for acquiring inertia measurement data of the robot; the odometer is used for acquiring the mileage data of the robot; the single-line laser radar is used for acquiring two-dimensional point cloud data of the environment around the robot platform; the optical flow sensor is used for calculating the optical flow change of the pixels to obtain displacement change data of the robot in two directions on a plane; and the differential GPS acquires the position of the robot in the global coordinate system.
The inertia measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS are all connected with the processor, the processor comprises a map construction module and a real-time positioning module of the robot positioning system, and the processor generates an off-line two-dimensional grid map based on data of the inertia measurement unit, the odometer and the single-line laser radar. And generating pose information of the current robot according to inertial measurement unit data, odometer data, a result of matching the single-line laser radar real-time two-dimensional point cloud data with the offline two-dimensional grid map, data of the optical flow sensor and data of the differential GPS device.
The system is also provided with a memory for storing an off-line two-dimensional grid map, the measurement data of each sensor, a robot state predicted value, a robot state observed value and the current positioning information of the robot.
By arranging the optical flow sensor to collect the speed information and the attitude information of the robot and fusing it with the other positioning sensors, positioning of the robot can still be achieved through the optical flow sensor when another positioning sensor fails. At the same time, the optical flow sensor adapts well to environmental changes such as light and seasons without the accuracy degradation caused by lighting factors, and the hardware, software and storage costs of robot positioning can be reduced while ensuring accuracy.
The above-described embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention in any way, and other variations and modifications may be made without departing from the spirit of the invention as set forth in the claims.