CN115307646A - Multi-sensor fusion robot positioning method, system and device - Google Patents

Multi-sensor fusion robot positioning method, system and device

Info

Publication number
CN115307646A
Authority
CN
China
Prior art keywords: robot, information, coordinate system, positioning, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211219785.2A
Other languages
Chinese (zh)
Other versions
CN115307646B (en)
Inventor
余小欢
徐露露
黄泽仕
哈融厚
白云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixun Technology Co ltd
Original Assignee
Zhejiang Guangpo Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Guangpo Intelligent Technology Co ltd
Priority to CN202211219785.2A
Publication of CN115307646A
Application granted
Publication of CN115307646B
Current legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Abstract

The invention discloses a multi-sensor fusion robot positioning method, which comprises the following steps: S1: establishing an offline two-dimensional grid map from the data of an inertial measurement unit, an odometer and a single-line laser radar; S2: establishing the transformation relation between the offline two-dimensional grid map coordinate system and the global coordinate system; S3: setting the initialization state information of the robot, and acquiring the output information of the single-line laser radar, an optical flow sensor and a differential GPS; S4: calculating the speed information and attitude information of the robot motion from the output information of the optical flow sensor. By arranging positioning sensors comprising the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS and performing comprehensive data acquisition on the robot, the positioning method and system achieve low cost and high reliability while maintaining positioning accuracy, and can still provide stable and accurate positioning output when a single sensor fails.

Description

Multi-sensor fusion robot positioning method, system and device
Technical Field
The invention relates to the technical field of robots, in particular to a robot positioning method, system and device based on multi-sensor fusion.
Background
With the rapid development of artificial intelligence, sensor technology and related fields, the overall technical level of robotics continues to improve. Robots of various types are appearing in more and more fields, and the automation level of those fields is rising accordingly. Positioning is a core capability of a robot, and its performance directly determines, to a large extent, the overall performance of the robot.
Conventional robot positioning technology is mainly divided into indoor-oriented and outdoor-oriented positioning.
CN105411490B proposes a "real-time positioning method for mobile robot and mobile robot" that fuses an odometer, a gyroscope and a binocular camera. Preliminary pose information is first obtained by fusing the outputs of the odometer and the gyroscope; this preliminary pose is then used as the prediction input and combined, through a SLAM algorithm, with the scene information obtained by the binocular camera to produce optimized positioning information. The scheme can basically meet the positioning requirement in small indoor scenes, but its shortcomings are obvious: 1. the applicable scenes are limited, since a binocular SLAM algorithm has difficulty obtaining effective pose estimates in outdoor scenes, highly dynamic indoor scenes, and low-texture or repeated-texture indoor scenes; 2. running the SLAM algorithm online places a heavy computation and power burden on the robot platform, which further increases the cost of the positioning scheme; 3. sensor anomalies such as odometer slippage cannot be handled.
CN107340522A proposes a laser radar positioning method, device and system that fuses a three-dimensional laser radar, an inertial measurement unit and an odometer. The data of the three sensors are first fused to obtain a three-dimensional point cloud map of the current scene; a Kalman filter then combines the pose recursion value obtained from the inertial measurement unit and the odometer with the pose observation obtained by matching the real-time three-dimensional laser radar data against the three-dimensional point cloud map to produce the optimal pose estimate. Because the scheme fuses data from a three-dimensional laser radar, it can meet the robot positioning requirements of outdoor scenes to a certain extent, but it has the following drawbacks: 1. the cost is too high, including the cost of an expensive three-dimensional laser radar, the storage cost of a dense three-dimensional point cloud map, and the computation cost of matching real-time point cloud data against that map; 2. sensor anomalies such as odometer slippage cannot be handled.
Disclosure of Invention
The invention mainly solves the problem of poor reliability of robot positioning methods in the prior art, and provides a multi-sensor fusion robot positioning method that achieves low-cost, highly reliable real-time positioning of a robot working normally both indoors and outdoors.
The technical problem of the invention is mainly solved by the following technical scheme: a multi-sensor fused robot positioning method comprises the following steps:
S1: establishing an offline two-dimensional grid map from the data of an inertial measurement unit, an odometer and a single-line laser radar;
S2: establishing the transformation relation between the offline two-dimensional grid map coordinate system and the global coordinate system;
S3: setting the initialization state information of the robot, and acquiring the output information of the single-line laser radar, an optical flow sensor and a differential GPS;
S4: calculating the speed information and attitude information of the robot motion from the output information of the optical flow sensor;
S5: matching the single-line laser radar data against the offline two-dimensional grid map to obtain the current position information and attitude information of the robot;
S6: converting the robot position information in the global coordinate system obtained by the differential GPS into the offline two-dimensional grid map coordinate system through the transformation relation calibrated in step S2;
S7: predicting the position information, attitude information and speed information of the current robot with an omnidirectional motion model;
S8: based on the predicted robot state information of step S7, fusing the observation values obtained from the positioning sensors with an extended Kalman filter to obtain the optimal estimate of the current robot state information, which is taken as the final current positioning information of the robot.
Preferably, in step S4, the specific method for obtaining the speed information and attitude information of the robot is as follows:
S41: acquiring image frames at different moments with an optical flow sensor installed vertically downwards, to obtain the moving speed of the pixels;
S42: converting the moving speed of the pixels into the speed information and attitude information of the robot motion.
Preferably, the specific implementation method of step S42 is:
based on the transformation relation between the offline two-dimensional grid map coordinate system and the global coordinate system and the transformation relation between the pixel coordinate system and the optical flow sensor coordinate system, there is:

\[ \begin{bmatrix} \Delta x_m \\ \Delta y_m \end{bmatrix} = T_{mc} \begin{bmatrix} \Delta u \, z_c / f_x \\ \Delta v \, z_c / f_y \end{bmatrix} \]

wherein T_mc is the transformation matrix between the offline two-dimensional grid map coordinate system and the optical flow sensor coordinate system; Δx_m and Δy_m are the variations of the robot in the offline two-dimensional grid map coordinate system; f_x and f_y are the values obtained by dividing the camera focal length f by dx and dy, the actual physical sizes of a pixel on the photosensitive chip; Δu and Δv are the variations in the pixel coordinate system; u_0 and v_0 are the origin of the pixel coordinate system; and z_c is the coordinate in the optical flow sensor coordinate system. The variation in the offline two-dimensional grid map coordinate system is divided by the corresponding elapsed time to obtain the speed information of the robot motion.
Preferably, in step S5, robot positioning matching based on single-line laser radar data under the offline two-dimensional grid map is implemented with an adaptive Monte Carlo localization method.
A multi-sensor fusion robot positioning system, comprising: a map building module and a real-time positioning module; the map building module generates an offline two-dimensional grid map from the inertial measurement data, the mileage data and the two-dimensional point cloud data, and adopts closed-loop constraints to correct the pose and the map information during data acquisition for the offline two-dimensional grid map; the real-time positioning module is used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS.
Preferably, the real-time positioning module includes: an initialization positioning unit, used for acquiring initialization pose information after the robot is started; a data receiving and preprocessing unit, used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS, processing the data obtained by the optical flow sensor to calculate the speed information and attitude information of the robot motion, and converting the robot position information in the global coordinate system obtained by the differential GPS into the map coordinate system; a robot pose predicted value calculation unit, used for predicting the position information, attitude information and speed information of the current robot with an omnidirectional motion model; a robot pose observed value calculation unit, used for matching the single-line laser radar data against the offline two-dimensional grid map in real time to obtain the current position information and attitude information of the robot; and an extended Kalman filtering unit, used for fusing, based on the predicted robot state information, the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS with an extended Kalman filter to obtain the estimate of the current robot state information, which is taken as the final current positioning information of the robot.
A multi-sensor fusion robot positioning device, comprising: an inertial measurement unit, used for acquiring inertial measurement data of the robot; an odometer, used for acquiring the mileage data of the robot; a single-line laser radar, used for acquiring two-dimensional point cloud data of the environment around the robot platform; an optical flow sensor, used for calculating the optical flow change of the pixels to obtain the displacement change data of the robot in two directions on a plane; and a differential GPS, which acquires the position of the robot in the global coordinate system.
The invention has the following beneficial effects: by arranging positioning sensors comprising the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS and performing comprehensive data acquisition on the robot, the method and system achieve low cost while maintaining positioning accuracy, including low hardware cost and low storage and computation cost; high reliability, still providing stable and accurate positioning output when a single sensor fails; and a wide application range, working normally both indoors and outdoors.
Drawings
Fig. 1 is a flowchart illustrating a robot positioning method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a robot positioning system according to an embodiment of the present invention.
In the figure, the system comprises a map building module 1, a real-time positioning module 2, a data receiving and preprocessing unit 3, an initialization positioning unit 4, a robot pose predicted value calculating unit 5, a robot pose observed value calculating unit 6 and an extended Kalman filtering unit 7.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Embodiment: a multi-sensor fusion robot positioning method, as shown in fig. 1, includes the following steps:
S1: establishing an offline two-dimensional grid map from the data of an inertial measurement unit, an odometer and a single-line laser radar. The specific implementation is as follows: inertial measurement data of the robot are obtained through the inertial measurement unit, mileage data of the robot are obtained through the odometer, two-dimensional point cloud data of the environment around the robot platform are obtained through the single-line laser radar, and the offline two-dimensional grid map is generated. During map data collection, a closed-loop acquisition route is adopted so that closed-loop constraints can correct the pose and the map information while building the offline two-dimensional grid map. Generation of the offline two-dimensional grid map also includes map post-processing operations such as grid processing, spatial-domain filtering and noise point removal.
S2: establishing the transformation relation between the offline two-dimensional grid map coordinate system and the global coordinate system. The transformation matrix T_mw between the two coordinate systems is obtained by calibration (m denotes the offline two-dimensional grid map coordinate system and w denotes the global coordinate system).
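By way of illustration only, the calibration procedure itself is not spelled out above; the following minimal Python sketch estimates a 2D rigid transform between the global frame and the map frame from a few surveyed point correspondences, using a standard SVD-based (Kabsch) fit. The function name estimate_rigid_transform and the sample coordinates are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def estimate_rigid_transform(pts_global, pts_map):
    """Estimate rotation R and translation t such that pts_map ~= R @ pts_global + t.

    pts_global, pts_map: (N, 2) arrays of corresponding 2D points expressed in the
    global frame and in the offline grid-map frame.
    """
    mu_g = pts_global.mean(axis=0)
    mu_m = pts_map.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (pts_global - mu_g).T @ (pts_map - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection solution.
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_g
    return R, t

# Example with made-up correspondences: convert a global-frame position into map coordinates.
R_mw, t_mw = estimate_rigid_transform(np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0]]),
                                      np.array([[2.0, 1.0], [12.0, 1.0], [12.0, 6.0]]))
p_map = R_mw @ np.array([4.0, 2.0]) + t_mw
```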
S3: setting the initialization state information of the robot, and acquiring the output information of the single-line laser radar, the optical flow sensor and the differential GPS. The initialization state information of the robot can be obtained directly when the robot is started; the output frequencies of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS are 100 Hz, 10 Hz, 60 Hz and 1 Hz respectively.
S4: calculating the speed information and attitude information of the robot motion from the output information of the optical flow sensor. The specific method is as follows:
S41: acquiring image frames at different moments with the optical flow sensor, which is installed vertically downwards, to obtain the moving speed of the pixels;
S42: converting the moving speed of the pixels into the speed information and attitude information of the robot motion;
the specific implementation method of step S42 is:
the transformation between the optical flow sensor coordinate system and the pixel coordinate system is:

\[ u = f_x \frac{x_c}{z_c} + u_0, \qquad v = f_y \frac{y_c}{z_c} + v_0 \]  (1)

where u and v are coordinates in the pixel coordinate system, f_x and f_y are the values obtained by dividing the camera focal length f by dx and dy (the actual physical sizes of a pixel on the photosensitive chip), and (x_c, y_c, z_c) are coordinates in the optical flow sensor coordinate system.

Converting equation (1) gives:

\[ x_c = \frac{(u - u_0)\, z_c}{f_x}, \qquad y_c = \frac{(v - v_0)\, z_c}{f_y} \]  (2)

Differentiating equation (2) gives:

\[ \Delta x_c = \frac{\Delta u \, z_c}{f_x}, \qquad \Delta y_c = \frac{\Delta v \, z_c}{f_y} \]  (3)

Setting the transformation matrix between the map coordinate system and the optical flow sensor coordinate system as T_mc, the variation of the robot in the map coordinate system is:

\[ \begin{bmatrix} \Delta x_m \\ \Delta y_m \end{bmatrix} = T_{mc} \begin{bmatrix} \Delta u \, z_c / f_x \\ \Delta v \, z_c / f_y \end{bmatrix} \]  (4)

From equation (4) the variation of the position and attitude of the optical flow sensor in the map coordinate system is obtained, where Δx_m and Δy_m are the variations of the robot in the offline two-dimensional grid map coordinate system, Δu and Δv are the variations in the pixel coordinate system, and u_0 and v_0 are the origin of the pixel coordinate system. Dividing the variation by the corresponding elapsed time gives the motion speed information.
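By way of illustration only, the following minimal Python sketch applies equations (3) and (4) to convert a pixel displacement measured by the downward-facing optical flow sensor into a velocity in the offline grid-map frame. The height z_c, the focal parameters and the 2x2 rotation R_mc passed in are assumed inputs; the function name is illustrative.

```python
import numpy as np

def flow_to_map_velocity(du, dv, dt, fx, fy, z_c, R_mc):
    """Convert a pixel displacement (du, dv) over dt seconds into a map-frame velocity.

    fx, fy : focal length divided by the pixel sizes dx, dy (in pixels)
    z_c    : height of the downward-facing optical flow sensor above the ground (metres)
    R_mc   : 2x2 rotation of the map/sensor transform T_mc (translation drops out of a velocity)
    """
    # Equation (3): metric displacement in the sensor frame.
    d_sensor = np.array([du * z_c / fx, dv * z_c / fy])
    # Equation (4): rotate the displacement into the offline two-dimensional grid map frame.
    d_map = R_mc @ d_sensor
    # Divide by the elapsed time to obtain the speed information.
    return d_map / dt

# e.g. 12 px and 3 px of flow between two frames 1/60 s apart, sensor 0.15 m above the floor
v_map = flow_to_map_velocity(12, 3, 1 / 60, fx=600.0, fy=600.0, z_c=0.15, R_mc=np.eye(2))
```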
S5: matching the single-line laser radar data against the offline two-dimensional grid map to obtain the current position information and attitude information of the robot. Robot positioning matching based on single-line laser radar data under the offline two-dimensional grid map is realized with an adaptive Monte Carlo localization method, or alternatively with a KLD-sampling localization method.
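By way of illustration only, the measurement step of such a Monte Carlo method can be sketched in Python as follows: each candidate pose is weighted by how well the single-line laser scan endpoints fall on occupied cells of the offline grid map. The likelihood-field representation and the helper name scan_match_weights are assumptions made for the sketch; resampling and the adaptive (KLD) particle-count control are omitted.

```python
import numpy as np

def scan_match_weights(particles, ranges, angles, likelihood_field, resolution, origin):
    """Weight candidate poses by agreement between a laser scan and the offline grid map.

    particles        : (M, 3) array of candidate poses (x, y, theta) in map coordinates
    ranges, angles   : (N,) beam ranges (metres) and bearings (radians) in the sensor frame
    likelihood_field : 2D array giving, per map cell, the probability of an obstacle there
    resolution       : metres per cell; origin: (x, y) map coordinates of cell (row 0, col 0)
    """
    weights = np.zeros(len(particles))
    good = np.isfinite(ranges)                      # ignore max-range / no-return beams
    r, a = ranges[good], angles[good]
    for i, (x, y, theta) in enumerate(particles):
        # Project each beam endpoint into the map frame for this pose hypothesis.
        ex = x + r * np.cos(theta + a)
        ey = y + r * np.sin(theta + a)
        cx = ((ex - origin[0]) / resolution).astype(int)
        cy = ((ey - origin[1]) / resolution).astype(int)
        inside = (cx >= 0) & (cx < likelihood_field.shape[1]) & \
                 (cy >= 0) & (cy < likelihood_field.shape[0])
        # Endpoints should land on occupied cells of the offline map.
        p = np.full(len(r), 1e-3)
        p[inside] = np.clip(likelihood_field[cy[inside], cx[inside]], 1e-3, 1.0)
        weights[i] = np.log(p).sum()
    weights = np.exp(weights - weights.max())       # numerically stable normalisation
    return weights / weights.sum()
```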
S6: converting, in real time, the robot position information in the global coordinate system obtained by the differential GPS into the offline two-dimensional grid map coordinate system through the transformation relation calibrated in step S2.
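By way of illustration only, and assuming the global coordinate system is a local metric frame anchored at a reference latitude and longitude, the following Python sketch projects a differential-GPS fix to global metric coordinates with a flat-earth approximation and then applies the transform calibrated in step S2. The reference point and helper names are illustrative.

```python
import math
import numpy as np

EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius in metres

def gps_to_global_xy(lat, lon, ref_lat, ref_lon):
    """Project a differential-GPS fix (degrees) to local east/north metres around a
    reference point, using a flat-earth approximation valid over short ranges."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    x_east = EARTH_RADIUS * d_lon * math.cos(math.radians(ref_lat))
    y_north = EARTH_RADIUS * d_lat
    return np.array([x_east, y_north])

def global_to_map(p_global, R_mw, t_mw):
    """Step S6: move a global-frame position into the offline two-dimensional grid map
    frame using the rotation R_mw and translation t_mw calibrated in step S2."""
    return R_mw @ p_global + t_mw
```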
S7: predicting the position information, attitude information and speed information of the current robot with the omnidirectional motion model.
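By way of illustration only, a minimal Python sketch of this prediction step under an omnidirectional motion model is given below, assuming a state of planar position, heading, body-frame velocities and yaw rate; process noise injection is omitted.

```python
import numpy as np

def predict_omnidirectional(state, dt):
    """Propagate state = (x, y, theta, vx, vy, omega) with an omnidirectional motion model.

    vx, vy are body-frame velocities (an omnidirectional platform can translate in any
    direction), omega is the yaw rate; x, y, theta are expressed in the grid-map frame.
    """
    x, y, theta, vx, vy, omega = state
    # Rotate the body-frame velocity into the map frame and integrate over dt.
    x += (vx * np.cos(theta) - vy * np.sin(theta)) * dt
    y += (vx * np.sin(theta) + vy * np.cos(theta)) * dt
    theta += omega * dt
    return np.array([x, y, theta, vx, vy, omega])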
S8: based on the predicted robot state information of step S7, fusing the observation values obtained from the positioning sensors with an extended Kalman filter to obtain the optimal estimate of the current robot state information, which is taken as the final current positioning information of the robot. The observation values here include measurements obtained directly or indirectly from the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS.
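By way of illustration only, the correction step of the extended Kalman filter can be sketched in Python as follows, assuming the scan-matching pose, the converted differential-GPS position and the optical-flow velocity are stacked into one observation vector with a linear measurement matrix H; the noise covariances are placeholders.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, H, R):
    """Kalman filter correction step applied to the predicted state of step S7.

    x_pred, P_pred : predicted state and covariance from the omnidirectional motion model
    z              : stacked observation, e.g. [x_amcl, y_amcl, theta_amcl, x_gps, y_gps, vx_flow, vy_flow]
    H              : measurement matrix mapping the state to the observation
    R              : measurement noise covariance
    """
    y = z - H @ x_pred                     # innovation (a heading component would be wrapped to [-pi, pi))
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_est = x_pred + K @ y                 # fused state estimate (the output of step S8)
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_est, P_est
```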
As shown in fig. 2, the present invention further provides a multi-sensor fused robot positioning system, which includes a map building module 1 and a real-time positioning module 2.
The map building module is used for acquiring inertial measurement data of the robot through the inertial measurement unit, acquiring mileage data of the robot through the odometer, acquiring two-dimensional point cloud data of the environment around the robot platform through the single-line laser radar, and generating the offline two-dimensional grid map. During map data collection, a closed-loop acquisition route is adopted so that closed-loop constraints can correct the pose and the map information while building the offline two-dimensional grid map.
The real-time positioning module is used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS. Data obtained by the optical flow sensor are processed in real time to calculate the speed information and attitude information of the robot motion. The single-line laser radar data are matched against the offline two-dimensional grid map in real time to obtain the current position information and attitude information of the robot. The robot position information in the global coordinate system obtained by the differential GPS is converted in real time into the map coordinate system through the calibrated transformation relation. The position information, attitude information and speed information of the current robot are predicted with the omnidirectional motion model. Based on the predicted robot state information, the observation values obtained from the sensors are fused with an extended Kalman filter to obtain the optimal estimate of the current robot state information, which is taken as the final current positioning information of the robot.
The positioning module comprises an initialization positioning unit 3, a data receiving and preprocessing unit 4, a robot pose predicted value calculating unit 5, a robot pose observed value calculating unit 6 and an extended Kalman filtering unit 7.
The initialization positioning unit is used for acquiring initialization pose information after the robot is started.
The data receiving and preprocessing unit is used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS, processing the data obtained by the optical flow sensor in real time to calculate the speed information and attitude information of the robot motion, and converting the robot position information in the global coordinate system obtained by the differential GPS into the map coordinate system in real time.
The robot pose predicted value calculation unit is used for predicting the position information, attitude information and speed information of the current robot with the omnidirectional motion model.
The robot pose observed value calculation unit is used for matching the single-line laser radar data against the offline two-dimensional grid map in real time to obtain the current position information and attitude information of the robot.
The extended Kalman filtering unit is used for fusing, based on the predicted robot state information, the observation values obtained from the sensors with an extended Kalman filter to obtain the optimal estimate of the current robot state information, which is taken as the final current positioning information of the robot.
The invention also provides a multi-sensor fusion robot positioning device, which comprises an inertial measurement unit, an odometer, a single-line laser radar, an optical flow sensor and a differential GPS. The inertial measurement unit is used for acquiring inertial measurement data of the robot; the odometer is used for acquiring the mileage data of the robot; the single-line laser radar is used for acquiring two-dimensional point cloud data of the environment around the robot platform; the optical flow sensor is used for calculating the optical flow change of the pixels to obtain the displacement change data of the robot in two directions on a plane; and the differential GPS acquires the position of the robot in the global coordinate system.
The inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS are all connected to a processor. The processor comprises the map building module and the real-time positioning module of the robot positioning system, generates the offline two-dimensional grid map based on the data of the inertial measurement unit, the odometer and the single-line laser radar, and generates the pose information of the current robot according to the inertial measurement unit data, the odometer data, the result of matching the real-time two-dimensional point cloud data of the single-line laser radar against the offline two-dimensional grid map, the data of the optical flow sensor and the data of the differential GPS.
The system is also provided with a memory for storing an off-line two-dimensional grid map, the measurement data of each sensor, a robot state predicted value, a robot state observed value and the current positioning information of the robot.
By arranging the optical flow sensor to collect the speed information and attitude information of the robot and fusing it with the other positioning sensors, robot positioning can still be achieved through the optical flow sensor when other positioning sensors fail. At the same time, the optical flow sensor adapts well to environmental changes such as lighting and seasons, so accuracy is not degraded by lighting factors, and the hardware, software and storage costs of robot positioning can be reduced while accuracy is guaranteed.
The above-described embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention in any way, and other variations and modifications may be made without departing from the spirit of the invention as set forth in the claims.

Claims (7)

1. A multi-sensor fused robot positioning method is characterized by comprising the following steps:
S1: establishing an offline two-dimensional grid map from the data of an inertial measurement unit, an odometer and a single-line laser radar;
S2: establishing the transformation relation between the offline two-dimensional grid map coordinate system and the global coordinate system;
S3: setting the initialization state information of the robot, and acquiring the output information of the single-line laser radar, an optical flow sensor and a differential GPS;
S4: calculating the speed information and attitude information of the robot motion from the output information of the optical flow sensor;
S5: matching the single-line laser radar data against the offline two-dimensional grid map to obtain the current position information and attitude information of the robot;
S6: converting the robot position information in the global coordinate system obtained by the differential GPS into the offline two-dimensional grid map coordinate system through the transformation relation calibrated in step S2;
S7: predicting the position information, attitude information and speed information of the current robot with an omnidirectional motion model;
S8: based on the predicted robot state information of step S7, fusing the observation values obtained from the positioning sensors with an extended Kalman filter to obtain the optimal estimate of the current robot state information, which is taken as the final current positioning information of the robot.
2. The multi-sensor fused robot positioning method according to claim 1,
in step S4, the specific method for acquiring the speed information and attitude information of the robot is as follows:
S41: acquiring image frames at different moments with an optical flow sensor installed vertically downwards, to obtain the moving speed of the pixels;
S42: converting the moving speed of the pixels into the speed information and attitude information of the robot motion.
3. The multi-sensor fused robot positioning method according to claim 2,
the specific implementation method of step S42 is:
based on the transformation relation between the offline two-dimensional grid map coordinate system and the global coordinate system and the transformation relation between the pixel coordinate system and the optical flow sensor coordinate system, there is:

\[ \begin{bmatrix} \Delta x_m \\ \Delta y_m \end{bmatrix} = T_{mc} \begin{bmatrix} \Delta u \, z_c / f_x \\ \Delta v \, z_c / f_y \end{bmatrix} \]

wherein T_mc is the transformation matrix between the offline two-dimensional grid map coordinate system and the optical flow sensor coordinate system; Δx_m and Δy_m are the variations of the robot in the offline two-dimensional grid map coordinate system; f_x and f_y are the values obtained by dividing the camera focal length f by dx and dy, the actual physical sizes of a pixel on the photosensitive chip; Δu and Δv are the variations in the pixel coordinate system; u_0 and v_0 are the origin of the pixel coordinate system; and z_c is the coordinate in the optical flow sensor coordinate system; and the variation in the offline two-dimensional grid map coordinate system is divided by the corresponding elapsed time to obtain the speed information of the robot motion.
4. A multi-sensor fused robot positioning method according to claim 1, 2 or 3,
in step S5, an adaptive Monte Carlo localization method is adopted to realize robot positioning matching based on single-line laser radar data under the offline two-dimensional grid map.
5. A multi-sensor fusion robot positioning system adapted to a multi-sensor fusion robot positioning method according to any one of claims 1 to 4, comprising:
the map building module and the real-time positioning module;
the map building module generates an offline two-dimensional grid map from the inertial measurement data, the mileage data and the two-dimensional point cloud data, and adopts closed-loop constraints to correct the pose and the map information during data acquisition for the offline two-dimensional grid map;
the real-time positioning module is used for acquiring the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS.
6. The multi-sensor fused robot positioning system of claim 5,
the real-time positioning module comprises:
the initialization positioning unit is used for acquiring initialization pose information after the robot is started;
the data receiving and preprocessing unit is used for acquiring output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS, processing data obtained by the optical flow sensor, calculating speed information and attitude information of robot motion, and converting robot position information under a global coordinate system obtained by the differential GPS into a map coordinate system;
the robot pose predicted value calculation unit is used for predicting the position information, attitude information and speed information of the current robot with an omnidirectional motion model;
the robot pose observed value calculation unit is used for matching the single-line laser radar data against the offline two-dimensional grid map in real time to obtain the current position information and attitude information of the robot;
and the extended Kalman filtering unit is used for fusing, based on the predicted robot state information, the output information of the inertial measurement unit, the odometer, the single-line laser radar, the optical flow sensor and the differential GPS with an extended Kalman filter to obtain the estimate of the current robot state information, which is taken as the final current positioning information of the robot.
7. A multi-sensor fusion robot positioning device applied to the multi-sensor fusion robot positioning system according to claim 5, comprising:
the inertia measurement unit is used for acquiring inertia measurement data of the robot;
the odometer is used for acquiring the mileage data of the robot;
the single-line laser radar is used for acquiring two-dimensional point cloud data of the surrounding environment of the robot platform;
the optical flow sensor is used for calculating the optical flow change of the pixels to obtain the displacement change data of the robot in two directions on a plane;
and the differential GPS acquires the position of the robot in the global coordinate system.
CN202211219785.2A 2022-10-08 2022-10-08 Multi-sensor fusion robot positioning method, system and device Active CN115307646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211219785.2A CN115307646B (en) 2022-10-08 2022-10-08 Multi-sensor fusion robot positioning method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211219785.2A CN115307646B (en) 2022-10-08 2022-10-08 Multi-sensor fusion robot positioning method, system and device

Publications (2)

Publication Number Publication Date
CN115307646A true CN115307646A (en) 2022-11-08
CN115307646B CN115307646B (en) 2023-03-24

Family

ID=83866042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211219785.2A Active CN115307646B (en) 2022-10-08 2022-10-08 Multi-sensor fusion robot positioning method, system and device

Country Status (1)

Country Link
CN (1) CN115307646B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115655302A (en) * 2022-12-08 2023-01-31 安徽蔚来智驾科技有限公司 Laser odometer implementation method, computer equipment, storage medium and vehicle

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450577A (en) * 2017-07-25 2017-12-08 天津大学 UAV Intelligent sensory perceptual system and method based on multisensor
CN108181636A (en) * 2018-01-12 2018-06-19 中国矿业大学 Petrochemical factory's crusing robot environmental modeling and map structuring device and method
CN108680156A (en) * 2018-02-26 2018-10-19 北京克路德人工智能科技有限公司 Robot positioning method for multi-sensor data fusion
KR20190041315A (en) * 2017-10-12 2019-04-22 한화디펜스 주식회사 Inertial-based navigation device and Inertia-based navigation method based on relative preintegration
CN110873883A (en) * 2019-11-29 2020-03-10 上海有个机器人有限公司 Positioning method, medium, terminal and device integrating laser radar and IMU
CN111077907A (en) * 2019-12-30 2020-04-28 哈尔滨理工大学 Autonomous positioning method of outdoor unmanned aerial vehicle
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion
US20200309529A1 (en) * 2019-03-29 2020-10-01 Trimble Inc. Slam assisted ins
CN113253297A (en) * 2021-06-21 2021-08-13 中国人民解放军国防科技大学 Map construction method and device integrating laser radar and depth camera
WO2022142992A1 (en) * 2020-12-29 2022-07-07 深圳市普渡科技有限公司 Fusion positioning method and apparatus, device and computer-readable storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107450577A (en) * 2017-07-25 2017-12-08 天津大学 UAV Intelligent sensory perceptual system and method based on multisensor
KR20190041315A (en) * 2017-10-12 2019-04-22 한화디펜스 주식회사 Inertial-based navigation device and Inertia-based navigation method based on relative preintegration
CN108181636A (en) * 2018-01-12 2018-06-19 中国矿业大学 Petrochemical factory's crusing robot environmental modeling and map structuring device and method
CN108680156A (en) * 2018-02-26 2018-10-19 北京克路德人工智能科技有限公司 Robot positioning method for multi-sensor data fusion
US20200309529A1 (en) * 2019-03-29 2020-10-01 Trimble Inc. Slam assisted ins
CN110873883A (en) * 2019-11-29 2020-03-10 上海有个机器人有限公司 Positioning method, medium, terminal and device integrating laser radar and IMU
CN111077907A (en) * 2019-12-30 2020-04-28 哈尔滨理工大学 Autonomous positioning method of outdoor unmanned aerial vehicle
CN111089585A (en) * 2019-12-30 2020-05-01 哈尔滨理工大学 Mapping and positioning method based on sensor information fusion
WO2022142992A1 (en) * 2020-12-29 2022-07-07 深圳市普渡科技有限公司 Fusion positioning method and apparatus, device and computer-readable storage medium
CN113253297A (en) * 2021-06-21 2021-08-13 中国人民解放军国防科技大学 Map construction method and device integrating laser radar and depth camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周怡: "Research on Mobile Robot Positioning Methods Based on Multi-Sensor Information Fusion", China Master's Theses Full-text Database, Information Science and Technology Series *
李巍: "Research on Indoor Mobile Robot Positioning Algorithms Based on Multi-Sensor Fusion", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115655302A (en) * 2022-12-08 2023-01-31 安徽蔚来智驾科技有限公司 Laser odometer implementation method, computer equipment, storage medium and vehicle

Also Published As

Publication number Publication date
CN115307646B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN108181636B (en) Environment modeling and map building device and method for petrochemical plant inspection robot
CN109100730B (en) Multi-vehicle cooperative rapid map building method
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
CN110751123B (en) Monocular vision inertial odometer system and method
CN112967392A (en) Large-scale park mapping and positioning method based on multi-sensor contact
CN111623773B (en) Target positioning method and device based on fisheye vision and inertial measurement
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN115307646B (en) Multi-sensor fusion robot positioning method, system and device
CN113238554A (en) Indoor navigation method and system based on SLAM technology integrating laser and vision
CN115585818A (en) Map construction method and device, electronic equipment and storage medium
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
CN108646760B (en) Monocular vision based mobile robot target tracking and platform control system and method
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN105807083A (en) Real-time speed measuring method and system for unmanned aerial vehicle
CN109903309B (en) Robot motion information estimation method based on angular optical flow method
Wang et al. Micro aerial vehicle navigation with visual-inertial integration aided by structured light
CN111696155A (en) Monocular vision-based multi-sensing fusion robot positioning method
CN112747752B (en) Vehicle positioning method, device, equipment and storage medium based on laser odometer
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
Yang Research on key technologies of UAV navigation and positioning system
Liu et al. A multi-sensor fusion with automatic vision-LiDAR calibration based on Factor graph joint optimization for SLAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240409

Address after: Room 509-2, 5th Floor, Building 1, Dingchuang Wealth Center, Cangqian Street, Yuhang District, Hangzhou City, Zhejiang Province, 311100

Patentee after: Hangzhou Yixun Technology Co.,Ltd.

Country or region after: China

Address before: Room 303-5, block B, building 1, 268 Shiniu Road, nanmingshan street, Liandu District, Lishui City, Zhejiang Province 323000

Patentee before: Zhejiang Guangpo Intelligent Technology Co.,Ltd.

Country or region before: China