CN114689035A - Long-range farmland map construction method and system based on multi-sensor fusion - Google Patents
Long-range farmland map construction method and system based on multi-sensor fusion Download PDFInfo
- Publication number
- CN114689035A (application number CN202210307473.0A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- robot
- time
- matrix
- cloud data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3807—Creation or updating of map data characterised by the type of data
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1652—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
- G01C21/1656—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3841—Data obtained from two or more sources, e.g. probe vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03H—IMPEDANCE NETWORKS, e.g. RESONANT CIRCUITS; RESONATORS
- H03H17/00—Networks using digital techniques
- H03H17/02—Frequency selective networks
- H03H17/0248—Filters characterised by a particular frequency response or filtering method
- H03H17/0255—Filters based on statistics
- H03H17/0257—KALMAN filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
A long-range farmland map construction method and system based on multi-sensor fusion comprise the following steps: the robot continuously acquires positioning data through its GPS module, and in the time period between two frames of positioning data updates the positioning data with its inertial sensor, so as to obtain a state variable containing the real-time position and heading of the robot; the robot scans external environment information in real time through its lidar and converts it into current point cloud data, while simultaneously scanning RGB information of the external environment in real time through a visual sensor; the RGB information is compared with historically collected RGB information, and the historical point cloud data corresponding to historical RGB information whose similarity exceeds a threshold, together with the current point cloud data, is fed back to the mapping module of the lidar, yielding enhanced point cloud data under the current state variable; after the robot finishes walking the preset route of the farmland map construction task, the point cloud data under each state variable are fused into a point cloud map, which serves as the execution result of the farmland map construction task.
Description
Technical Field
The invention relates to the technical fields of multi-sensor fusion, simultaneous localization and mapping, and computer vision, and in particular to a long-range farmland map construction method and system based on multi-sensor fusion.
Background
Throughout the agricultural production chain, the management of mid- and late-stage operations occupies an important position. The traditional management mode requires a large amount of labor and is time-consuming, which greatly restricts agricultural development; the heavy labor input, serious time consumption and low efficiency of late-stage management also raise the cost of agricultural production. Using unmanned agricultural machinery (intelligent agricultural machinery for short) for operation management has therefore become a popular research topic, and in the operation of intelligent agricultural machinery, environment perception technology is particularly important.
The driving environment of intelligent agricultural machinery is sensed by Simultaneous Localization and Mapping (SLAM) technology. SLAM allows a sensor to perceive the external environment, collect environmental information to generate a full-scene map, and thereby realize self-positioning of the intelligent agricultural machinery in an unknown environment. SLAM was first proposed by Smith, Self and Cheeseman and, owing to its important theoretical and practical value, has been applied to autonomous navigation by many scholars. At present, SLAM has become a key technology in the control and development of intelligent agricultural machinery, and the lidar sensor, with SLAM at its core, is one of the important sensors for its autonomous navigation. Lidar is little affected by outdoor weather factors such as illumination and wind, and offers 360-degree omnidirectional sensing. Lidar sensors perceive the surrounding environment well in actual farmland environments and are of significant research value for improving the walking accuracy of autonomous navigation.
The most basic technical requirement for unmanned driving of intelligent agricultural machinery in an agricultural scene is full-scene map construction: its accuracy directly determines the accuracy of later path driving, and unmanned positioning in a farmland scene requires centimeter-level accuracy. However, in the long-range mapping process of lidar, the sensed external environment changes in real time, and because the SLAM technology of a single lidar sensor performs loop detection based only on distance, it lacks visual information about the external environment; loop detection efficiency is therefore low, mapping accuracy is greatly reduced, and mapping with a single lidar sensor alone can hardly meet practical application requirements. Furthermore, the mapping process requires continuous positioning signals at a frequency above 10 Hz, which traditional GPS positioning can hardly provide.
Disclosure of Invention
The invention aims to overcome the problem of low long-range mapping accuracy in the prior art, and provides a long-range, accurate full-scene mapping method based on multi-sensor fusion. Key point 1, fusion of a visual sensor and a lidar sensor: because the loop detection model based on the lidar sensor judges by distance alone, the lack of visually sensed information about the external environment leads to low loop detection efficiency. Therefore, on the basis of SLAM mapping with a single lidar sensor, a visual sensor is introduced to scan RGB information of the external environment in real time, and the detected information is fed back to the SLAM module of the lidar in real time through a visual bag-of-words loop detection technique, so that the perceived environment map is further compensated and optimized in real time. Technical effect: in long-range mapping, the visual-sensor compensation of loop detection proposed by the invention markedly improves mapping accuracy, with the effect being most pronounced at map turns and edges.
Key point 2, IMU and GPS fusion: before new GPS data arrives, the last frame of GPS data and the current IMU data are input into a Kalman filter, which immediately predicts the position of the mapping robot; once new GPS data is received, position prediction continues on the basis of the current GPS data. Technical effect: when the GPS signal is temporarily lost or the GPS positioning frequency drops suddenly, the proposed algorithm compensates the GPS data in real time, thereby meeting the mapping accuracy requirement.
Aiming at the defects of the prior art, the invention provides a long-range farmland map construction method based on multi-sensor fusion, which comprises the following steps:
step 1, after a farmland map construction task is started, the robot continuously acquires positioning data through its GPS module, and in the time period between two frames of positioning data updates the positioning data with its inertial sensor, so as to obtain a state variable containing the real-time position and heading of the robot;
step 2, the robot scans external environment information in real time through its lidar and converts it into current point cloud data; meanwhile, the robot scans RGB information of the external environment in real time through a visual sensor and compares it with historically collected RGB information, and the historical point cloud data corresponding to historical RGB information whose similarity exceeds a threshold, together with the current point cloud data, is fed back to the mapping module of the lidar to obtain enhanced point cloud data under the current state variable;
step 3, steps 1 and 2 are repeated until the robot finishes the preset route of the farmland map construction task; the current point cloud data or enhanced point cloud data under each state variable are then fused to obtain a point cloud map, which is converted into a two-dimensional map as the execution result of the farmland map construction task.
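As a minimal, hypothetical sketch of step 3 (the `State` type, frame tuples and function names are illustrative, not part of the patent), the fusion of per-state point clouds can be pictured as transforming each scan, enhanced by any fed-back historical cloud, into the map frame using the state variable and concatenating:

```python
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float    # east position (m)
    y: float    # north position (m)
    psi: float  # heading (rad)

def build_map(frames):
    """Fuse per-state clouds into one map. `frames` is a list of
    (state, cloud, loop_match) tuples, where `cloud` is a list of (x, y)
    points in the robot frame and `loop_match` is an optional historical
    cloud fed back by loop closure (None if no similarity exceeded the
    threshold)."""
    fused = []
    for state, cloud, loop_match in frames:
        # enhanced point cloud: current scan plus any fed-back historical cloud
        points = list(cloud) + list(loop_match or [])
        c, s = math.cos(state.psi), math.sin(state.psi)
        for px, py in points:
            # transform from robot frame to map frame using the state variable
            fused.append((state.x + c * px - s * py,
                          state.y + s * px + c * py))
    return fused
```

The two-dimensional conversion mentioned in step 3 would then amount to projecting this fused cloud onto a grid.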
In the long-range farmland map construction method based on multi-sensor fusion, step 1 comprises:
step 11, the Kalman filter in the robot selects the vehicle pose X_i = (x_i, y_i, ψ_i)^T as the state variable; the estimated state X_{i-1} at time t_{i-1} is driven by the system noise G_{i-1}, expressed in equation form as:
X_i = A_{i,i-1} X_{i-1} + W_{i-1} G_{i-1} (1)
where A_{i,i-1} is the one-step transition matrix from t_{i-1} to t_i, and W_{i-1} is the system noise driving matrix;
in the GPS positioning system, the measurement of X_i satisfies a linear relationship, with measurement equation:
Z_i = H_i X_i + V_i (2)
where H_i is the measurement matrix, V_i is the measurement noise sequence, and Z_i is the positioning data at time i;
step 12, let Q_i be the variance matrix of the system noise sequence and R_i the variance matrix of the measurement noise sequence, both non-negative definite; when the Kalman filter is initialized, the position coordinates and heading of the first GPS frame are taken as the initial filter state X_0 = (x_0, y_0, ψ_0)^T, with initial estimated mean square error matrix:
P_0 = diag(ed^2, ed^2, ey^2) (3)
where ed is the distance estimation noise and ey is the heading estimation noise; when the velocity information of the inertial sensor and the angular velocity information of the gyroscope are updated, a predicted solution is obtained by dead reckoning, with system noise variance matrix Q_{i-1}:
Q_{i-1} = diag(er^2, ei^2) (4)
where er is the wheel speed noise and ei is the angular velocity noise;
in the time update, the one-step transition matrix A_{i,i-1} from time i-1 to time i is calculated:
A_{i,i-1} =
[ 1  0  −td·sin ψ_{i-1} ]
[ 0  1   td·cos ψ_{i-1} ]
[ 0  0   1 ] (5)
The system noise driving matrix W_{i-1} in the time update is calculated by equations (6) and (7). When the rotation between the current and next poses is small, i.e. w·dt ≤ 0.0001:
W_{i-1} =
[ dt·cos ψ_{i-1}  0  ]
[ dt·sin ψ_{i-1}  0  ]
[ 0               dt ] (6)
When the rotation angle between the current and next poses is large, i.e. w·dt > 0.0001:
W_{i-1} =
[ dt·(sin(ψ_{i-1} + tr) − sin ψ_{i-1})/tr  0  ]
[ dt·(cos ψ_{i-1} − cos(ψ_{i-1} + tr))/tr  0  ]
[ 0                                        dt ] (7)
where td is the distance the vehicle advances in the time period and tr is the angle the vehicle turns in the time period:
td = v·dt (8)
tr = w·dt (9)
When the GPS positioning data is updated, the GPS positioning result is fused with the current dead reckoning result.
First, the estimated noise parameters are obtained according to the current GPS working state, and the measurement noise variance matrix R_i of the system is calculated in the form of formula (3); at this time the measurement Z_i = (x_i, y_i, ψ_i)^T is the positioning coordinate and heading obtained by the GPS at that moment.
The one-step prediction X̂_{i,i-1} and its mean square error matrix P_{i,i-1} are obtained by dead reckoning:
X̂_{i,i-1} = A_{i,i-1} X̂_{i-1} (10)
P_{i,i-1} = A_{i,i-1} P_{i-1} A_{i,i-1}^T + W_{i-1} Q_{i-1} W_{i-1}^T (11)
The filter gain K is then calculated:
S_i = H_i P_{i,i-1} H_i^T + R_i (12)
K_i = P_{i,i-1} H_i^T S_i^{-1} (13)
The state estimate is calculated from equations (14) and (15), where X̂_{i,i-1} is the prediction result obtained by dead reckoning:
X̃_i = Z_i − H_i X̂_{i,i-1} (14)
X̂_i = X̂_{i,i-1} + K_i X̃_i (15)
Finally, the mean square error matrix P is updated, where I is the 3 × 3 identity matrix:
P_i = (I − K_i H_i) P_{i,i-1} (16)
Combining equations (1) to (16), the state variable X_i of the robot is finally output.
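The GPS/IMU fusion of steps 11 and 12 can be sketched as one predict/update cycle of a Kalman filter over the pose (x, y, ψ). Since the patent's equation images are not reproduced in this text, the concrete matrix forms below (transition Jacobian, noise driving matrix, diagonal noise matrices built from er, ei, ed, ey) are the standard dead-reckoning forms and should be read as assumptions, not the patent's exact formulas:

```python
import numpy as np

def predict(X, P, v, w, dt, er, ei):
    """Time update by dead reckoning: propagate pose (x, y, psi) with wheel
    speed v and yaw rate w, and propagate the mean square error matrix."""
    x, y, psi = X
    td, tr = v * dt, w * dt                      # advance distance, turn angle
    X_pred = np.array([x + td * np.cos(psi),
                       y + td * np.sin(psi),
                       psi + tr])
    A = np.array([[1, 0, -td * np.sin(psi)],
                  [0, 1,  td * np.cos(psi)],
                  [0, 0,  1]])                   # one-step transition matrix
    W = np.array([[dt * np.cos(psi), 0],
                  [dt * np.sin(psi), 0],
                  [0, dt]])                      # noise driving matrix (small-angle form)
    Q = np.diag([er**2, ei**2])                  # system noise variance matrix
    P_pred = A @ P @ A.T + W @ Q @ W.T
    return X_pred, P_pred

def gps_update(X_pred, P_pred, Z, ed, ey):
    """Measurement update when a GPS fix Z = (x, y, psi) arrives."""
    H = np.eye(3)                                # GPS measures the pose directly
    R = np.diag([ed**2, ed**2, ey**2])           # measurement noise variance matrix
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # filter gain
    X = X_pred + K @ (Z - H @ X_pred)            # state estimate
    P = (np.eye(3) - K @ H) @ P_pred             # updated mean square error matrix
    return X, P
```

Between GPS frames only `predict` runs; each time a fix arrives, `gps_update` fuses it with the current dead-reckoning result, matching the behavior described above.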
In the long-range farmland map construction method based on multi-sensor fusion, step 2 comprises:
step 21, the received frame of point cloud data is projected onto a range image, in which each point represents one pixel; the value d_i of a pixel is the Euclidean distance from the target point D_i to the robot's lidar; an image-based segmentation method is applied to the range image to group points into clusters, recording for each point its class label, its coordinates (x, y) in the range image, and its value d_i;
step 22, the range image is divided equally into several sub-images; the points in each row of a sub-image are sorted by their smoothness c, and a threshold c_th is set to judge whether each point is a planar point or a corner point;
step 23, the lidar odometry module finds the transformation between two scans by performing scan matching of the planar points and corner points;
step 24, the feature points are registered against the surrounding point cloud map to further refine the pose transformation, but at a lower frequency; following LeGO-LOAM, each feature point set is stored rather than a single point cloud.
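Steps 21 and 22 can be illustrated with a toy smoothness computation over one row of the range image. The formula below is the LOAM/LeGO-LOAM smoothness convention (summed range differences to neighbors, normalized by the point's own range), which the patent builds on but does not spell out here, so treat the exact form as an assumption:

```python
def smoothness(row, k, n=5):
    """LOAM-style smoothness c of point k in one row of the range image:
    the summed range difference to the n neighbours on each side,
    normalised by the point's own range d_k (larger c -> sharper feature)."""
    d = row[k]
    s = sum(row[j] - d for j in range(k - n, k + n + 1) if j != k)
    return abs(s) / (2 * n * d)

def classify(row, c_th=0.1, n=5):
    """Label each interior point of the row as a corner point (c > c_th)
    or a planar point (c <= c_th), as in step 22."""
    return {k: ("corner" if smoothness(row, k, n) > c_th else "planar")
            for k in range(n, len(row) - n)}
```

A flat wall gives c = 0 everywhere (all planar points), while a depth discontinuity between clusters produces large c and hence corner points.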
In the long-range farmland map construction method based on multi-sensor fusion, step 2 further comprises:
the current RGB information is converted into a bag-of-words BRIEF descriptor B(p) by the following formula, and the descriptor is then associated, through its timestamp, with the corresponding point cloud data, serving as the bag-of-words vector of the current point cloud data:
B_i(p) = 1 if I(p + a_i) < I(p + b_i), and 0 otherwise
where B_i(p) is the i-th bit of the descriptor, I(·) is the intensity of a pixel in the smoothed image, and a_i and b_i are the two-dimensional offsets of the i-th test point pair relative to the patch center;
the descriptor B(p) is converted into the bag-of-words space to form a bag-of-words vector v_t describing the current visual image, and the similarity of two bag-of-words vectors is measured in that space by the score
s(v_1, v_2) = 1 − (1/2) · || v_1/||v_1|| − v_2/||v_2|| ||
the score is normalized by the best score that the vector v_t is expected to obtain in this sequence, yielding the normalized similarity score η:
η(v_t, v_t') = s(v_t, v_t') / s(v_t, v_{t−Δt})
where v_{t−Δt} is the bag-of-words vector of the previous image;
if η is greater than the threshold, the historical point cloud data corresponding to the historical RGB information whose similarity exceeds the threshold, together with the current point cloud data, is fed back to the mapping module of the lidar.
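The similarity scoring and normalization above can be sketched as follows. The L1-based score and the normalization by the previous image's score follow the standard visual bag-of-words (DBoW-style) formulation; since the patent's formula images are not reproduced in this text, treat the concrete expressions as assumptions:

```python
def bow_similarity(v1, v2):
    """Score s(v1, v2) = 1 - 0.5 * | v1/|v1| - v2/|v2| |_1 between two
    bag-of-words vectors, each a dict mapping word id -> weight."""
    n1 = sum(abs(w) for w in v1.values()) or 1.0
    n2 = sum(abs(w) for w in v2.values()) or 1.0
    words = set(v1) | set(v2)
    l1 = sum(abs(v1.get(w, 0) / n1 - v2.get(w, 0) / n2) for w in words)
    return 1.0 - 0.5 * l1

def loop_candidates(v_t, v_prev, history, eta_th=0.7):
    """Normalize each historical score by s(v_t, v_prev), the score the
    current image obtains against the previous one, and keep the indices
    of historical images whose normalized score eta exceeds the threshold."""
    ref = bow_similarity(v_t, v_prev) or 1e-9   # guard against division by zero
    return [i for i, v in enumerate(history)
            if bow_similarity(v_t, v) / ref > eta_th]
```

The indices returned by `loop_candidates` would identify the historical point clouds to feed back to the lidar mapping module.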
The invention also provides a long-range farmland map construction system based on multi-sensor fusion, which comprises:
an initial module, used for continuously acquiring positioning data through the GPS module of the robot, and updating the positioning data with the inertial sensor of the robot in the time period between two frames of positioning data, so as to obtain a state variable containing the real-time position and heading of the robot;
a scanning module, used for scanning external environment information in real time through the lidar of the robot and converting it into current point cloud data; meanwhile, the robot scans RGB information of the external environment in real time through a visual sensor and compares it with historically collected RGB information, and the historical point cloud data corresponding to historical RGB information whose similarity exceeds a threshold, together with the current point cloud data, is fed back to the mapping module of the lidar to obtain enhanced point cloud data under the current state variable;
a mapping module, used for fusing the current point cloud data or enhanced point cloud data under each state variable into a point cloud map after the robot finishes the preset route of the farmland map construction task, and converting the point cloud map into a two-dimensional map as the execution result of the farmland map construction task.
In the long-range farmland map construction system based on multi-sensor fusion, the initial module comprises:
a module 11, configured to invoke the Kalman filter in the robot to select the vehicle pose X_i = (x_i, y_i, ψ_i)^T as the state variable; the estimated state X_{i-1} at time t_{i-1} is driven by the system noise G_{i-1}, expressed in equation form as:
X_i = A_{i,i-1} X_{i-1} + W_{i-1} G_{i-1} (1)
where A_{i,i-1} is the one-step transition matrix from t_{i-1} to t_i, and W_{i-1} is the system noise driving matrix;
in the GPS positioning system, the measurement of X_i satisfies a linear relationship, with measurement equation:
Z_i = H_i X_i + V_i (2)
where H_i is the measurement matrix, V_i is the measurement noise sequence, and Z_i is the positioning data at time i;
a module 12, configured to let Q_i be the variance matrix of the system noise sequence and R_i the variance matrix of the measurement noise sequence, both non-negative definite; when the Kalman filter is initialized, the position coordinates and heading of the first GPS frame are taken as the initial filter state X_0 = (x_0, y_0, ψ_0)^T, with initial estimated mean square error matrix:
P_0 = diag(ed^2, ed^2, ey^2) (3)
where ed is the distance estimation noise and ey is the heading estimation noise; when the velocity information of the inertial sensor and the angular velocity information of the gyroscope are updated, a predicted solution is obtained by dead reckoning, with system noise variance matrix Q_{i-1}:
Q_{i-1} = diag(er^2, ei^2) (4)
where er is the wheel speed noise and ei is the angular velocity noise;
in the time update, the one-step transition matrix A_{i,i-1} from time i-1 to time i is calculated:
A_{i,i-1} =
[ 1  0  −td·sin ψ_{i-1} ]
[ 0  1   td·cos ψ_{i-1} ]
[ 0  0   1 ] (5)
The system noise driving matrix W_{i-1} in the time update is calculated by equations (6) and (7). When the rotation between the current and next poses is small, i.e. w·dt ≤ 0.0001:
W_{i-1} =
[ dt·cos ψ_{i-1}  0  ]
[ dt·sin ψ_{i-1}  0  ]
[ 0               dt ] (6)
When the rotation angle between the current and next poses is large, i.e. w·dt > 0.0001:
W_{i-1} =
[ dt·(sin(ψ_{i-1} + tr) − sin ψ_{i-1})/tr  0  ]
[ dt·(cos ψ_{i-1} − cos(ψ_{i-1} + tr))/tr  0  ]
[ 0                                        dt ] (7)
where td is the distance the vehicle advances in the time period and tr is the angle the vehicle turns in the time period:
td = v·dt (8)
tr = w·dt (9)
When the GPS positioning data is updated, the GPS positioning result is fused with the current dead reckoning result.
First, the estimated noise parameters are obtained according to the current GPS working state, and the measurement noise variance matrix R_i of the system is calculated in the form of formula (3); at this time the measurement Z_i = (x_i, y_i, ψ_i)^T is the positioning coordinate and heading obtained by the GPS at that moment.
The one-step prediction X̂_{i,i-1} and its mean square error matrix P_{i,i-1} are obtained by dead reckoning:
X̂_{i,i-1} = A_{i,i-1} X̂_{i-1} (10)
P_{i,i-1} = A_{i,i-1} P_{i-1} A_{i,i-1}^T + W_{i-1} Q_{i-1} W_{i-1}^T (11)
The filter gain K is then calculated:
S_i = H_i P_{i,i-1} H_i^T + R_i (12)
K_i = P_{i,i-1} H_i^T S_i^{-1} (13)
The state estimate is calculated from equations (14) and (15), where X̂_{i,i-1} is the prediction result obtained by dead reckoning:
X̃_i = Z_i − H_i X̂_{i,i-1} (14)
X̂_i = X̂_{i,i-1} + K_i X̃_i (15)
Finally, the mean square error matrix P is updated, where I is the 3 × 3 identity matrix:
P_i = (I − K_i H_i) P_{i,i-1} (16)
Combining equations (1) to (16), the state variable X_i of the robot is finally output.
In the long-range farmland map construction system based on multi-sensor fusion, the scanning module comprises:
a module 21, for projecting the received frame of point cloud data onto a range image, in which each point represents one pixel; the value d_i of a pixel is the Euclidean distance from the target point D_i to the robot's lidar; an image-based segmentation method is applied to the range image to group points into clusters, recording for each point its class label, its coordinates (x, y) in the range image, and its value d_i;
a module 22, for dividing the range image equally into several sub-images, sorting the points in each row of a sub-image by their smoothness c, and setting a threshold c_th to judge whether each point is a planar point or a corner point;
a module 23, for enabling the lidar odometry module to find the transformation between two scans by performing scan matching of the planar points and corner points;
a module 24, for registering the feature points against the surrounding point cloud map to further refine the pose transformation, but at a lower frequency; following LeGO-LOAM, each feature point set is stored rather than a single point cloud.
In the long-range farmland map construction system based on multi-sensor fusion, the scanning module is further used for converting the current RGB information into a bag-of-words BRIEF descriptor B(p) by the following formula, and then associating the descriptor, through its timestamp, with the corresponding point cloud data, serving as the bag-of-words vector of the current point cloud data:
B_i(p) = 1 if I(p + a_i) < I(p + b_i), and 0 otherwise
where B_i(p) is the i-th bit of the descriptor, I(·) is the intensity of a pixel in the smoothed image, and a_i and b_i are the two-dimensional offsets of the i-th test point pair relative to the patch center;
the descriptor B(p) is converted into the bag-of-words space to form a bag-of-words vector v_t describing the current visual image, and the similarity of two bag-of-words vectors is measured in that space by the score
s(v_1, v_2) = 1 − (1/2) · || v_1/||v_1|| − v_2/||v_2|| ||
the score is normalized by the best score that the vector v_t is expected to obtain in this sequence, yielding the normalized similarity score η:
η(v_t, v_t') = s(v_t, v_t') / s(v_t, v_{t−Δt})
where v_{t−Δt} is the bag-of-words vector of the previous image;
if η is greater than the threshold, the historical point cloud data corresponding to the historical RGB information whose similarity exceeds the threshold, together with the current point cloud data, is fed back to the mapping module of the lidar.
The invention also provides a storage medium for storing a program for executing any one of the long-range farmland map construction methods based on multi-sensor fusion.
The invention further provides a client used for the long-range farmland map construction system based on the multi-sensor fusion.
According to the scheme, the invention has the advantages that:
Through multi-sensor fusion, the invention achieves real-time localization and high-precision mapping in the long-range mapping module. On the basis of meeting the localization frequency and accuracy requirements, a vision sensor is introduced to scan the RGB information of the external environment in real time, and the detected information is fed back in real time to the lidar SLAM mapping pipeline through visual bag-of-words loop detection, so that the perceived environment map is further compensated and optimized and high precision is achieved. Compared with the mapping accuracy of a single sensor, the long-range mapping accuracy of the invention is improved in all respects, with especially pronounced gains at the turns and edges of the map.
Drawings
FIG. 1 is a real shooting diagram of a mapping hardware integrated carrier;
FIG. 2 is a diagram of a main improved model of loop detection optimization;
FIG. 3 is a diagram of a main improved model based on IMU and Kalman filtering optimization;
FIG. 4 is an overall block flow diagram;
FIG. 5a is a diagram of a conventional prior art effect;
fig. 5b is a diagram of the improved technical effect of the invention.
Detailed Description
Based on the problems, the invention optimizes the whole scene graph building process through two improvement measures to meet the actual application requirements:
This addresses the low mapping accuracy caused by the inefficiency of single-sensor SLAM loop detection when the sensed external environment changes in real time during long-range mapping. The invention provides a long-range loop detection technique: a vision sensor is introduced to scan the RGB information of the external environment in real time, and the detected information is fed back in real time to the lidar SLAM mapping module through a binary bag-of-words loop detection technique, so that the perceived environment map is further compensated and optimized in real time and mapping errors are greatly reduced.
An Inertial Measurement Unit (IMU) can provide the linear velocity and angular velocity of a moving object over short periods, and its transmission frequency can be up to 100 times that of GPS. The IMU module is therefore fused on the basis of GPS positioning. When the GPS signal is temporarily lost or the GPS positioning frequency drops, the IMU feedback data are passed through a Kalman filter to predict the position and attitude of the agricultural machine and its map position, guaranteeing the localization requirement of map construction.
The inventors found that the above defects can be overcome by (1) introducing a vision sensor to scan the RGB information of the external environment in real time, and feeding the detected information back in real time to the lidar SLAM mapping module through a visual bag-of-words loop detection technique, so as to further compensate and optimize the perceived environment map in real time; and (2) using the IMU sensor and a Kalman filter to predict the position between two frames of GPS data, so that the influence on mapping accuracy of a temporary loss of GPS signal or a sudden drop in GPS positioning frequency is avoided. In this way the map can be optimized and updated in real time through multi-sensor fusion during map construction, greatly improving mapping accuracy. In order to make the aforementioned features and advantages of the present invention more comprehensible, embodiments accompanied with figures are described in detail below.
The hardware integration carrier of the method proposed in the present invention is shown in fig. 1.
The specific operation flow chart of the method proposed by the invention is shown in fig. 2 and 3.
The overall algorithm of the present invention is described in fig. 4. The specific implementation mode is as follows:
1) systematically integrating hardware and initializing a similarity function value (selected according to the drawing construction precision requirement of practical application);
2) fusing the GPS and IMU sensors through a Kalman filter to form a multi-sensor fusion positioning module, specifically including:
2.1) Measurement equation of the GPS: the Kalman filter in the system selects the state variable as the vehicle pose X_i = (x_i, y_i, ψ_i). The estimated state X_i at time t_i is driven by the system noise G_{i-1}, expressed in equation form as:
X_i = A_{i,i-1} X_{i-1} + W_{i-1} G_{i-1} (1)
where A_{i,i-1} is the one-step transition matrix from t_{i-1} to t_i (given in equation (5)), and W_{i-1} is the system noise driving matrix (given in equations (6) and (7)).
In the GPS positioning system, the measurement of X_i satisfies a linear relationship, and the measurement equation of equation (1) takes the linear form:
Z_i = H_i X_i + V_i (2)
where H_i is the measurement matrix and V_i is the measurement noise sequence.
2.2) Kalman filter prediction process: let Q_i be the variance matrix of the system noise sequence and R_i the variance matrix of the measurement noise sequence; both are non-negative definite matrices. When the filter is initialized, the position coordinates and heading of the first GPS frame are taken as the initial filter state X_0 = (x_0, y_0, ψ_0), with initial estimated mean square error matrix
where ed is the distance estimation noise and ey is the heading estimation noise. When the IMU speed information and the gyroscope angular velocity information are updated, a predicted solution is obtained by dead reckoning, and the system noise variance matrix Q_{i-1} is:
where er is the wheel speed noise and ei is the angular velocity noise.
In the time update, the one-step transition matrix A_{i,i-1} from time i-1 to time i is calculated:
The system noise driving matrix W_{i-1} in the time update is calculated by equations (6) and (7). When the rotation between the current pose and the next pose is small, i.e. w*dt ≤ 0.0001:
When the rotation angle between the current pose and the next pose is large, i.e. w*dt > 0.0001:
where td is the distance the vehicle advances in the period and tr is the angle the vehicle turns in the period:
td = v*dt (8)
tr = w*dt (9)
When the GPS positioning data are updated, the GPS positioning result is fused with the current dead reckoning result.
First, the estimated noise parameters are obtained according to the current GPS working state, and the measurement noise variance matrix R_i of the system is calculated by equation (3). At this time, the measured value Z_i = (x_i, y_i, ψ_i) is the position coordinates and heading obtained by the GPS at that moment.
The filter gain K is calculated:
The state estimate is calculated from equations (14) and (15), in which the predicted term is the result obtained by dead reckoning.
Finally, the mean square error matrix P is updated, where I is the 3 × 3 identity matrix:
Combining equations (1) to (16), the carrier state variable X_i, i.e. the position and heading of the vehicle, is finally output from equation (15).
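The predict/update cycle of steps 2.1)–2.2) can be sketched as follows. Since the patent's equation images (3)–(16) are not reproduced in this text, the motion model, Jacobian, and noise shapes below are plausible reconstructions consistent with the description (state (x, y, ψ), dead-reckoning prediction from wheel speed v and gyro rate w, full-pose GPS update), not the exact published matrices.

```python
import numpy as np

def predict(X, P, v, w, dt, er, ei):
    """Dead-reckoning prediction: propagate the pose (x, y, psi) with
    wheel speed v and gyro angular rate w over the interval dt."""
    x, y, psi = X
    td, tr = v * dt, w * dt                # distance advanced, angle turned (eqs. 8-9)
    if abs(tr) <= 1e-4:                    # small rotation: straight-line model
        x += td * np.cos(psi)
        y += td * np.sin(psi)
    else:                                  # large rotation: arc model
        x += td / tr * (np.sin(psi + tr) - np.sin(psi))
        y += td / tr * (np.cos(psi) - np.cos(psi + tr))
    psi += tr
    X_pred = np.array([x, y, psi])
    A = np.array([[1.0, 0.0, -td * np.sin(psi)],   # one-step transition (Jacobian)
                  [0.0, 1.0,  td * np.cos(psi)],
                  [0.0, 0.0,  1.0]])
    Q = np.diag([er**2, er**2, ei**2])             # system noise variance matrix
    P_pred = A @ P @ A.T + Q
    return X_pred, P_pred

def update(X_pred, P_pred, Z, ed, ey):
    """GPS update: fuse the GPS measurement Z = (x, y, psi)."""
    H = np.eye(3)                                  # GPS observes the full pose
    R = np.diag([ed**2, ed**2, ey**2])             # measurement noise variance matrix
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # filter gain
    X_new = X_pred + K @ (Z - H @ X_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return X_new, P_new
```

Between two GPS frames only `predict` runs at the IMU rate; when a GPS frame arrives, `update` fuses it with the dead-reckoned pose.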
3) Based on a multi-sensor fusion positioning module, scanning external environment information in real time through a laser radar and converting the external environment information into point cloud data;
4) dynamically splicing the point cloud data through a Lego-Loam module, and performing real-time loop detection on the point cloud data based on distance, specifically comprising
4.1) Point cloud segmentation module
First, a received frame of point cloud is projected onto a 1800 × 16 image (horizontal resolution 0.2°, 16-line lidar). Each received point represents a pixel of the image, whose value d_i is the Euclidean distance from the corresponding point D_i to the sensor.
After the projection is complete, a matrix is obtained; each column of the matrix is evaluated to estimate the ground, and ground points are extracted before segmentation. Points judged to represent the ground are labeled as ground points and are not used for segmentation.
Then, an image-based segmentation method is applied to the range image to group the points into clusters. Points of the same class share the same label (the ground is treated as a special class); meanwhile, to complete feature extraction quickly and reliably, clusters of fewer than 30 points are ignored.
After this step, only points that may represent large objects remain in the image, and three values are recorded for each remaining point: (1) its class label; (2) its coordinates (x, y) in the matrix; (3) its value d_i, used in the following modules.
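The projection described above can be sketched minimally as below. The 1800 × 16 resolution and 0.2° horizontal step come from the text; the ±15° vertical field of view is an assumed value typical of a 16-line lidar.

```python
import numpy as np

def project_to_range_image(points, h_res_deg=0.2, v_fov=(-15.0, 15.0), rows=16):
    """Project an (N, 3) point cloud onto a rows x 1800 range image; each
    pixel stores d_i, the Euclidean distance of point D_i to the sensor."""
    cols = int(360.0 / h_res_deg)                      # 1800 columns
    image = np.full((rows, cols), np.nan)              # NaN marks empty pixels
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d = np.linalg.norm(points, axis=1)                 # Euclidean distance d_i
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0     # horizontal angle in [0, 360)
    elevation = np.degrees(np.arcsin(z / np.maximum(d, 1e-9)))
    col = np.minimum((azimuth / h_res_deg).astype(int), cols - 1)
    v_step = (v_fov[1] - v_fov[0]) / (rows - 1)        # angular gap between rings
    row = np.clip(np.round((elevation - v_fov[0]) / v_step).astype(int), 0, rows - 1)
    image[row, col] = d
    return image
```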
4.2) feature extraction Module
First, feature points are extracted. To extract features uniformly from all directions, the image is horizontally divided into several equal sub-images. The points in each row of a sub-image are then sorted by their smoothness c, and a set threshold c_th is used to decide whether each point is a plane point or a corner point.
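The smoothness c is not defined explicitly in the text; the sketch below uses the roughness measure of the LOAM/LeGO-LOAM papers (normalized range difference of a point against its neighbours in the same row), with the threshold c_th as a free parameter.

```python
def classify_features(row_ranges, c_th=0.1, k=5):
    """Classify the interior points of one range-image row as plane points
    (smooth surface) or corner points (sharp edge), using a LOAM-style
    smoothness c computed over k neighbours on each side."""
    n = len(row_ranges)
    labels = []
    for i in range(k, n - k):
        neighbours = row_ranges[i - k:i] + row_ranges[i + 1:i + k + 1]
        # c is the normalized deviation of the point's range from its neighbourhood
        c = abs(sum(neighbours) - 2 * k * row_ranges[i]) / (2 * k * row_ranges[i])
        labels.append('corner' if c > c_th else 'plane')
    return labels
```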
4.3) laser Radar odometer Module
The lidar odometry module estimates the sensor motion between two consecutive scans. The transformation between the two scans is found by performing point-to-edge and point-to-plane scan matching.
Specifically, the module uses the pose output by equation (15) as the initialization state of the odometry, then computes the transformation between the two scans by point-to-edge and point-to-plane scan matching based on that initialization; the transformation is applied to the preliminary registration of the radar point cloud in the point cloud registration module.
4.4) Point cloud registration Module
In the point cloud registration module, the transformation between the two scans output by the radar odometry is first used to perform preliminary registration of the feature points with the surrounding point cloud map; the pose transformation is then further optimized, but this runs at a lower frequency. LeGO-LOAM opts to store each feature point set (a planar point set or a corner point set) instead of a single point cloud.
Let M^{t-1} be the set that stores all previous feature point sets. During scanning, each feature set in M^{t-1} is also associated with the pose of the sensor at that time. Thus, the surrounding point cloud map can be obtained from M^{t-1} by first selecting the feature point sets within the field of view of the sensor. For simplicity, the feature point sets whose associated sensor pose lies within 100 meters of the current sensor position are selected. The selected feature point sets are then transformed and fused into a single surrounding map.
The sensor pose of each feature point set can be modeled as a node in a pose graph, and the feature point set can be considered as the sensor measurement of that node. Since the pose estimation drift of the point cloud registration module is very low, no drift is assumed over short periods. In this way, a set of the most recent feature point sets can be selected to form the surrounding map, and the transformation obtained after L-M optimization can be used to add spatial constraints between the new node and the selected nodes. The drift of the module can be further eliminated by closed-loop detection: if a match is found between the current feature point set and a previous one using ICP, a new constraint is added. The estimated sensor pose is then updated by sending the pose graph to the optimization system.
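The 100-meter feature-set selection described above can be sketched as follows; the dictionary layout of a stored feature set (an associated pose plus already-transformed points) is an assumption for illustration.

```python
import math

def select_surrounding_sets(feature_sets, current_xy, radius=100.0):
    """Build the surrounding point cloud map from the feature point sets
    whose associated sensor pose lies within `radius` metres of the
    current sensor position."""
    cx, cy = current_xy
    selected = [s for s in feature_sets
                if math.hypot(s['pose'][0] - cx, s['pose'][1] - cy) <= radius]
    # fuse: concatenate the (already transformed) point lists into one map
    return [p for s in selected for p in s['points']]
```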
5) Compensating a loop detection mechanism in a Lego-Loam algorithm through a loop detection model based on a visual sensor, which specifically comprises the following steps:
5.1) Each frame of visual data is converted into a bag-of-words BRIEF descriptor B(p) by formula (17), and the descriptor is then associated with the corresponding point cloud data through timestamps, as the bag-of-words vector of that point cloud data. For each feature point p in the visual data, its descriptor is calculated according to the following formula:
where B_i(p) is the ith bit of the descriptor, I(·) is the intensity of the pixel in the smoothed image, a_i and b_i are the two-dimensional offsets of the ith test point relative to the patch center (the center of the considered pixel range), and L_b is the range of pixels considered when computing the descriptor.
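Formula (17) appears only as an image in the source; the standard BRIEF bit test it describes is B_i(p) = 1 if I(p + a_i) < I(p + b_i), else 0. A minimal sketch with a fixed random offset pattern follows; the 256-bit length and 31-pixel patch are assumed values.

```python
import numpy as np

def brief_descriptor(image, p, n_bits=256, patch=31, seed=0):
    """Compute an n_bits BRIEF descriptor at keypoint p = (row, col);
    a_i and b_i are fixed random 2-D offsets inside the patch (the pixel
    range L_b), drawn once from a seeded generator."""
    rng = np.random.default_rng(seed)          # fixed pattern: same tests every call
    half = patch // 2
    offsets = rng.integers(-half, half + 1, size=(n_bits, 2, 2))
    r, c = p
    bits = np.empty(n_bits, dtype=np.uint8)
    for i, ((ar, ac), (br, bc)) in enumerate(offsets):
        # bit i is 1 iff the intensity at offset a_i is less than at offset b_i
        bits[i] = 1 if image[r + ar, c + ac] < image[r + br, c + bc] else 0
    return bits
```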
5.2) The descriptor B(p) is converted into the bag-of-words space to form a bag-of-words vector describing one frame of visual image. To measure the similarity between two bag-of-words vectors v_1 and v_2, the invention calculates a score s(v_1, v_2) whose value lies in [0, 1].
5.3) The invention uses an image database to store and retrieve similar images. After acquiring an image I_t, it is converted into a bag-of-words vector V_t and stored in the image database. After a new frame is acquired and converted into a bag-of-words vector, whether it is similar to a previous image can be judged by querying and comparing against the database. The range of the scores given by equation (18) depends largely on the query image and the distribution of the words it contains. The invention therefore normalizes these scores by the best score that the vector V_t is expected to obtain in this sequence, yielding the normalized similarity score η
where v_{t-Δt} represents the bag-of-words vector of the previous image.
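Equations (18)–(19) are likewise given only as images in the source. The sketch below uses the L1 bag-of-words score popularized by DBoW2, s(v_1, v_2) = 1 − ½·‖v_1/‖v_1‖ − v_2/‖v_2‖‖_1, and normalizes a candidate's score by the score against the previous image's vector v_{t-Δt}; both formulas are assumptions consistent with the description, not the patent's exact equations.

```python
import numpy as np

def bow_score(v1, v2):
    """L1 similarity of two bag-of-words vectors; the result lies in [0, 1]."""
    u1 = v1 / np.linalg.norm(v1, 1)
    u2 = v2 / np.linalg.norm(v2, 1)
    return 1.0 - 0.5 * np.linalg.norm(u1 - u2, 1)

def normalized_score(v_t, v_candidate, v_prev):
    """Normalize a candidate's score by the expected best score in the
    sequence, taken here as the score of v_t against the previous image's
    vector; a loop closure is declared when this exceeds a threshold."""
    return bow_score(v_t, v_candidate) / bow_score(v_t, v_prev)
```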
η is computed, and if it is larger than the set value, the frame number of a previous point cloud frame similar to the current frame is returned to the lidar odometry module, so that the map is further corrected and optimized and the mapping accuracy is improved. The principle is that, after similar scenes are detected through the visual images, the current scene and the similar scenes can be re-optimized in the corresponding modules, improving the accuracy of the point cloud map. The accuracy can be improved because the laser point cloud lacks color information: whether a similar scene has been reached cannot be judged from the point cloud alone, so the optimization cannot be made in time when a similar scene is revisited.
The current point cloud and the previous similar point clouds are input into the lidar odometry module simultaneously; there may be several previous similar point clouds, and at least two point cloud maps are input into the odometry module to improve the map construction accuracy. Any existing technique may be adopted to optimize the final result by coordinating the two point cloud maps; that existing technique is not the core inventive point of the method, which instead lies in how to select historical point clouds similar to the current one.
6) Converting the 3D point cloud map into a 2D map through an Octomap module;
7) and storing the 2D map through a Mapsever module to form a long-range grid map required by practical application.
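Steps 6) and 7) convert the 3D point cloud map into a 2D grid map. OctoMap does this through an octree of occupancy probabilities; a minimal flat projection can be sketched as below, where the cell size, obstacle height band, and grid extent are illustrative assumptions.

```python
import numpy as np

def pointcloud_to_grid(points, cell=0.1, z_band=(0.2, 2.0), size=(200, 200)):
    """Mark a grid cell occupied when a point whose height z falls inside
    the obstacle band lands in it; the grid origin is the map centre."""
    grid = np.zeros(size, dtype=np.uint8)
    for x, y, z in points:
        if z_band[0] <= z <= z_band[1]:        # drop ground and overhead points
            r = int(y / cell) + size[0] // 2
            c = int(x / cell) + size[1] // 2
            if 0 <= r < size[0] and 0 <= c < size[1]:
                grid[r, c] = 1                 # occupied
    return grid
```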
Fig. 5 shows the results of the invention. Fig. 5a shows the mapping result of a conventional single lidar sensor, and Fig. 5b shows the mapping result of the multi-sensor-fusion long-range accurate full-scene mapping method provided by the invention. As can be seen from Fig. 5, the long-range mapping accuracy of the invention is improved in all respects compared with single-sensor mapping, with especially pronounced gains at the turns and edges of the map.
The following are system examples corresponding to the above method examples, and this embodiment can be implemented in cooperation with the above embodiments. The related technical details mentioned in the above embodiments are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the above-described embodiments.
The invention also provides a long-range farmland map construction system based on multi-sensor fusion, which comprises the following steps:
an initial module, used for continuously acquiring positioning data through a GPS module of the robot, and updating the positioning data with an inertial sensor of the robot in the period between two frames of positioning data, so as to obtain a state variable containing the real-time position and heading of the robot;
the scanning module is used for scanning the external environment information in real time through a laser radar of the robot and converting the external environment information into current point cloud data, meanwhile, the robot scans the RGB information of the external environment in real time through a visual sensor, compares the RGB information with historical RGB information collected historically, and feeds the historical point cloud data and the current point cloud data corresponding to the historical RGB information with similarity greater than a threshold value back to the image building module of the laser radar to obtain enhanced point cloud data under the current state variable;
and the mapping module is used for fusing the current point cloud data or the enhanced point cloud data under each state variable to obtain a point cloud map after the robot finishes the preset route in the farmland map construction task, and converting the point cloud map into a two-dimensional map as an execution result of the farmland map construction task.
In the long-range farmland map construction system based on multi-sensor fusion, the initial module comprises:
a module 11 for invoking the Kalman filter in the robot to select the state variable as the vehicle pose X_i = (x_i, y_i, ψ_i); the estimated state X_i at time t_i is driven by the system noise G_{i-1}, expressed in equation form as:
X_i = A_{i,i-1} X_{i-1} + W_{i-1} G_{i-1} (1)
where A_{i,i-1} is the one-step transition matrix from t_{i-1} to t_i, and W_{i-1} is the system noise driving matrix;
in the GPS positioning system, the measurement of X_i satisfies a linear relationship, with measurement equation:
Z_i = H_i X_i + V_i (2)
where H_i is the measurement matrix, V_i is the measurement noise sequence, and Z_i is the positioning data at time i;
a module 12 for setting Q_i as the variance matrix of the system noise sequence and R_i as the variance matrix of the measurement noise sequence, both non-negative definite matrices; when the Kalman filter is initialized, the position coordinates and heading of the first GPS frame are taken as the initial filter state X_0 = (x_0, y_0, ψ_0), with initial estimated mean square error matrix:
where ed is the distance estimation noise and ey is the heading estimation noise; when the velocity information of the inertial sensor and the angular velocity information of the gyroscope are updated, a predicted solution is obtained by dead reckoning, and the system noise variance matrix Q_{i-1} is:
where er is the wheel speed noise and ei is the angular velocity noise;
calculating in the time update the one-step transition matrix A_{i,i-1} from time i-1 to time i:
the system noise driving matrix W_{i-1} in the time update is calculated by equations (6) and (7); when the rotation between the current pose and the next pose is small, i.e. w*dt ≤ 0.0001:
when the rotation angle between the current pose and the next pose is large, i.e. w*dt > 0.0001:
where td is the distance the vehicle advances in the period and tr is the angle the vehicle turns in the period:
td = v*dt (8)
tr = w*dt (9)
when the GPS positioning data are updated, the GPS positioning result is fused with the current dead reckoning result;
first, the estimated noise parameters are obtained according to the current GPS working state, and the measurement noise variance matrix R_i of the system is calculated by equation (3); at this time, the measured value Z_i = (x_i, y_i, ψ_i) is the position coordinates and heading obtained by the GPS at that moment;
the filter gain K is calculated:
the state estimate is calculated from equations (14) and (15), in which the predicted term is the result obtained by dead reckoning;
finally, the mean square error matrix P is updated, where I is the 3 × 3 identity matrix:
combining equations (1) to (16), the state variable X_i of the robot is finally output.
The long-range farmland map building system based on multi-sensor fusion, wherein the scanning module comprises:
a module 21 for projecting the received frame of point cloud data onto a range image, each point in the range image representing a pixel of the image, where the value d_i of the pixel is the Euclidean distance from the target point D_i to the robot lidar; applying an image-based segmentation method to the range image to group the points into clusters, and recording for each point its class label, its coordinates (x, y) in the range image, and its value d_i;
a module 22 for dividing the range image equally into a plurality of sub-images, sorting the points in each row of a sub-image by their smoothness c, and judging, through a set threshold c_th, whether each point is a plane point or a corner point;
a module 23 for enabling the lidar odometry module to find the transformation between two scans by performing plane-point and corner-point scan matching;
a module 24 for registering the feature points with the surrounding point cloud map to further optimize the pose transformation, running at a lower frequency; LeGO-LOAM opts to store every feature point set rather than a single point cloud.
In the long-range farmland map construction system based on multi-sensor fusion, the scanning module converts the current RGB information into a bag-of-words BRIEF descriptor B(p) through the following formula, and then associates the descriptor with the corresponding point cloud data through timestamps, as the bag-of-words vector of the current point cloud data;
where B_i(p) is the ith bit of the descriptor, I(·) is the intensity of the pixel in the smoothed image, and a_i and b_i are the two-dimensional offsets of the ith test point relative to the patch center;
the descriptor B(p) is converted into the bag-of-words space to form a bag-of-words vector V_t describing the current visual image, and the similarity score s(v_1, v_2) of two bag-of-words vectors is measured in the bag-of-words space, wherein
the scores are normalized by the best score that the vector V_t is expected to obtain in this sequence, yielding the normalized similarity score η:
where v_{t-Δt} represents the bag-of-words vector of the previous image;
if η is larger than the threshold, the historical point cloud data corresponding to the historical RGB information whose similarity exceeds the threshold, together with the current point cloud data, are fed back to the mapping module of the lidar.
The invention also provides a storage medium for storing a program for executing any one of the long-range farmland map construction methods based on multi-sensor fusion.
The invention further provides a client used for the long-range farmland map construction system based on the multi-sensor fusion.
Claims (10)
1. A long-range farmland map construction method based on multi-sensor fusion is characterized by comprising the following steps:
step 1, after a farmland map construction task is started, a robot continuously acquires positioning data through a GPS module of the robot, and updates the positioning data by using an inertial sensor of the robot in a time period between two frames of positioning data so as to obtain a state variable containing a real-time position and a course of the robot;
step 2, the robot scans external environment information in real time through a laser radar of the robot and converts the external environment information into current point cloud data, meanwhile, the robot scans RGB information of the external environment in real time through a visual sensor, the RGB information is compared with historical RGB information collected historically, historical point cloud data and current point cloud data corresponding to the historical RGB information with similarity larger than a threshold value are fed back to a mapping module of the laser radar, and enhanced point cloud data under the current state variable are obtained;
and 3, repeating the step 1 and the step 2 until the robot completes a preset route in the farmland map construction task, fusing current point cloud data or enhanced point cloud data under each state variable to obtain a point cloud map, and converting the point cloud map into a two-dimensional map as an execution result of the farmland map construction task.
2. The long-range farmland map building method based on multi-sensor fusion of claim 1, wherein the step 1 comprises:
step 11, a Kalman filter in the robot selects the state variable as the vehicle pose X_i = (x_i, y_i, ψ_i), where x_i represents the x coordinate of the vehicle in the world coordinate system at time t_i, y_i represents the y coordinate, and ψ_i represents the heading of the vehicle at time t_i; the estimated state X_i at time t_i is driven by the system noise G_{i-1}, expressed in equation form as:
X_i = A_{i,i-1} X_{i-1} + W_{i-1} G_{i-1} (1)
where A_{i,i-1} is the one-step transition matrix from t_{i-1} to t_i, and W_{i-1} is the system noise driving matrix;
in the GPS positioning system, the measurement of X_i satisfies a linear relationship, with measurement equation:
Z_i = H_i X_i + V_i (2)
where H_i is the measurement matrix, V_i is the measurement noise sequence, and Z_i is the positioning data at time i;
step 12, setting Q_i as the variance matrix of the system noise sequence and R_i as the variance matrix of the measurement noise sequence, both non-negative definite matrices; when the Kalman filter is initialized, the position coordinates and heading of the first GPS frame are taken as the initial filter state X_0 = (x_0, y_0, ψ_0), with initial estimated mean square error matrix:
where ed is the distance estimation noise and ey is the heading estimation noise; when the velocity information of the inertial sensor and the angular velocity information of the gyroscope are updated, a predicted solution is obtained by dead reckoning, and the system noise variance matrix Q_{i-1} is:
where er is the wheel speed noise, ei is the angular velocity noise, and dt represents the unit time;
calculating a one-step transition matrix A from the time i-1 to the time i in the time updating processi,i-1:
The system noise driving matrix W in the time update process is calculated by equations (6) and (7)i-1When the pose is smaller between the current time and the later time, namely w × dt is less than or equal to 0.0001:
when the rotation angle between the current and later time poses is large, namely w dt is greater than 0.0001:
where td is the distance the vehicle is advancing within the time period, tr is the angle the vehicle has turned within the time period:
td=v*dt (8)
tr=w*dt (9)
when the GPS positioning data are updated, the GPS positioning result is fused with the current dead reckoning result;
first, the estimated noise parameters are obtained according to the current GPS working state, and the measurement noise variance matrix R_i of the system is calculated by equation (3); at this time, the measured value Z_i = (x_i, y_i, ψ_i) is the position coordinates and heading obtained by the GPS at that moment;
the filter gain K is calculated:
the state estimate is calculated from equations (14) and (15), in which the predicted term is the result obtained by dead reckoning;
finally, the mean square error matrix P is updated, where I is the 3 × 3 identity matrix:
combining equations (1) to (16), the state variable X_i of the robot is finally output.
3. The long-range farmland map building method based on multi-sensor fusion of claim 1 or 2, wherein the step 2 comprises:
step 21, projecting the received frame of point cloud data onto a range image, each point in the range image representing a pixel of the image, where the value d_i of the pixel is the Euclidean distance from the target point D_i to the robot lidar; applying an image-based segmentation method to the range image to group the points into clusters, and recording for each point its class label, its coordinates (x, y) in the range image, and its value d_i;
step 22, dividing the range image equally into a plurality of sub-images, sorting the points in each row of a sub-image by their smoothness c, and judging, through a set threshold c_th, whether each point is a plane point or a corner point;
step 23, the lidar odometry module finding the transformation between two scans by performing plane-point and corner-point scan matching;
step 24, registering the feature points with the surrounding point cloud map to further optimize the pose transformation, running at a lower frequency; LeGO-LOAM opts to store every feature point set rather than a single point cloud.
4. The long-range farmland map building method based on multi-sensor fusion according to claim 3, wherein the step 2 comprises:
converting the current RGB information into a bag-of-words BRIEF descriptor B(p) through the following formula, and then associating the descriptor with the corresponding point cloud data through timestamps, as the bag-of-words vector of the current point cloud data;
where B_i(p) is the ith bit of the descriptor, I(·) is the intensity of the pixel in the smoothed image, and a_i and b_i are the two-dimensional offsets of the ith test point relative to the patch center;
converting the descriptor B(p) into the bag-of-words space to form a bag-of-words vector V_t describing the current visual image, and measuring the similarity score s(v_1, v_2) of two bag-of-words vectors in the bag-of-words space, wherein
normalizing the scores by the best score that the vector V_t is expected to obtain in this sequence yields the normalized similarity score η:
where v_{t-Δt} represents the bag-of-words vector of the previous image;
if η is larger than the threshold, the historical point cloud data corresponding to the historical RGB information whose similarity exceeds the threshold, together with the current point cloud data, are fed back to the mapping module of the lidar.
5. A long-range farmland map construction system based on multi-sensor fusion, characterized by comprising:
an initial module, used for continuously acquiring positioning data through a GPS module of the robot, and updating the positioning data with an inertial sensor of the robot in the period between two frames of positioning data, so as to obtain a state variable containing the real-time position and heading of the robot;
the scanning module, used for scanning external environment information in real time through a laser radar of the robot and converting it into current point cloud data; meanwhile, the robot scans the RGB information of the external environment in real time through a visual sensor, compares it with historically collected RGB information, and feeds the historical point cloud data and the current point cloud data corresponding to historical RGB information with similarity greater than a threshold back to the mapping module of the laser radar, to obtain enhanced point cloud data under the current state variable;
and the mapping module, used for fusing the current point cloud data or the enhanced point cloud data under each state variable to obtain a point cloud map after the robot finishes the preset route of the farmland map construction task, and converting the point cloud map into a two-dimensional map as the execution result of the farmland map construction task.
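The final conversion step of the mapping module can be sketched as projecting the fused point cloud onto a planar occupancy grid. The resolution, the obstacle height band, and the cell-marking rule below are illustrative assumptions; the patent does not specify them:

```python
def cloud_to_grid(points, resolution=0.1, z_min=0.2, z_max=1.5):
    """Project 3D points (x, y, z) onto a 2D occupancy grid: cells hit by points
    inside the obstacle height band become occupied (1); all others stay free (0)."""
    if not points:
        return [], (0.0, 0.0)
    xs = [p[0] for p in points]; ys = [p[1] for p in points]
    ox, oy = min(xs), min(ys)                      # grid origin in world coordinates
    w = int((max(xs) - ox) / resolution) + 1
    h = int((max(ys) - oy) / resolution) + 1
    grid = [[0] * w for _ in range(h)]
    for x, y, z in points:
        if z_min <= z <= z_max:                    # keep only obstacle-height returns
            grid[int((y - oy) / resolution)][int((x - ox) / resolution)] = 1
    return grid, (ox, oy)
```

Ground returns fall below z_min and are dropped, so the resulting 2D map marks only traversal obstacles, which is what a path planner for the farmland robot would consume.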
6. The multi-sensor fusion-based long-range farmland mapping system of claim 5, wherein the initial module comprises:
a module 11, configured to invoke the Kalman filter in the robot and select the state variable as the vehicle pose X_i = (x_i, y_i, ψ_i); let the estimated state X_i at time t_i be driven by the system noise G_{i-1}, expressed in equation form as:
X_i = A_{i,i-1} X_{i-1} + W_{i-1} G_{i-1}    (1)
in the formula, A_{i,i-1} is the one-step transition matrix from time t_{i-1} to time t_i, and W_{i-1} is the system noise driving matrix;
in the GPS positioning system, the measurement of X_i satisfies a linear relationship, and the measurement equation is:
Z_i = H_i X_i + V_i    (2)
in the formula, H_i is the measurement matrix, V_i is the measurement noise sequence, and Z_i is the positioning data at the ith moment;
a module 12, for setting Q_i as the variance matrix of the system noise sequence and R_i as the variance matrix of the measurement noise sequence, both being non-negative definite matrices; when the Kalman filter is initialized, the position coordinates and heading of the first GPS frame are taken as the initial state value X_0 = (x_0, y_0, ψ_0) of the filter, with the initial estimated mean square error matrix:
wherein ed is the distance estimation noise and ey is the heading estimation noise; when the velocity information of the inertial sensor and the angular velocity information of the gyroscope are updated, a predicted solution is obtained by dead reckoning, and the system noise variance matrix Q_{i-1} is:
where er is the wheel speed noise and ei is the angular velocity noise;
calculating the one-step transition matrix A_{i,i-1} from time i-1 to time i in the time update process:
the system noise driving matrix W_{i-1} in the time update process is calculated by equations (6) and (7); when the rotation between the poses at the current and next time is small, namely w·dt ≤ 0.0001:
when the rotation angle between the poses at the current and next time is large, namely w·dt > 0.0001:
where td is the distance the vehicle advances within the time period, and tr is the angle the vehicle turns within the time period:
td = v · dt    (8)

tr = w · dt    (9)
When the GPS positioning data is updated, fusing a GPS positioning result with a current dead reckoning result;
firstly, the estimated noise parameters are obtained according to the current GPS working state, and the measurement noise variance matrix R_i of the system is calculated by equation (3); at this time, the measured value X_i = (x_i, y_i, ψ_i) is the positioning coordinate and heading obtained by the GPS at this moment;
calculating a filter gain K:
calculating the state estimate from equations (14) and (15), wherein the one-step prediction is the result obtained by dead reckoning;
finally, the mean square error matrix P is updated, where I is the 3 × 3 identity matrix:
combining equations (1) to (16), the state variable X_i of the robot is finally output.
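The predict/correct cycle of claim 6 can be sketched as below. For brevity this sketch keeps the covariance P and the noise matrices Q, R diagonal, so the gain reduces elementwise to K = P / (P + R); the noise magnitudes er, ei and the arc motion model for large rotations are illustrative assumptions, not the patent's exact matrices:

```python
import math

def predict(state, P, v, w, dt, er=0.05, ei=0.01):
    """Dead-reckoning time update: propagate the pose (x, y, psi) with wheel
    speed v and yaw rate w, then inflate the diagonal covariance by process noise."""
    x, y, psi = state
    td, tr = v * dt, w * dt                        # equations (8) and (9)
    if abs(tr) <= 1e-4:                            # small-rotation case (w*dt <= 0.0001)
        x += td * math.cos(psi)
        y += td * math.sin(psi)
    else:                                          # arc model for larger rotations
        r = td / tr
        x += r * (math.sin(psi + tr) - math.sin(psi))
        y += r * (math.cos(psi) - math.cos(psi + tr))
    psi += tr
    Q = [(er * dt) ** 2, (er * dt) ** 2, (ei * dt) ** 2]
    return (x, y, psi), [p + q for p, q in zip(P, Q)]

def correct(state, P, z, R):
    """GPS measurement update with H = I and diagonal R: K = P / (P + R)."""
    K = [p / (p + r) for p, r in zip(P, R)]
    new_state = tuple(s + k * (m - s) for s, k, m in zip(state, K, z))
    new_P = [(1 - k) * p for k, p in zip(K, P)]
    return new_state, new_P
```

Between two GPS frames only `predict` runs (IMU dead reckoning); when a GPS fix arrives, `correct` fuses it with the predicted pose, matching the update flow described above.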
7. The long-range farmland mapping system based on multi-sensor fusion of claim 5 or 6, wherein the scanning module comprises:
a module 21, for projecting a received frame of point cloud data onto a range image, each point in the point cloud representing a pixel of the image, the value d_i of the pixel being the Euclidean distance from the target point D_i to the robot's lidar; applying an image-based segmentation method to the range image to group the points into clusters, and recording for each point its class label, its coordinates (x, y) in the range image, and its value d_i;
a module 22, for dividing the range image equally into a plurality of sub-images, sorting the points in each row of a sub-image by their smoothness c, and judging, by setting a threshold c_th, whether each point is a plane point or a corner point;
a module 23, for enabling the lidar odometry module to find the transformation between two consecutive scans by scan matching of the plane points and corner points;
a module 24, for registering the feature points with the surrounding point cloud map to further optimize the pose transformation, but running at a lower frequency; LeGO-LOAM opts to store each feature point set rather than a single point cloud.
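The smoothness-based split in module 22 can be sketched with the LOAM-style smoothness measure (the normalized sum of range differences to a point's neighbors in the same scan row); the window size and threshold below are illustrative assumptions:

```python
def smoothness(ranges, i, half_window=5):
    """LOAM-style smoothness c of point i in one scan row: normalized sum of
    range differences to its neighbors. Small c -> plane point, large c -> corner."""
    n = len(ranges)
    lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
    neighbors = [ranges[j] for j in range(lo, hi) if j != i]
    if not neighbors or ranges[i] == 0:
        return float("inf")                 # invalid or isolated return
    diff = sum(d - ranges[i] for d in neighbors)
    return abs(diff) / (len(neighbors) * ranges[i])

def classify(ranges, c_th=0.1):
    """Split one row of the range image into plane points and corner points by c_th."""
    planes, corners = [], []
    for i in range(len(ranges)):
        (planes if smoothness(ranges, i) <= c_th else corners).append(i)
    return planes, corners
```

A flat wall produces near-zero c everywhere, while a depth discontinuity (e.g. a crop row edge) produces a large c at the break, which is why the threshold cleanly separates the two feature types.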
8. The long-range farmland map building system based on multi-sensor fusion of claim 7, wherein the scanning module is further used for converting the current RGB information into a BRIEF descriptor B(p) by the following formula, and then associating the descriptor with the corresponding point cloud data through its timestamp, as the bag-of-words vector of the current point cloud data;
B_i(p) = 1 if I(p + a_i) < I(p + b_i), and B_i(p) = 0 otherwise

in the formula, B_i(p) is the ith bit of the descriptor; I(·) is the intensity of the pixel in the smoothed image; a_i and b_i are the two-dimensional offsets of the ith test point relative to the patch center;

converting the descriptor B(p) into the bag-of-words space to form a bag-of-words vector V_t describing the current visual image, and measuring the similarity score s(v_1, v_2) of two bag-of-words vectors in the bag-of-words space, wherein

s(v_1, v_2) = 1 - (1/2) · | v_1/|v_1| - v_2/|v_2| |
normalizing the score by the best score that the vector V_t is expected to obtain in this sequence, yielding the normalized similarity score η:

η(v_t, v_t') = s(v_t, v_t') / s(v_t, v_{t-Δt})

in the formula, v_{t-Δt} represents the bag-of-words vector of the previous image;
if η is larger than the threshold value, feeding the historical point cloud data corresponding to the historical RGB information whose similarity exceeds the threshold, together with the current point cloud data, back to the mapping module of the laser radar.
9. A storage medium storing a program for executing the long-range farmland mapping method based on multi-sensor fusion according to any one of claims 1 to 4.
10. A client for the long-range farmland map building system based on multi-sensor fusion according to any one of claims 5 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210307473.0A CN114689035A (en) | 2022-03-25 | 2022-03-25 | Long-range farmland map construction method and system based on multi-sensor fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114689035A true CN114689035A (en) | 2022-07-01 |
Family
ID=82139872
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210307473.0A Pending CN114689035A (en) | 2022-03-25 | 2022-03-25 | Long-range farmland map construction method and system based on multi-sensor fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114689035A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024055412A1 (en) * | 2022-09-15 | 2024-03-21 | 深圳市正浩创新科技股份有限公司 | Map construction method and apparatus, and self-moving device |
CN116660923A (en) * | 2023-08-01 | 2023-08-29 | 齐鲁空天信息研究院 | Unmanned agricultural machinery library positioning method and system integrating vision and laser radar |
CN116660923B (en) * | 2023-08-01 | 2023-09-29 | 齐鲁空天信息研究院 | Unmanned agricultural machinery library positioning method and system integrating vision and laser radar |
CN117109583A (en) * | 2023-08-11 | 2023-11-24 | 淮阴工学院 | Positioning navigation method for agricultural transportation equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Shan et al. | Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping | |
CN109709801B (en) | Indoor unmanned aerial vehicle positioning system and method based on laser radar | |
CN110009739B (en) | Method for extracting and coding motion characteristics of digital retina of mobile camera | |
CN112634451B (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
CN109211241B (en) | Unmanned aerial vehicle autonomous positioning method based on visual SLAM | |
CN114689035A (en) | Long-range farmland map construction method and system based on multi-sensor fusion | |
CN109522832B (en) | Loop detection method based on point cloud segment matching constraint and track drift optimization | |
CN110044354A (en) | A kind of binocular vision indoor positioning and build drawing method and device | |
CN113706626B (en) | Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction | |
CN113485441A (en) | Distribution network inspection method combining unmanned aerial vehicle high-precision positioning and visual tracking technology | |
CN111366153B (en) | Positioning method for tight coupling of laser radar and IMU | |
WO2021021862A1 (en) | Mapping and localization system for autonomous vehicles | |
CN113985429A (en) | Unmanned aerial vehicle environment scanning and reconstructing method based on three-dimensional laser radar | |
CN115272596A (en) | Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene | |
CN114001733A (en) | Map-based consistency efficient visual inertial positioning algorithm | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN111812978B (en) | Cooperative SLAM method and system for multiple unmanned aerial vehicles | |
CN113763549A (en) | Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU | |
CN116878501A (en) | High-precision positioning and mapping system and method based on multi-sensor fusion | |
Zhang | LILO: A Novel Lidar–IMU SLAM System With Loop Optimization | |
CN115077519A (en) | Positioning and mapping method and device based on template matching and laser inertial navigation loose coupling | |
WO2022021661A1 (en) | Gaussian process-based visual positioning method, system, and storage medium | |
CN116577801A (en) | Positioning and mapping method and system based on laser radar and IMU | |
Li et al. | A collaborative relative localization method for vehicles using vision and LiDAR sensors | |
CN116977628A (en) | SLAM method and system applied to dynamic environment and based on multi-mode semantic framework |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||