CN114812581B - Cross-country environment navigation method based on multi-sensor fusion - Google Patents


Info

Publication number
CN114812581B
CN114812581B (granted; application CN202210714299.1A; also published as CN114812581A)
Authority
CN
China
Prior art keywords
point
path
global
node
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210714299.1A
Other languages
Chinese (zh)
Other versions
CN114812581A (en)
Inventor
梁华为 (Liang Huawei)
李志远 (Li Zhiyuan)
苏涛 (Su Tao)
章松 (Zhang Song)
王健 (Wang Jian)
Current Assignee
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority claimed from application CN202210714299.1A
Publication of CN114812581A
Application granted
Publication of CN114812581B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39: Determining a navigation solution using a satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42: Determining position
    • G01S19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a cross-country environment navigation method based on multi-sensor fusion. A global road network is established from a high-resolution satellite map, and a global path is planned with the Astar algorithm according to the coordinate information of a task file to obtain a global reference path. Vehicle-mounted multi-sensor data are collected with vehicle-mounted equipment and fused; the fused data are input into a SLAM algorithm, multiple constraints are added, and an accurate pose is solved within a graph-optimization framework. Multiple environment observations are stitched together, and a cross-country topographic map and a global occupancy grid map are constructed with a filter-based fusion algorithm. The vehicle's current drivable area is extracted in real time according to the obtained global reference path, and a locally optimal path is generated within it by an RRT path-planning algorithm based on multi-attribute evaluation for obstacle-avoidance navigation. By using multi-sensor fusion for accurate positioning and constructing a global occupancy grid map, the invention achieves rapid and accurate obstacle-avoidance navigation in unknown off-road environments.

Description

Cross-country environment navigation method based on multi-sensor fusion
Technical Field
The invention relates to the technical field of multi-sensor fusion navigation, in particular to a cross-country environment navigation method based on multi-sensor fusion.
Background
SLAM (simultaneous localization and mapping) technology senses the surrounding environment while estimating the vehicle's own pose and enables incremental map construction in unknown environments, markedly reducing an autonomous vehicle's dependence on satellite positioning and prior maps. In off-road environments, however, unstructured roads, irregular environmental features, and unstable satellite signals make autonomous vehicle positioning from a single sensor extremely challenging. As for map construction, off-road environmental information is cluttered and road boundaries are ragged, so passable areas are difficult to extract, which further increases the difficulty of building a navigation map.
Unlike two-dimensional navigation on urban roads, when an unmanned vehicle navigates autonomously in an off-road environment, common path-planning algorithms cannot generate paths quickly and effectively because off-road terrain lacks information such as lane lines and boundaries. RRT (rapidly-exploring random tree) is a path-planning algorithm that can quickly generate a safe, feasible path; it is widely applied in robotics and autonomous driving and has great potential for rapid and accurate obstacle-avoidance navigation on off-road terrain.
The defect of the prior art is that, owing to unstructured roads, a lack of navigation information, and unstable satellite signals, existing autonomous navigation of unmanned vehicles in cross-country environments cannot reliably build maps, complete navigation tasks, or achieve rapid and accurate obstacle avoidance.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art; to this end, a cross-country environment navigation method based on multi-sensor fusion is adopted to solve the problems described in the background.
A cross-country environment navigation method based on multi-sensor fusion specifically comprises the following steps:
S1, establishing a global road network based on the high-resolution satellite map, and performing global path planning with the Astar algorithm according to the coordinate information of the task file to obtain a global reference path;
S2, collecting vehicle-mounted multi-sensor data with vehicle-mounted equipment and fusing them, inputting the fused multi-modal data into a SLAM algorithm, adding multiple sensor constraints, and solving within a graph-optimization framework to obtain an accurate pose;
S3, stitching multiple environment observations using the obtained accurate pose, and constructing a cross-country topographic map and a global occupancy grid map based on a filter-fusion algorithm;
S4, extracting the current drivable area of the vehicle in real time according to the obtained global reference path;
and S5, generating a locally optimal path within the current drivable area using an RRT path-planning algorithm based on multi-attribute evaluation, for obstacle-avoidance navigation.
As a further aspect of the invention: the specific steps of step S1 include:
firstly, a task area of an autonomous unmanned vehicle is preset, and a task map corresponding to the task area is obtained according to a picture based on a high-resolution satellite map;
selecting a reference point GPS coordinate according to the obtained task map, extracting all road information in the task map and establishing a global road network;
according to the coordinates of a starting point, a task point and an end point in the navigation task file, carrying out global path planning by using an Astar algorithm to obtain node topology information from the starting point to the end point in a global road network;
and splicing the initial global path according to the topological relation, and smoothing the initial global path by using a Bezier curve fitting algorithm to obtain a global reference path in the task area.
As a further aspect of the invention: the specific steps of collecting vehicle-mounted multi-sensor data with vehicle-mounted equipment for data fusion in step S2 include:
multi-modal data are collected and read in real time by a vehicle-mounted industrial computer; specifically, a 3D lidar collects surrounding-environment information as 3D point cloud data, an IMU collects the vehicle's high-frequency linear acceleration and angular velocity, a wheel odometer collects the vehicle speed, and a GPS collects the vehicle's real-time longitude and latitude coordinates;
the curvature of each laser point is calculated, each point is assigned line or plane feature information according to its curvature, and the point cloud is clustered to eliminate discrete points; pre-integration observations are constructed by batch-processing the acceleration and angular velocity collected by the IMU and the vehicle-speed data collected by the wheel odometer; the real-time longitude and latitude coordinates are converted into spatial position coordinates;
the preprocessed multi-modal data are then fused.
As a further scheme of the invention: the specific steps in step S2 of inputting the fused multi-modal data into the SLAM algorithm, adding multiple sensor constraints, and solving within a graph-optimization framework to obtain an accurate pose include:
constructing lidar point-cloud residuals, IMU and wheel-odometer pre-integration residuals, and GPS prior residuals from the obtained multi-modal data, and establishing a nonlinear least-squares problem;
solving this problem within the graph-optimization framework to obtain an accurate vehicle pose, and finally stitching the point clouds according to the vehicle pose to obtain an environmental point cloud map.
As a further aspect of the invention: the specific steps of step S3 include:
acquiring single-frame laser radar point cloud data, marking obstacles according to gradients, acquiring passable area data in a single-frame grid map, deducing unobservable point cloud data by using a Gaussian process, and updating single-frame point cloud observation to a global occupancy grid map through binary Bayes filtering;
and marking the elevation data of the single-frame elevation grid map according to the maximum height, and updating the global elevation grid map by using Kalman filtering.
As a further aspect of the invention: the specific steps of step S4 include:
the vehicle drives along the obtained global reference path, the passable road-surface area is detected from the point cloud data, and discontinuous road surfaces are discarded according to a distance continuity index and an angle continuity index;
the distance continuity index c_d is:
(formula presented as an image in the original publication)
where n is the number of points to be considered before and after the current iteration point, and k runs from x−n to x+n, so the index takes into account the n points before and the n points after the current point p_x; p_k is the k-th point and p_{k+1} the (k+1)-th point;
the angle continuity index c_a is:
(formula presented as an image in the original publication)
where n is the number of points to be considered before and after the current iteration point, p_x is the point at the current iteration position, and p_{x−k} and p_{x+k} are the points k positions before and after it;
when c_d is less than the distance continuity threshold c_d^th and c_a is less than the angle continuity threshold c_a^th, the ground points are continuous and are included in the drivable region; the region of extracted road points is recorded as the drivable region Ω1;
because the radar cannot directly detect all drivable road surfaces, the real-time 3D point cloud around the vehicle is first clustered by the Euclidean distance method to obtain the obstacle clusters around the vehicle; with these obstacle clusters as barriers, a flood-fill algorithm diffuses the area ahead of the vehicle to both sides, perpendicular to the global reference path, forming a drivable region Ω2;
finally, the union Ω1 ∪ Ω2 of the two drivable regions is taken as the final drivable region.
As a further aspect of the invention: the specific steps of step S5 include:
according to the obtained drivable region Ω, taken as the feasible domain Φ for path planning, the longitudinal length L_Φ of Φ along the global reference path direction is calculated; then, according to the length L_Ω of the drivable region and the minimum planning distance L_p,min, a preview length L_p for path planning is selected, where L_p,min ≤ L_p < L_Ω;
according to the preview length L_p, the corresponding global path coordinate point on the global reference path is found and used as the preview point for path planning; exploring around this preview point as a circle centre, the largest circle C_p,goal contained in the planning feasible domain Φ is computed;
the random tree T and its growth step size ρ are initialised, with the centre of the vehicle's rear axle as the initial node q_start of the random tree T; N coordinate points are randomly selected inside the circle C_p,goal as target nodes q_goal of the random tree, and collision-free paths are computed;
for the random tree corresponding to each path, its nodes are used as control points of a Bezier curve and the path is smoothed with the Bezier curve to obtain a smooth path; smoothing with a curve-fitting algorithm effectively optimises path quality;
for the candidate paths, a multi-attribute evaluation function with reasonable weights selects the minimum-cost path as the current optimal path, the function comprising a lateral-offset cost, a candidate-path curvature cost, and a candidate-path-to-obstacle distance cost.
As a further aspect of the invention: the specific steps of computing a collision-free path to a target node q_goal of the random tree include:
for each target node q_i,goal, a random node q_i,rand is obtained by sampling with a specific probability from the planning feasible domain Φ; the node q_i,near nearest to q_i,rand is selected, and the angle θ_i,near formed between q_i,near and its parent node and the angle θ_i,rand→near formed between q_i,near and the random node q_i,rand are calculated (taking the connection angle between nodes into account when generating a new node makes the RRT-generated path gentler);
it is judged whether |θ_i,near − θ_i,rand→near| is less than the node angle threshold θ_th; if it is greater than θ_th, another nearest node is sought until the threshold is satisfied, after which the new node q_i,new is computed, consistently with the defined growth step size ρ (the standard RRT steering rule), as:
q_i,new = q_i,near + ρ · (q_i,rand − q_i,near) / ‖q_i,rand − q_i,near‖;
if there is no obstacle on the line segment between q_i,near and q_i,new, q_i,near is taken as the parent node of q_i,new, q_i,new is added to the random tree as a child of q_i,near, and random sampling continues to expand the random tree T; otherwise, the current new node q_i,new is abandoned and random sampling is repeated;
the search loops in this way; when a new node q_i,new of the random tree T expands to reach the vicinity of the target point q_i,goal, tracing back from q_i,goal through the corresponding parent nodes to the root node q_start yields a collision-free path connecting q_start to q_i,goal.
As a further aspect of the invention: the specific steps of adopting the multi-attribute evaluation function and setting reasonable weights include:
the lateral offset cost Q_d is:
(formula presented as an image in the original publication)
the curvature cost Q_k of the candidate path is:
(formula presented as an image in the original publication)
the distance cost Q_obs from the candidate path to the obstacles is:
(formulas presented as images in the original publication)
the total cost is:
J_cost = ω1·Q_d + ω2·Q_k + ω3·Q_obs
where d_i is the lateral offset of the i-th discrete candidate path point relative to the reference path; k_i is the curvature value of the i-th discrete candidate path point; l_min and l_max are the minimum and maximum lateral safety distances of an obstacle relative to the candidate path; l_obs,i is the lateral distance of obstacle i relative to the candidate path; d_obs,i is the distance cost of obstacle i relative to the candidate path; and ω1, ω2, ω3 are the weight parameters of the lateral-offset cost, the candidate-path curvature cost, and the candidate-path-to-obstacle distance cost, respectively.
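As a sketch of how this evaluation could be aggregated in code: since the per-term formulas are reproduced only as images in this publication, the quadratic sums assumed for Q_d and Q_k, the plain sum for Q_obs, and all names below are illustrative assumptions, not the patent's exact forms:

```python
def path_cost(d, k, d_obs, w1, w2, w3):
    """Multi-attribute cost of one candidate path.

    d      : lateral offsets d_i of the discrete candidate path points
    k      : curvature values k_i of the discrete candidate path points
    d_obs  : per-obstacle distance costs d_obs,i
    w1..w3 : weights omega_1..omega_3 of the three attribute costs
    The quadratic/linear aggregation below is an assumed form; the
    patent gives the exact Q_d, Q_k, Q_obs formulas only as images.
    """
    Q_d = sum(di * di for di in d)      # lateral-offset cost
    Q_k = sum(ki * ki for ki in k)      # curvature cost
    Q_obs = sum(d_obs)                  # obstacle-distance cost
    return w1 * Q_d + w2 * Q_k + w3 * Q_obs

def select_best(paths, w1, w2, w3):
    """Index of the minimum-cost candidate, i.e. the current optimal path."""
    return min(range(len(paths)),
               key=lambda i: path_cost(*paths[i], w1, w2, w3))
```

With the weights fixed, the candidate whose attribute tuple minimises `path_cost` is chosen as the current optimal path.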
Compared with the prior art, the invention has the following technical effects:
by adopting the technical scheme, the automatic driving vehicle can obtain accurate and robust positioning action in a complex cross-country environment by extracting sensor data such as IMU, wheel type odometer, GPS, laser radar and the like and adding constraint, and is suitable for various scenes; by adopting a construction algorithm of a two-dimensional grid map, the point cloud observation of a single frame is updated into a global map through Bayesian filtering, so that the perception dead zone is obviously reduced, and the map precision is improved; the sampling is only carried out in the planned travelable area by utilizing the improved RRT algorithm, the sampling space of the RRT is simplified, and the sampling efficiency is greatly improved.
Drawings
The following detailed description of embodiments of the invention refers to the accompanying drawings in which:
FIG. 1 is a flow chart of an off-road environmental navigation method of some embodiments disclosed herein;
FIG. 2 is a block diagram of a system architecture for SLAM and navigation with multi-sensor fusion in accordance with some embodiments disclosed herein;
FIG. 3 is a graph of the results of global path planning for some embodiments disclosed herein;
fig. 4 is an environmental point cloud map constructed during SLAM according to some embodiments disclosed herein;
FIG. 5 is a constructed global occupancy grid map of some embodiments of the present disclosure;
FIG. 6 is a constructed global elevation grid map of some embodiments of the present disclosure;
FIG. 7 is a schematic view of a travelable region for an extracted current location of a vehicle according to some embodiments of the present disclosure;
fig. 8 is a flow chart of an RRT path planning algorithm of some embodiments disclosed herein;
fig. 9 is a schematic diagram of an RRT node expansion of some embodiments disclosed herein;
FIG. 10 is a schematic diagram of real-time path planning in an off-road environment in accordance with some embodiments disclosed herein.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, in an embodiment of the present invention, a cross-country environment navigation method based on multi-sensor fusion includes the following specific steps:
FIG. 2 is a block diagram illustrating the architecture of a system for SLAM and navigation with multi-sensor fusion in accordance with the present invention;
step S1, establishing a global road network based on the high-resolution satellite map, and planning a global path by using an Astar algorithm according to the coordinate information of the task file to obtain a global reference path, wherein the specific steps comprise:
firstly, a task area of an autonomous unmanned vehicle is preset, and a task map corresponding to the task area is obtained according to a picture based on a high-resolution satellite map;
selecting a reference point GPS coordinate according to the obtained task map, extracting all road information in the task map and establishing a global road network;
according to the coordinates of a starting point, a task point and an end point in the navigation task file, performing global path planning by using an Astar algorithm to obtain node topology information from the starting point to the end point in a global road network;
and then splicing the initial global path according to the topological relation, and smoothing the initial global path by using a Bezier curve fitting algorithm to obtain a global reference path in the task area, as shown in FIG. 3, which is a result diagram of global path planning.
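The global planning described above (road-network search with Astar, then smoothing) can be sketched as follows; the dictionary-based road-network representation and every name in this snippet are illustrative assumptions rather than the patent's data format:

```python
import heapq
import itertools
import math

def astar(nodes, edges, start, goal):
    """A*-style search over a road-network graph.

    nodes: {id: (x, y)} planar coordinates of road-network nodes.
    edges: {id: [(neighbor_id, cost), ...]} directed adjacency list.
    Returns the node-id sequence from start to goal, or None.
    """
    def h(n):  # admissible straight-line heuristic to the goal
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return math.hypot(x2 - x1, y2 - y1)

    tie = itertools.count()             # stable tie-breaking in the heap
    frontier = [(h(start), 0.0, next(tie), start, None)]
    parent = {}
    best_g = {start: 0.0}
    while frontier:
        _, g, _, cur, par = heapq.heappop(frontier)
        if cur in parent:
            continue                    # node already expanded
        parent[cur] = par
        if cur == goal:                 # walk parents back to the start
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for nxt, w in edges.get(cur, []):
            ng = g + w
            if nxt not in parent and ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None
```

The returned node sequence corresponds to the "node topology information from the starting point to the end point" that is then spliced and smoothed.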
S2, collecting vehicle-mounted multi-sensor data with vehicle-mounted equipment and fusing them, inputting the fused multi-modal data into a SLAM algorithm, adding multiple sensor constraints, and solving within a graph-optimization framework to obtain an accurate pose;
In a specific embodiment, the specific steps of collecting vehicle-mounted multi-sensor data with vehicle-mounted equipment for data fusion in step S2 include:
firstly, multi-modal data are collected and read in real time by the vehicle-mounted industrial computer; specifically, the 3D lidar collects surrounding-environment information as 3D point cloud data, the IMU collects the vehicle's high-frequency linear acceleration and angular velocity, the wheel odometer collects the vehicle speed, and the GPS collects the vehicle's real-time longitude and latitude coordinates;
the collected multi-modal data are preprocessed as follows:
3D point cloud data of the 3D lidar: the curvature of each laser point is calculated, each point is assigned line or plane feature information according to its curvature, and the point cloud is clustered to remove discrete points;
high-frequency linear acceleration and angular velocity of the IMU, and vehicle speed of the wheel odometer: pre-integration observations are constructed by batch processing;
real-time longitude and latitude coordinates of the GPS: the longitude and latitude coordinates are converted into spatial position coordinates;
the preprocessed multi-modal data are then fused, and the fused data serve as the input of the subsequent SLAM algorithm.
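One common way to realise the longitude/latitude-to-spatial-coordinate conversion mentioned above is a local planar projection about the reference point; the patent does not name a specific projection, so the equirectangular approximation below is an assumption:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius in metres

def lla_to_local(lat, lon, ref_lat, ref_lon):
    """Convert GPS longitude/latitude (degrees) to local planar
    (east, north) coordinates in metres, centred on a reference point,
    via an equirectangular approximation (an assumed choice; adequate
    over the few-kilometre span of a task area)."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    east = EARTH_RADIUS * d_lon * math.cos(math.radians(ref_lat))
    north = EARTH_RADIUS * d_lat
    return east, north
```

For larger areas a UTM-style projection would be the usual substitute, but the interface (latitude/longitude in, planar metres out) stays the same.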
In a specific embodiment, the specific steps in step S2 of inputting the fused multi-modal data into the SLAM algorithm, adding multiple sensor constraints, and solving within a graph-optimization framework to obtain an accurate pose include:
constructing lidar point-cloud residuals, IMU and wheel-odometer pre-integration residuals, and GPS prior residuals from the obtained multi-modal data, and establishing a nonlinear least-squares problem;
solving this graph-optimization problem yields an accurate vehicle pose; finally, the point clouds are stitched according to the vehicle poses to obtain an environmental point cloud map, as shown in fig. 4. The rectangular boxes marked A and B in the figure are roadside pole-shaped markers; A and B in the uppermost panel show these markers in the global point cloud map. Of the four lower panels, the two on the right show pole markers A and B as seen by the vehicle-mounted camera, and the two on the left show their corresponding appearance in the three-dimensional point cloud.
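The residual construction and least-squares solving above can be illustrated on a toy problem. The sketch below fuses odometry-style relative constraints and GPS-style priors over a 1-D pose chain by forming and solving the weighted normal equations; the real system optimises 6-DoF poses with lidar, pre-integration, and GPS residuals, and all values and weights here are illustrative assumptions:

```python
def gaussian_solve(A, b):
    """Solve the square system A x = b by Gaussian elimination
    with partial pivoting (enough for this tiny illustration)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [vr - f * vc for vr, vc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy 1-D pose chain x0, x1, x2 with odometry residuals x_j - x_i - d
# and GPS prior residuals x_i - z (illustrative values and weights).
odom = [(0, 1, 1.0), (1, 2, 1.1)]   # (i, j, measured displacement)
gps = [(0, 0.0), (2, 2.0)]          # (i, measured position)
w_odom, w_gps = 10.0, 1.0           # residual weights

rows = []                            # residual rows: (coefficients, rhs, weight)
for i, j, d in odom:
    coeff = [0.0, 0.0, 0.0]
    coeff[i], coeff[j] = -1.0, 1.0
    rows.append((coeff, d, w_odom))
for i, z in gps:
    coeff = [0.0, 0.0, 0.0]
    coeff[i] = 1.0
    rows.append((coeff, z, w_gps))

# Weighted normal equations (A^T W A) x = A^T W b of the least-squares
# problem (already linear in this toy case).
dim = 3
AtA = [[0.0] * dim for _ in range(dim)]
Atb = [0.0] * dim
for coeff, rhs, w in rows:
    for a in range(dim):
        Atb[a] += w * coeff[a] * rhs
        for b2 in range(dim):
            AtA[a][b2] += w * coeff[a] * coeff[b2]

pose = gaussian_solve(AtA, Atb)      # fused pose estimates
```

The heavily weighted odometry keeps the relative displacements close to their measurements while the GPS priors anchor the chain, which is exactly the trade the graph-optimization back end arbitrates.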
S3, stitching multiple environment observations using the obtained accurate pose, and constructing a cross-country topographic map and a global occupancy grid map based on a filter-fusion algorithm, which specifically comprises the following steps:
as shown in fig. 5, a constructed global occupancy grid map is illustrated.
Acquiring single-frame laser radar point cloud data, marking obstacles according to gradients, acquiring passable area data in a single-frame grid map, deducing unobservable point cloud data by using a Gaussian process, and updating single-frame point cloud observation to a global occupancy grid map through binary Bayes filtering;
as shown in fig. 6, illustrated as a constructed global elevation grid map.
And marking the elevation data of the single-frame elevation grid map according to the maximum height, and updating the global elevation grid map by using Kalman filtering.
Step S4, extracting the current drivable area of the vehicle in real time according to the obtained global reference path, the specific steps comprising:
the vehicle drives forward along the obtained global reference path, the passable road-surface area is detected from the point cloud data, and discontinuous road surfaces are discarded according to the distance continuity index c_d and the angle continuity index c_a;
the distance continuity index c_d is:
(formula presented as an image in the original publication)
where n is the number of points to be considered before and after the current iteration point, and k runs from x−n to x+n, so the index takes into account the n points before and the n points after the current point p_x; p_k is the k-th point and p_{k+1} the (k+1)-th point;
the angle continuity index c_a is:
(formula presented as an image in the original publication)
where n is the number of points to be considered before and after the current iteration point, p_x is the point at the current iteration position, and p_{x−k} and p_{x+k} are the points k positions before and after it;
in particular, when c_d is less than the distance continuity threshold c_d^th and c_a is less than the angle continuity threshold c_a^th, the ground points are continuous and are included in the drivable region; the region of extracted ground points is recorded as the drivable region Ω1;
considering that the radar cannot directly detect all drivable road surfaces because of occlusion by obstacles and the like, the real-time 3D point cloud around the vehicle is first clustered by a Euclidean distance method to obtain the obstacle clusters around the vehicle;
with the obstacle clusters as barriers, a flood-fill algorithm diffuses the area ahead of the vehicle to both sides, perpendicular to the global reference path, forming the drivable region Ω2 of the vehicle;
finally, the union Ω1 ∪ Ω2 of the two drivable regions is taken as the final drivable region; fig. 7 illustrates the drivable-region extraction results of the vehicle on straight and curved roads in this embodiment.
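The distance and angle continuity gates above can be sketched as follows. Because c_d and c_a appear only as images in this publication, the concrete forms below (mean consecutive-point gap over the 2n-point window, and mean bending angle at p_x measured as deviation from a straight line) are assumptions:

```python
import math

def distance_continuity(pts, x, n):
    """Assumed form of c_d: mean gap between consecutive 2-D points
    in the window of n points before and after index x."""
    gaps = [math.dist(pts[k], pts[k + 1]) for k in range(x - n, x + n)]
    return sum(gaps) / len(gaps)

def angle_continuity(pts, x, n):
    """Assumed form of c_a: mean bending angle at p_x between the rays
    to p_{x-k} and p_{x+k}, zero when the points are collinear."""
    px = pts[x]
    angles = []
    for k in range(1, n + 1):
        a = (pts[x - k][0] - px[0], pts[x - k][1] - px[1])
        b = (pts[x + k][0] - px[0], pts[x + k][1] - px[1])
        cos_th = ((a[0] * b[0] + a[1] * b[1])
                  / (math.hypot(*a) * math.hypot(*b)))
        # pi minus the ray angle: 0 for a straight line, large for a kink
        angles.append(math.pi - math.acos(max(-1.0, min(1.0, cos_th))))
    return sum(angles) / len(angles)

def is_continuous(pts, x, n, c_th_d, c_th_a):
    """Both indices must fall below their thresholds, as in the text."""
    return (distance_continuity(pts, x, n) < c_th_d
            and angle_continuity(pts, x, n) < c_th_a)
```

Points passing both gates would be accumulated into the drivable region Ω1.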
Step S5, a locally optimal path is generated within the current drivable area using an RRT path-planning algorithm based on multi-attribute evaluation, for obstacle-avoidance navigation; the specific steps are as follows:
first, the drivable region Ω obtained in step S4 is taken as the feasible domain Φ for path planning, and the longitudinal length L_Φ of Φ along the global reference path direction is calculated; then, according to the length L_Ω of the drivable region and the minimum planning distance L_p,min, a preview length L_p for path planning is selected such that L_p,min ≤ L_p < L_Ω;
according to the preview length L_p, the corresponding global path coordinate point on the global reference path is found and used as the preview point for path planning; exploring around this preview point as a circle centre, the largest circle C_p,goal contained in the planning feasible domain Φ is computed;
as shown in the flow chart of the RRT path-planning algorithm in FIG. 8, the random tree T and its growth step size ρ are first initialised, with the centre of the vehicle's rear axle as the initial node q_start of the random tree T; N coordinate points are randomly selected inside the circle C_p,goal as target nodes q_goal of the random tree, and collision-free paths are computed;
the specific calculation steps are as follows:
as shown in fig. 9, an RRT node expansion diagram is shown, specifically, when an RRT is used to generate a new node, a connection angle between nodes is considered, so that a path generated by the RRT is smoother.
For each target nodeq i goal Planning feasible domains with specific probabilitiesΦObtaining random node by middle random samplingq i rand Selecting a random nodeq i rand Nearest node of (2)q i near Calculating the nearest nodeq i near Parent node and nearest node ofq i near Angle formed therebetweenθ i near And nearest nodeq i near And random nodeq i rand Angle formed therebetweenθ i randnear
It is then determined whether |θ^i_near − θ^i_randnear| is less than the node angle threshold θ_th; if it is greater than θ_th, another nearest node is sought until the node angle threshold θ_th is satisfied, and the new node q^i_new is calculated as:
$$q^{i}_{new} = q^{i}_{near} + \rho \cdot \frac{q^{i}_{rand} - q^{i}_{near}}{\left\| q^{i}_{rand} - q^{i}_{near} \right\|}$$
If there is no obstacle on the line connecting q^i_near and q^i_new, then q^i_near is taken as the parent node of q^i_new and q^i_new as a child node of q^i_near, and random sampling continues to expand the random tree T; otherwise, the current new node q^i_new is discarded and random sampling is repeated;
The search is repeated in cycles; when a new node q^i_new of the random tree T expands to reach the vicinity of the target point q^i_goal, tracing backward from the target node q^i_goal through the corresponding parent nodes to the root node q_start yields a collision-free path connecting q_start to q^i_goal.
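The expansion loop described above can be sketched as follows (a simplified 2D version; the helper names, the obstacle-free sampler, and the omission of goal biasing and the goal-reached test are assumptions made for brevity):

```python
import math
import random

def rrt_extend(tree, parents, rho, theta_th, sample, collision_free, max_iters=2000):
    """Grow an angle-constrained RRT over 2D points (simplified sketch).

    tree     : list of (x, y) nodes; tree[0] is q_start.
    parents  : dict mapping child index -> parent index.
    rho      : growth step size.
    theta_th : node angle threshold (radians).
    sample() : returns a random (x, y) inside the feasible domain Phi.
    collision_free(p, q) : True if the segment p-q avoids obstacles.
    """
    for _ in range(max_iters):
        q_rand = sample()
        # candidate nearest nodes, closest first ("find another nearest node")
        order = sorted(range(len(tree)), key=lambda i: math.dist(tree[i], q_rand))
        for i in order:
            q_near = tree[i]
            d = math.dist(q_near, q_rand)
            if d == 0.0:
                continue
            th_rn = math.atan2(q_rand[1] - q_near[1], q_rand[0] - q_near[0])
            if i in parents:  # compare with the angle of the edge to the parent
                p = tree[parents[i]]
                th_n = math.atan2(q_near[1] - p[1], q_near[0] - p[0])
                dth = abs(th_n - th_rn)
                if min(dth, 2.0 * math.pi - dth) >= theta_th:
                    continue  # angle threshold violated: try next nearest node
            # steer one step of length rho from q_near toward q_rand
            q_new = (q_near[0] + rho * (q_rand[0] - q_near[0]) / d,
                     q_near[1] + rho * (q_rand[1] - q_near[1]) / d)
            if collision_free(q_near, q_new):
                tree.append(q_new)
                parents[len(tree) - 1] = i
            break  # on collision: discard q_new and sample again
    return tree, parents
```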
For the random tree corresponding to each path, the nodes of the random tree are used as control points of a Bezier curve, and smoothing with the Bezier curve yields a smooth path; the N paths form a candidate path cluster. Smoothing the paths with a curve fitting algorithm effectively optimizes path quality.
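As a sketch of this smoothing step, the tree nodes of one candidate path can be fed to a de Casteljau evaluator as Bezier control points (a minimal illustration, not the patent's exact fitting routine):

```python
def bezier(control_points, n_samples=50):
    """Smooth a path by treating its nodes as Bezier control points,
    evaluated by de Casteljau's repeated linear interpolation."""
    def point(t):
        pts = list(control_points)
        while len(pts) > 1:  # collapse the control polygon one level per pass
            pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                   for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        return pts[0]
    return [point(i / (n_samples - 1)) for i in range(n_samples)]
```

The curve interpolates the first and last control points and stays inside the convex hull of the tree nodes, which is why node-as-control-point smoothing preserves collision clearance when the corridor around the raw path is convex.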
For each candidate path, a multi-attribute evaluation function with reasonable weights is used to select the path with the minimum cost as the current optimal path, where the multi-attribute evaluation function comprises the lateral offset cost, the candidate path curvature cost, and the distance cost from the candidate path to obstacles.
As shown in fig. 10, a schematic diagram of real-time path planning in an off-road environment in the present embodiment is shown.
Specifically, the steps of applying the multi-attribute evaluation function with reasonable weights include:
The lateral offset cost Q_d is:
$$Q_d = \frac{1}{n}\sum_{i=1}^{n} \left| d_i \right|$$
Q_d, the lateral offset cost, represents the degree to which the candidate path deviates from the reference path;
The curvature cost Q_k of the candidate path is:
$$Q_k = \frac{1}{n}\sum_{i=1}^{n} \left| k_i \right|$$
Q_k, the curvature cost of the candidate path, selects the path with the minimum average curvature;
The distance cost Q_obs from the candidate path to obstacles is:
$$Q_{obs} = \sum_{i} d_{obs,i}$$

$$d_{obs,i} = \begin{cases} 1, & l_{obs,i} < l_{min} \\ \dfrac{l_{max} - l_{obs,i}}{l_{max} - l_{min}}, & l_{min} \le l_{obs,i} \le l_{max} \\ 0, & l_{obs,i} > l_{max} \end{cases}$$
the total cost is:
$$J_{cost} = \omega_1 Q_d + \omega_2 Q_k + \omega_3 Q_{obs}$$
where d_i is the lateral offset of the i-th discrete candidate path point relative to the reference path, k_i is the curvature value of the i-th discrete candidate path point, l_min and l_max are respectively the minimum and maximum lateral safety distances of an obstacle relative to the candidate path, l_obs,i is the lateral distance of obstacle i relative to the candidate path, d_obs,i is the distance cost of obstacle i relative to the candidate path, and ω_1, ω_2, ω_3 are the weight parameters of the lateral offset cost, the candidate path curvature cost, and the candidate-path-to-obstacle distance cost, respectively.
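Because the cost formulas are rendered as images in the source, the following sketch assumes averaged absolute offset and curvature terms and a linear obstacle-distance ramp between l_min and l_max (the function and parameter names are illustrative):

```python
def path_cost(d, k, l_obs, l_min, l_max, w=(1.0, 1.0, 1.0)):
    """Multi-attribute cost of one candidate path (hedged sketch:
    averages for Q_d and Q_k, linear ramp for the obstacle term).

    d     : lateral offsets d_i of the discrete path points.
    k     : curvature values k_i of the discrete path points.
    l_obs : lateral distances l_obs,i of obstacles to the path.
    w     : weights (w1, w2, w3) of the three cost terms.
    """
    Q_d = sum(abs(di) for di in d) / len(d)      # mean lateral offset
    Q_k = sum(abs(ki) for ki in k) / len(k)      # mean curvature

    def d_obs(l):
        if l < l_min:                 # closer than the minimum safety distance
            return 1.0                # full penalty
        if l > l_max:                 # beyond the maximum safety distance
            return 0.0                # no penalty
        return (l_max - l) / (l_max - l_min)     # linear ramp in between

    Q_obs = sum(d_obs(l) for l in l_obs)
    w1, w2, w3 = w
    return w1 * Q_d + w2 * Q_k + w3 * Q_obs
```

Evaluating this for every path in the candidate cluster and taking the argmin reproduces the selection rule of the evaluation function.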
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A cross-country environment navigation method based on multi-sensor fusion is characterized by comprising the following specific steps:
s1, establishing a global road network based on the high-resolution satellite map, and performing global path planning by using an Astar algorithm according to the coordinate information of the task file to obtain a global reference path;
s2, collecting vehicle-mounted multi-sensor data by using vehicle-mounted equipment to perform data fusion, inputting the fused multi-modal data into an SLAM algorithm, adding various sensor constraints, and solving by using a frame based on graph optimization to obtain an accurate pose;
s3, splicing a plurality of environment observations by using the obtained accurate pose, and constructing a cross-country topographic map and a global occupancy grid map based on a filtering fusion algorithm;
step S4, extracting the current driving area of the vehicle in real time according to the obtained global reference path, wherein the method comprises the following specific steps:
the vehicle is driven according to the obtained global reference path, a passable road surface area is detected according to the point cloud data, and discontinuous road surfaces are abandoned according to the distance continuity index and the angle continuity index;
the distance continuity index c_d is:
$$c_d = \frac{1}{2n}\sum_{k=x-n}^{x+n-1} \left\| p_{k+1} - p_k \right\|$$
in the formula, n is the number of points to be considered before and after the current iteration point; k runs from x−n to x+n, indicating that the index takes into account the n points before and the n points after the current point p_x; p_k is the k-th point and p_{k+1} is the (k+1)-th point;
the angle continuity index c_a is:
$$c_a = \frac{1}{n}\sum_{k=1}^{n} \arccos\frac{\left( p_x - p_{x-k} \right) \cdot \left( p_{x+k} - p_x \right)}{\left\| p_x - p_{x-k} \right\| \left\| p_{x+k} - p_x \right\|}$$
in the formula, n is the number of points to be considered before and after the current iteration point, p_x is the point at the current iteration position, p_{x−k} is the k-th point before the current iteration position, and p_{x+k} is the k-th point after the current iteration position;
when c_d is less than the distance continuity index threshold c_d^th and c_a is less than the angle continuity index threshold c_a^th, the ground points are continuous; such ground points are included in the travelable region, and the region of extracted road points is recorded as the travelable region Ω_1;
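The two continuity checks above can be sketched per point as follows (the exact formulas are rendered as images in the source, so averaged spacings and turn angles over the 2n neighbours are an assumption):

```python
import math

def continuity_indices(pts, x, n):
    """Distance and angle continuity indices around point index x.

    pts : list of (x, y) road points, ordered along the scan line.
    Returns (c_d, c_a): mean spacing of consecutive neighbours and mean
    turn angle at p_x between the k-th point before and after it.
    """
    # c_d: mean spacing of consecutive points p_k, p_{k+1}, k = x-n .. x+n-1
    c_d = sum(math.dist(pts[k], pts[k + 1])
              for k in range(x - n, x + n)) / (2 * n)
    # c_a: mean angle between vectors (p_{x-k} -> p_x) and (p_x -> p_{x+k})
    c_a = 0.0
    for k in range(1, n + 1):
        v1 = (pts[x][0] - pts[x - k][0], pts[x][1] - pts[x - k][1])
        v2 = (pts[x + k][0] - pts[x][0], pts[x + k][1] - pts[x][1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        c_a += math.acos(max(-1.0, min(1.0, dot / norm)))
    return c_d, c_a / n
```

A point is kept as ground when both returned values fall below the thresholds c_d^th and c_a^th.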
because the radar cannot directly detect all drivable road surfaces, the real-time 3D point cloud around the vehicle is first clustered by the Euclidean distance method to obtain obstacle clusters around the vehicle; using the obstacle clusters as barriers, a flood fill algorithm diffuses the area in front of the vehicle toward both sides along the direction perpendicular to the global reference path, forming the drivable region Ω_2 of the vehicle;
finally, the union Ω_1 ∪ Ω_2 of the two travelable regions is calculated as the final travelable region;
and step S5, using an RRT path planning algorithm based on multi-attribute evaluation to generate a local optimal path in the current travelable area for obstacle avoidance navigation.
2. The off-road environment navigation method based on multi-sensor fusion of claim 1, wherein the specific steps of the step S1 include:
firstly, a task area of an autonomous unmanned vehicle is preset, and a task map corresponding to the task area is obtained according to a picture based on a high-resolution satellite map;
selecting a reference point GPS coordinate according to the obtained task map, extracting all road information in the task map and establishing a global road network;
according to the coordinates of a starting point, a task point and an end point in the navigation task file, performing global path planning by using an Astar algorithm to obtain node topology information from the starting point to the end point in a global road network;
and splicing the initial global path according to the topological relation, and smoothing the initial global path by using a Bezier curve fitting algorithm to obtain a global reference path in the task area.
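A compact sketch of the A* (Astar) search used for global planning, on a 4-connected boolean grid rather than the patent's road network (the function name and Manhattan heuristic are illustrative assumptions):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* over a 4-connected boolean grid (True = traversable)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()            # tiebreaker keeps heap entries comparable
    open_set = [(h(start), next(tie), 0, start, None)]
    came = {}                          # node -> parent (set when first expanded)
    best_g = {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came:
            continue                   # already expanded with a better cost
        came[node] = parent
        if node == goal:               # reconstruct by walking parent links
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and grid[nb[0]][nb[1]]):
                ng = g + 1
                if ng < best_g.get(nb, float("inf")):
                    best_g[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), next(tie), ng, nb, node))
    return None                        # no path exists
```

On the road network of claim 2 the grid neighbours would be replaced by topological node adjacencies and edge lengths, with the returned node sequence then smoothed by the Bezier fitting step.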
3. An off-road environment navigation method based on multi-sensor fusion as claimed in claim 1, wherein the specific steps of collecting vehicle-mounted multi-sensor data by vehicle-mounted equipment for data fusion in step S2 include:
the multi-modal data are collected and read in real time by a vehicle-mounted industrial personal computer; the multi-modal data comprise 3D point cloud data of the surrounding environment collected by a 3D laser radar, high-frequency linear acceleration and angular velocity of the vehicle collected by an IMU, vehicle speed collected by a wheel odometer, and real-time longitude and latitude coordinates of the current vehicle collected by a GPS;
respectively calculating the curvature of each laser point, giving characteristic information of a line or a plane of the laser point according to the curvature, and clustering point clouds to eliminate discrete points; constructing pre-integral observation by batch processing of the acceleration and angular velocity acquired by the IMU and the vehicle speed data acquired by the wheel type odometer; converting the real-time longitude and latitude coordinates into space position coordinates;
and performing fusion processing on the preprocessed multi-modal data.
4. The off-road environment navigation method based on multi-sensor fusion of claim 1, wherein the fused multi-modal data is input into SLAM algorithm in step S2, and various sensor constraints are added, and the specific steps of obtaining accurate pose by using graph optimization-based framework solution include:
constructing a laser radar point cloud residual, an IMU and wheel odometer pre-integration residual, and a GPS prior residual according to the obtained multi-modal data, and establishing a nonlinear least squares problem;
solving the nonlinear least squares problem within the graph optimization framework to obtain the accurate vehicle pose; and finally, splicing the point clouds according to the vehicle pose to obtain an environment point cloud map.
5. The off-road environment navigation method based on multi-sensor fusion according to claim 1, wherein the specific steps of step S3 include:
acquiring single-frame laser radar point cloud data, marking obstacles according to gradients, acquiring passable area data in a single-frame grid map, deducing unobservable point cloud data by using a Gaussian process, and updating single-frame point cloud observation to a global occupancy grid map through binary Bayes filtering;
and marking the elevation data of the single-frame elevation grid map according to the maximum height, and updating the global elevation grid map by using Kalman filtering.
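The two per-cell map-update rules in this claim can be sketched as follows (a hedged illustration: the inverse sensor model probability and the noise variances are assumptions, not values from the patent):

```python
import math

def logodds_update(cell_logodds, p_meas):
    """Binary Bayes filter update of one occupancy-grid cell in log-odds
    form: add the log-odds of the single-frame observation p_meas."""
    return cell_logodds + math.log(p_meas / (1.0 - p_meas))

def kalman_height_update(h, var, z, var_z):
    """1D Kalman filter update of one elevation cell.

    h, var   : current height estimate and its variance.
    z, var_z : single-frame height measurement and its variance.
    Returns the updated (height, variance).
    """
    k = var / (var + var_z)              # Kalman gain
    return h + k * (z - h), (1.0 - k) * var
```

Applying `logodds_update` to every cell observed in a frame yields the global occupancy grid, and `kalman_height_update` fuses the per-frame maximum-height marks into the global elevation grid.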
6. The off-road environment navigation method based on multi-sensor fusion according to claim 1, wherein the specific steps of step S5 include:
according to the obtained travelable region Ω, which is taken as the feasible domain Φ for path planning, the longitudinal length L_Φ of the feasible domain Φ along the direction of the global reference path is calculated; then, according to the length L_Ω of the travelable region Ω and the minimum planning distance L_p,min, a preview length L_p for path planning is selected, where L_p,min ≤ L_p < L_Ω;
according to the preview length L_p of path planning, the corresponding global path coordinate point is found on the global reference path and taken as the preview point of path planning; exploring outward with the preview point as the circle center, the largest circle C_p,goal contained in the planning feasible domain Φ is calculated;
the random tree T and the growth step size ρ are initialized, the center of the vehicle's rear axle is taken as the initial node q_start of the random tree T, N coordinate points are randomly selected inside the circle C_p,goal as the target nodes q_goal of the random tree, and collision-free paths are calculated;
based on the random tree corresponding to each path, taking nodes of the random tree as control points of a Bezier curve, and smoothing by using the Bezier curve to obtain a smooth path;
and for each candidate path, a multi-attribute evaluation function with reasonable weights is used to select the path with the minimum cost as the current optimal path, where the multi-attribute evaluation function comprises the lateral offset cost, the candidate path curvature cost, and the distance cost from the candidate path to obstacles.
7. The off-road environment navigation method based on multi-sensor fusion as claimed in claim 6, wherein the specific steps of calculating collision-free paths for the target nodes q_goal of the random tree include:
for each target node q^i_goal, a random node q^i_rand is obtained by random sampling in the planning feasible domain Φ with a specified probability; the node q^i_near nearest to the random node q^i_rand is selected, and the angle θ^i_near formed between the nearest node q^i_near and its parent node, as well as the angle θ^i_randnear formed between the nearest node q^i_near and the random node q^i_rand, are calculated;
it is determined whether |θ^i_near − θ^i_randnear| is less than the node angle threshold θ_th; if it is greater than θ_th, another nearest node is sought until the node angle threshold θ_th is satisfied, and the new node q^i_new is calculated as:
$$q^{i}_{new} = q^{i}_{near} + \rho \cdot \frac{q^{i}_{rand} - q^{i}_{near}}{\left\| q^{i}_{rand} - q^{i}_{near} \right\|}$$
if there is no obstacle on the line connecting q^i_near and q^i_new, then q^i_near is taken as the parent node of q^i_new and q^i_new as a child node of q^i_near, and random sampling continues to expand the random tree T; otherwise, the current new node q^i_new is discarded and random sampling is repeated;
the search is repeated in cycles; when a new node q^i_new of the random tree T expands to reach the vicinity of the target point q^i_goal, tracing backward from the target node q^i_goal through the corresponding parent nodes to the root node q_start yields a collision-free path connecting q_start to q^i_goal.
8. The off-road environment navigation method based on multi-sensor fusion of claim 6, wherein the specific steps of adopting the multi-attribute evaluation function and setting reasonable weight value include:
the lateral offset cost Q_d is:
$$Q_d = \frac{1}{n}\sum_{i=1}^{n} \left| d_i \right|$$
the curvature cost Q_k of the candidate path is:
$$Q_k = \frac{1}{n}\sum_{i=1}^{n} \left| k_i \right|, \qquad \left| k_i \right| \le K_{max}$$
the distance cost Q_obs from the candidate path to obstacles is:
$$Q_{obs} = \sum_{i} d_{obs,i}$$

$$d_{obs,i} = \begin{cases} 1, & l_{obs,i} < l_{min} \\ \dfrac{l_{max} - l_{obs,i}}{l_{max} - l_{min}}, & l_{min} \le l_{obs,i} \le l_{max} \\ 0, & l_{obs,i} > l_{max} \end{cases}$$
the total cost is:
$$J_{cost} = \omega_1 Q_d + \omega_2 Q_k + \omega_3 Q_{obs}$$
wherein d_i is the lateral offset of the i-th discrete candidate path point relative to the reference path, K_max is the curvature threshold of the discrete path points, k_i is the curvature value of the i-th discrete candidate path point, l_min and l_max are respectively the minimum and maximum lateral safety distances of an obstacle relative to the candidate path, l_obs,i is the lateral distance of obstacle i relative to the candidate path, d_obs,i is the distance cost of obstacle i relative to the candidate path, and ω_1, ω_2, ω_3 are the weight parameters of the lateral offset cost, the candidate path curvature cost, and the candidate-path-to-obstacle distance cost, respectively.
CN202210714299.1A 2022-06-23 2022-06-23 Cross-country environment navigation method based on multi-sensor fusion Active CN114812581B (en)

Publications (2)

Publication Number Publication Date
CN114812581A CN114812581A (en) 2022-07-29
CN114812581B (en) 2022-09-16
