CN116576857A - Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar - Google Patents

Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar

Info

Publication number
CN116576857A
CN116576857A (application CN202310420891.5A)
Authority
CN
China
Prior art keywords
obstacle
point cloud
robot
kalman
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310420891.5A
Other languages
Chinese (zh)
Inventor
姜杨
王进岩
梁亮
赵彬
孙若怀
Original Assignee
东北大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东北大学
Priority to CN202310420891.5A priority Critical patent/CN116576857A/en
Publication of CN116576857A publication Critical patent/CN116576857A/en
Pending legal-status Critical Current

Classifications

    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/343 Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; pattern tracking
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/763 Non-hierarchical clustering techniques, e.g. based on statistics of modelling distributions
    • G06V10/764 Recognition using classification, e.g. of video objects
    • G06V20/50 Context or environment of the image
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/30241 Trajectory
    • G06T2207/30261 Obstacle
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a multi-obstacle prediction navigation obstacle avoidance method based on a single-line laser radar. The method first collects the obstacle point cloud data detected by the single-line laser radar, then filters and classifies the data to obtain an obstacle point cloud array; shape fitting is performed on the classified obstacle point cloud array to detect obstacles; data association is performed on the obstacles detected in two adjacent frames of the obstacle point cloud array to obtain the current motion state of each obstacle and record its historical track, realizing obstacle tracking; the tracks of multiple obstacles over a future period are predicted; the result of the global path planner is read and divided into several local path segments; and map-free end-to-end strategy navigation is realized within each local path segment. Applied to mobile robots such as sweeping robots, service robots, and AGVs, the method improves the safety of robot obstacle avoidance and is characterized by real-time performance and low cost.

Description

Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar
Technical Field
The invention belongs to the field of multi-obstacle tracking prediction and mobile robot navigation obstacle avoidance, and particularly relates to a multi-obstacle prediction navigation obstacle avoidance method based on a single-line laser radar.
Background
In recent years, the development of deep learning and robot technology has driven progress in logistics warehousing, autonomous driving, smart homes, unmanned aerial vehicles, and other fields. As application scenarios become more complex, the inability of robots to avoid fast-moving obstacles has become an increasingly prominent problem. For example, in autonomous driving, the vehicle needs to detect surrounding vehicles, pedestrians, and obstacles while also predicting the motion trajectories of moving obstacles over a future period, so as to realize safe motion control at the current moment.
In industrial scenarios, an AGV robot handling goods often encounters multiple walking workers and must avoid them in time without affecting the operation schedule. In intelligent monitoring, the motion states of vehicles and pedestrians must be detected, potential safety hazards analyzed, and early warnings issued promptly. All of these scenarios involve obstacle prediction, and complete obstacle prediction comprises three parts: obstacle detection, obstacle tracking, and trajectory prediction. A key difficulty in applying robots in dynamic obstacle environments is the prediction of dynamic obstacles; however, most researchers currently assume that the historical track and motion state of the obstacle are already known and study only the trajectory prediction part. So although obstacle prediction research is very active, there is still no complete obstacle prediction algorithm framework suited to the mobile robot field.
By sensor type, obstacle prediction algorithms generally divide into those that take depth camera images as raw data and those that take lidar point clouds as raw data. Image-based methods draw on the currently very active fields of target detection and target tracking, but algorithms that process image data usually track only the detection box and cannot acquire depth information; disparity maps can compensate, but their error is large, so they cannot be used to track and predict the true spatial position of an obstacle. Moreover, deep learning network models are time-consuming to train, slow to deploy, and computationally demanding, so they cannot be applied on a mobile robot; depth cameras are also susceptible to illumination. Lidar-based methods mostly adopt 3D lidar data and network models, which suit autonomous driving scenarios but cannot be applied to mobile robots.
In addition, when traditional ROS navigation encounters a dynamic obstacle moving at a relatively high speed, the robot makes erroneous planning decisions because the local cost map cannot be updated in time. Lacking an obstacle prediction module, the robot only takes obstacle avoidance action when the obstacle is already very close, and the motion path must be continuously replanned, occupying processor computing resources and reducing the driving safety and flexibility of the robot. Therefore, when a robot performs tasks in a complex environment with dynamic obstacles, the task requirements often cannot be satisfied.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a multi-obstacle prediction navigation obstacle avoidance method based on a single-line laser radar, which can realize the navigation obstacle avoidance of a mobile robot in a dynamic obstacle environment under the requirements of real-time performance and low cost.
A multi-obstacle prediction navigation obstacle avoidance method based on a single-line laser radar specifically comprises the following steps:
step 1: acquiring the obstacle point cloud data detected by the single-line laser radar through the lidar's ROS (Robot Operating System) function package on a Ubuntu system, filtering the point cloud data, and transforming the filtered point cloud into the global coordinate system; classifying the point cloud data converted to global coordinates with a clustering algorithm to complete the clustering of the whole frame of point cloud data and obtain an obstacle point cloud array;
step 1.1: in the time dimension and the space dimension, moving average filtering with a window of 3 is performed on the point cloud data, specifically:

P̂_{t,i} = (P_{t-1,i} + P_{t,i} + P_{t+1,i}) / 3

(and analogously over the adjacent point indices i-1, i, i+1 for the space dimension), wherein P_{t,i} represents the coordinates of the ith point in the global map at time t;

based on the ROS global map, the coordinates (X_i, Y_i) of point i in the global map are calculated by adding the displacement of the robot relative to the global map to the lidar measurement, specifically:

X_i = R·cosθ + dx

Y_i = R·sinθ + dy

where dx and dy are obtained from odometer information, which is produced by fusion filtering of an IMU and an encoder; R is the distance of point i relative to the laser radar, and θ is the offset angle of point i relative to the 0° line of the laser radar;
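The coordinate transformation and window-3 smoothing of step 1.1 can be sketched in a few lines of Python; the function names, array layout, and numpy usage below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def polar_to_global(ranges, angles, dx, dy):
    """Convert lidar polar returns (R, theta) into global-map coordinates
    by adding the robot displacement (dx, dy) from fused IMU/encoder odometry."""
    x = ranges * np.cos(angles) + dx          # X_i = R*cos(theta) + dx
    y = ranges * np.sin(angles) + dy          # Y_i = R*sin(theta) + dy
    return np.stack([x, y], axis=1)           # (N, 2) array of points

def moving_average_window3(points):
    """Window-3 moving average over the beam-index (space) dimension;
    the same stencil can be applied across frames for the time dimension."""
    smoothed = points.copy()
    smoothed[1:-1] = (points[:-2] + points[1:-1] + points[2:]) / 3.0
    return smoothed
```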
step 1.2: performing global rough segmentation on the filtered data converted to global coordinates obtained in step 1.1, classifying the segmented point cloud clusters with the density-based DBSCAN algorithm, and finally completing the clustering of the whole frame of data to obtain the obstacle array, specifically:

the filtered obstacle point cloud data are clustered directly according to the Euclidean distance feature; because the clustering algorithm requires considerable computing resources, the laser radar data are first sequentially segmented according to the continuity of the point cloud scan before clustering; for each laser point cloud block, the index of the start point p satisfies R_{i-1} = 0 and R_i ≠ 0, and the end point q satisfies R_i ≠ 0 and R_{i+1} = 0; the sequential segmentation algorithm comprises the following steps:

first, the Euclidean distance between two adjacent points in the obstacle point cloud data is calculated; if the Euclidean distance is smaller than a threshold T (set according to the environment), the two points belong to the same point cloud cluster, and the point is added to the current point cloud cluster C_k; if the Euclidean distance is greater than the threshold T, the two points are not considered to belong to the same cluster: the current cluster C_k is closed at P_i, a new point cloud cluster C_{k+1} is created, and P_{i+1} is added to C_{k+1}; this operation is repeated until the obstacle point cloud data of the current frame have been processed, which is the final result of the point cloud segmentation;

finally, the sequentially segmented point cloud cluster array is classified by the density-based DBSCAN algorithm, completing the clustering of the whole frame of data and yielding the obstacle point cloud array;
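A compact sketch of step 1.2's two-stage clustering, assuming scikit-learn's DBSCAN and a 0.2 m gap threshold (both assumptions; the patent sets T per environment):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def sequential_segmentation(points, threshold=0.2):
    """Split one ordered scan frame into coarse clusters wherever the
    Euclidean gap between adjacent points exceeds `threshold` (meters)."""
    clusters, current = [], [points[0]]
    for prev, cur in zip(points[:-1], points[1:]):
        if np.linalg.norm(cur - prev) < threshold:
            current.append(cur)               # same cluster C_k
        else:
            clusters.append(np.array(current))
            current = [cur]                   # open a new cluster C_{k+1}
    clusters.append(np.array(current))
    return clusters

def refine_with_dbscan(clusters, eps=0.3, min_samples=3):
    """Re-classify the coarse segments with density-based DBSCAN."""
    refined = []
    for c in clusters:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(c)
        for k in set(labels) - {-1}:          # -1 is DBSCAN's noise label
            refined.append(c[labels == k])
    return refined
```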
step 2: performing rectangular fitting on the obstacle point cloud array obtained after the clustering classification in the step 1 to detect the obstacle, obtaining the position and shape information of the obstacle, and storing the position and shape information in the obstacle array;
step 2.1: in the global coordinate system, extremum searching of the coordinate values is carried out on the obstacle point cloud array to obtain the maximum value x_i and minimum value x_j on the x axis and the maximum value y_i and minimum value y_j on the y axis over the points in the array, expressed as:

P1 = (x_i, y_i), P2 = (x_i, y_j), P3 = (x_j, y_i), P4 = (x_j, y_j)

{P1, P2, P3, P4 | x_i = max(x), y_i = max(y), x_j = min(x), y_j = min(y)}
step 2.2: a rectangular frame is drawn through the four extreme points P1, P2, P3, and P4 to fit the outline of the obstacle, and wall-edge obstacles are removed; the removal of wall-edge obstacles is specifically as follows:

let Z be the aspect ratio threshold for judging whether an obstacle is a wall edge, L the long side of the rectangle, S the short side, and λ the aspect ratio; then:

λ = L / S

when the aspect ratio λ is larger than the threshold Z, the obstacle is considered a wall and the point cloud composing it is removed; in practice small obstacles may also satisfy this condition, so the additional constraint that the long side exceeds 3 meters is imposed; the final wall condition is:

λ = L / S > Z and L > 3 m
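A sketch of the rectangle fitting and wall filter of step 2, assuming Z = 5 for the aspect ratio threshold (the patent leaves Z unspecified):

```python
import numpy as np

def fit_bounding_box(cluster):
    """Axis-aligned rectangle P1..P4 from the coordinate extrema of a cluster."""
    x_max, y_max = cluster.max(axis=0)
    x_min, y_min = cluster.min(axis=0)
    return (x_min, y_min, x_max, y_max)

def is_wall(box, z_thresh=5.0, min_long_side=3.0):
    """Wall test of step 2.2: aspect ratio lambda = L/S above Z and L > 3 m."""
    x_min, y_min, x_max, y_max = box
    short, long_ = sorted([x_max - x_min, y_max - y_min])
    return (long_ / max(short, 1e-6)) > z_thresh and long_ > min_long_side
```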
step 3: performing data association between the obstacles detected in two adjacent frames of the obstacle point cloud array to obtain the current motion state of each obstacle and record its historical track, thereby realizing obstacle tracking;
the association of the obstacle data, the current motion state of each obstacle and the record of the history track thereof are specifically as follows:
step 3.1: the obstacle array is initialized to empty; when the 1st frame of data is acquired, a Kalman object is defined to predict the obstacle position of the next frame, and the Kalman object array is initialized; the obstacle track is calculated from the position information in the obstacle arrays of the two adjacent frames; a track just generated by the KF filter is initialized and marked 'uncertain', and after it is matched and confirmed again in the two subsequent frames it is marked 'determined', updating the life cycle from uncertain to determined;
step 3.2: when the 2nd frame of data is processed, the observed values of the current frame are associated and matched with the predicted values of the Kalman objects of the previous frame; a bipartite graph is formed between the determined tracks and the observed values obtained by the obstacle detector, and matching is carried out with a KNN algorithm, the Hungarian algorithm, or the KM algorithm; the matching results fall into 3 classes: Kalman objects and observed values that are successfully matched, Kalman objects that are not successfully matched, and observed values that are not successfully matched;
step 3.3: the Kalman objects of the previous frame that were not successfully matched are updated, and if the termination condition is triggered, the Kalman object is deleted from the Kalman array and the obstacle array; the termination condition is that a Kalman object fails to be matched for 3 consecutive frames, whereupon its life cycle is changed from 'determined' to 'uncertain' and the Kalman object marked 'uncertain' is deleted from the obstacle array;
step 3.4: initializing a Kalman object for the current frame observation value which is not successfully matched and adding the current frame observation value into a Kalman array;
step 3.5: the successfully matched Kalman objects are Kalman-updated using the observed values of the current frame; when the obstacle confirmation condition is triggered, the obstacle is added to the obstacle array;
step 3.6: updating the obstacle array to obtain the movement state of the obstacle;
step 3.7: repeating the steps 3.2 to 3.6 to update the obstacle information in real time;
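The track life cycle of steps 3.1 to 3.7 can be sketched as follows; this uses a constant-velocity Kalman filter and SciPy's Hungarian solver, with the noise covariances and the 1.0 m association gate chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class KalmanTrack:
    """Constant-velocity Kalman track with state [x, y, vx, vy]."""
    def __init__(self, xy, dt=0.1):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # observe position only
        self.status, self.misses = "uncertain", 0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + np.eye(4) * 0.01
        return self.x[:2]

    def update(self, z):                      # step 3.5: Kalman update
        S = self.H @ self.P @ self.H.T + np.eye(2) * 0.05
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        self.misses, self.status = 0, "determined"

    def mark_missed(self):                    # step 3.3: termination condition
        self.misses += 1
        if self.misses >= 3:
            self.status = "uncertain"         # track is then dropped

def associate(tracks, detections, gate=1.0):
    """Bipartite matching (Hungarian/KM) of predicted tracks to detections."""
    if not tracks or not detections:
        return []
    preds = [t.predict() for t in tracks]
    cost = np.array([[np.linalg.norm(p - d) for d in detections] for p in preds])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```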
step 4: predicting the track of a plurality of obstacles in a future period of time by using a constant speed model based on the step 3;
step 4.1: selecting the length of a predicted track according to the complexity of the scene; the scene complexity is determined according to the number of pedestrians and the motion state of environmental barriers;
step 4.2: for track short-order prediction, selecting a prediction mode based on a motion model;
step 4.3: for long-order track prediction, selecting a prediction mode based on a network model, wherein the long-order track prediction is only started when the short-order prediction is severely distorted;
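For the short-order, motion-model branch of step 4.2, a constant-velocity rollout over the tracked state suffices; the 2 s horizon and 0.1 s step below are illustrative choices, and `track` is a KalmanTrack from the sketch above:

```python
import numpy as np

def predict_cv(track, horizon=2.0, dt=0.1):
    """Constant-velocity prediction of an obstacle track over `horizon` seconds."""
    xy, v = track.x[:2].copy(), track.x[2:]
    steps = int(horizon / dt)
    return np.array([xy + v * dt * (k + 1) for k in range(steps)])
```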
step 5: reading the result of the ROS global path planner and dividing the result into a plurality of sections of local paths;
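Splitting the global plan of step 5 can be as simple as chunking the planner's waypoint list; the 20-pose segment length is an assumption, not a value from the patent:

```python
def split_global_path(waypoints, segment_len=20):
    """Divide the ROS global planner's waypoint list into local path segments."""
    return [waypoints[i:i + segment_len]
            for i in range(0, len(waypoints), segment_len)]
```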
step 6: realizing map-free end-to-end strategy navigation in each section of local path obtained in the step 5;
step 6.1: reading the obstacle information acquired in the preceding steps;
step 6.2: reading global path planner information;
step 6.3: dividing the track in the global path planner information and planning a starting point;
step 6.4: judging whether an obstacle will pass through the road section ahead; consistent with Newton's first law, the meeting position is calculated with a constant velocity model from the current motion state of the obstacle and the motion state of the robot; if there is no meeting point, the mobile robot chooses to go straight; the robot here refers to the mobile robot;
step 6.5: judging whether the obstacle is beyond the safe distance; if so, the robot proceeds straight, otherwise strategy planning is performed;

the corresponding obstacle avoidance strategy is formulated according to the area in which the current obstacle (x_i, y_i) lies (a minimal sketch of this region policy is given after step 6.7 below); when (x_i, y_i) lies in a rectangular area a meters wide and b meters long in front of the mobile robot, with a ∈ [0.5, 1] and b ∈ [2, 4], the robot backs up at a high speed of f m/s for a/f s, with f ∈ [2, 3]; when (x_i, y_i) lies in a semi-elliptical area centered at the coordinate point (0, a) with a major axis of b meters and a minor axis of 2a meters, the robot adopts a stopping strategy; when (x_i, y_i) lies in a circular area of radius c meters centered on the robot, excluding the above areas, with c ∈ [5, 8], the robot advances at a low speed of f/4 m/s;

when no obstacle exists in the detection area, the robot advances at f/2 m/s; when the robot has been stopped in the elliptical area for longer than 2 s, it detours: it first judges from the direction vector of the obstacle whether a right turn can get around; if the right detour cannot pass, it turns left; each steering is performed at an angular speed of 0.017 rad/s for 1 s; the areas, speeds, and times can be adjusted for different environments to achieve the best effect;
step 6.6: judging whether the right turn can avoid the obstacle, if so, turning right, otherwise, turning left;
step 6.7: and (5) repeating the steps 6.1-6.6 to realize end-to-end strategy navigation.
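Here is the minimal sketch of the region policy described in step 6.5, in robot-local coordinates (x lateral, y forward); the exact placement of the rectangle and semi-ellipse is an interpretation of the text, and the parameter defaults are one choice within the stated ranges:

```python
import numpy as np

def avoidance_action(obs_xy, a=0.5, b=4.0, c=5.0, f=2.0):
    """Map the nearest obstacle position to an (action, speed) pair.
    a in [0.5, 1], b in [2, 4], c in [5, 8], f in [2, 3] per step 6.5."""
    if obs_xy is None:
        return ("forward", f / 2)                      # no obstacle detected
    x, y = obs_xy
    # frontal rectangle, a wide and b long: back up fast (for a/f seconds)
    if abs(x) <= a / 2 and 0.0 <= y <= b:
        return ("backward", f)
    # semi-ellipse centered at (0, a), major axis b, minor axis 2a: stop
    if y >= 0.0 and (x / (b / 2)) ** 2 + ((y - a) / a) ** 2 <= 1.0:
        return ("stop", 0.0)
    # surrounding circle of radius c: advance slowly
    if np.hypot(x, y) <= c:
        return ("forward", f / 4)
    return ("forward", f / 2)
```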
The invention has the beneficial technical effects that:
the invention solves the problem that the local cost map can not be updated in time when the traditional ROS navigation system encounters a fast moving obstacle, provides a navigation obstacle avoidance scheme which can give consideration to low cost and real-time performance for the mobile robot in obstacle tracking prediction, has fewer algorithm parameters, does not need training compared with a network model for deep learning, does not depend on a server with high calculation power, overcomes the defect that the depth camera is easily affected by illumination, and simultaneously reduces the hardware cost of the system compared with a 3D laser radar algorithm because a single-line laser radar is adopted as a sensor for data acquisition. The robot obstacle avoidance system is applied to mobile robots such as a sweeping robot, a service robot, an AGV and the like, can improve the safety of robot obstacle avoidance, and has the characteristics of real-time performance and low cost.
Drawings
FIG. 1 is an algorithm framework of a multi-obstacle predictive navigation obstacle avoidance method based on a single-line laser radar in an embodiment of the invention;
fig. 2 is a schematic diagram of converting a laser radar point cloud into a global coordinate system according to an embodiment of the present invention, wherein a is a schematic diagram of the global map coordinate system, and b is a schematic diagram of the laser radar coordinate system;
FIG. 3 is a schematic diagram of sequential partitioning of a point cloud cluster according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a motion state and its historical track based on obstacle prediction according to an embodiment of the invention;
FIG. 5 is a flowchart of an end-to-end policy navigation according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a strategy planning scenario division according to an embodiment of the present invention;
FIG. 7 is a diagram showing a comparison of a conventional navigation method according to an embodiment of the present invention and a planned path according to the present invention; wherein, the figure a is a traditional navigation method, and the figure b is a method of the invention.
Detailed Description
The invention is further described below with reference to the drawings and examples;
a multi-obstacle prediction navigation obstacle avoidance method based on a single-line laser radar is shown in the attached figure 1, and specifically comprises the following steps:
step 1: acquiring the obstacle point cloud data detected by the single-line laser radar through the lidar's ROS (Robot Operating System) function package on a Ubuntu system, filtering the point cloud data, and transforming the filtered point cloud into the global coordinate system, as shown in fig. 2; classifying the point cloud data converted to the global coordinate system with a clustering algorithm, completing the clustering of the whole frame of point cloud data to obtain an obstacle point cloud array;
step 1.1: in the time dimension and the space dimension, moving average filtering with a window of 3 is performed on the point cloud data, specifically:

P̂_{t,i} = (P_{t-1,i} + P_{t,i} + P_{t+1,i}) / 3

(and analogously over the adjacent point indices i-1, i, i+1 for the space dimension), wherein P_{t,i} represents the coordinates of the ith point in the global map at time t;

based on the ROS global map, the coordinates (X_i, Y_i) of point i in the global map are calculated by adding the displacement of the robot relative to the global map to the lidar measurement, specifically:

X_i = R·cosθ + dx

Y_i = R·sinθ + dy

where dx and dy are obtained from odometer information, which is produced by fusion filtering of an IMU and an encoder; R is the distance of point i relative to the laser radar, and θ is the offset angle of point i relative to the 0° line of the laser radar;
step 1.2: performing global rough segmentation on the filtered data converted to global coordinates obtained in step 1.1, classifying the segmented point cloud clusters with the density-based DBSCAN algorithm, and finally completing the clustering of the whole frame of data to obtain the obstacle array, specifically:

one frame of the filtered obstacle point cloud data is clustered directly according to the Euclidean distance feature; because the clustering algorithm requires considerable computing resources, the laser radar data are first sequentially segmented according to the continuity of the point cloud scan before clustering; for each laser point cloud block, the index of the start point p satisfies R_{i-1} = 0 and R_i ≠ 0, and the end point q satisfies R_i ≠ 0 and R_{i+1} = 0; the sequential segmentation algorithm, as shown in fig. 3, comprises the following steps:

first, the Euclidean distance between two adjacent points in the obstacle point cloud data is calculated; if the Euclidean distance is smaller than a threshold T (set according to the environment, typically 0.1 to 0.3 meters), the two points belong to the same point cloud cluster, and the point is added to the current point cloud cluster C_k; if the Euclidean distance is greater than the threshold T, the two points are not considered to belong to the same cluster: the current cluster C_k is closed at P_i, a new point cloud cluster C_{k+1} is created, and P_{i+1} is added to C_{k+1}; this operation is repeated until the obstacle point cloud data of the current frame have been processed, which is the final result of the point cloud segmentation;

finally, the sequentially segmented point cloud cluster array is classified by the density-based DBSCAN algorithm, completing the clustering of the whole frame of data and yielding the obstacle point cloud array;
step 2: performing rectangular fitting on the obstacle array obtained after the clustering classification in step 1 to detect the obstacles, obtaining the position, shape, and size information of each obstacle and storing it in the obstacle array;
step 2.1: in the global coordinate system, extremum searching of the coordinate values is carried out on the obstacle array to obtain the maximum value x_i and minimum value x_j on the x axis and the maximum value y_i and minimum value y_j on the y axis over the points in the obstacle point cloud array, expressed as:

P1 = (x_i, y_i), P2 = (x_i, y_j), P3 = (x_j, y_i), P4 = (x_j, y_j)

{P1, P2, P3, P4 | x_i = max(x), y_i = max(y), x_j = min(x), y_j = min(y)}
step 2.2: a rectangular frame is drawn through the four extreme points P1, P2, P3, and P4 to fit the outline of the obstacle, and wall-edge obstacles are removed; the removal of wall-edge obstacles is specifically as follows:

let Z be the aspect ratio threshold for judging whether an obstacle is a wall edge, L the long side of the rectangle, S the short side, and λ the aspect ratio; then:

λ = L / S

when the aspect ratio λ is larger than the threshold Z, the obstacle is considered a wall and the point cloud composing it is removed; in practice small obstacles may also satisfy this condition, so the additional constraint that the long side exceeds 3 meters is imposed; the final wall condition is:

λ = L / S > Z and L > 3 m
step 3: performing data association between the obstacles detected in two adjacent frames of obstacle point cloud data in the obstacle array to obtain the current motion state of each obstacle and record its historical track, thereby realizing obstacle tracking; the motion state and historical track obtained by obstacle prediction are shown schematically in fig. 4;
the association of the obstacle data, the current motion state of each obstacle and the record of the history track thereof are specifically as follows:
step 3.1: the obstacle array is initialized to empty; when the 1st frame of data is acquired, a Kalman object is defined to predict the obstacle position of the next frame, and the Kalman object array is initialized; the obstacle track is calculated from the obstacle point cloud data of the two adjacent frames; a track just generated by the KF filter is initialized and marked 'uncertain', and after it is matched and confirmed again in the two subsequent frames it is marked 'determined', updating the life cycle from uncertain to determined;
step 3.2: when the 2nd frame of data is processed, the observed values of the current frame are associated and matched with the predicted values of the Kalman objects of the previous frame; a bipartite graph is formed between the determined tracks and the observed values obtained by the obstacle detector, and matching is carried out with a KNN algorithm, the Hungarian algorithm, or the KM algorithm; the matching results fall into 3 classes: Kalman objects and observed values that are successfully matched, Kalman objects that are not successfully matched, and observed values that are not successfully matched;
step 3.3: the Kalman objects of the previous frame that were not successfully matched are updated, and if the termination condition is triggered, the Kalman object is deleted from the Kalman array and the obstacle array; the termination condition is that a Kalman object fails to be matched for 3 consecutive frames, whereupon its life cycle is changed from 'determined' to 'uncertain' and the Kalman object marked 'uncertain' is deleted from the obstacle array;
step 3.4: initializing a Kalman object for the current frame observation value which is not successfully matched and adding the current frame observation value into a Kalman array;
step 3.5: the successfully matched Kalman objects are Kalman-updated using the observed values of the current frame; when the obstacle confirmation condition is triggered, the obstacle is added to the obstacle array;
step 3.6: updating the obstacle array to obtain the movement state of the obstacle;
step 3.7: repeating the steps 3.2 to 3.6 to update the obstacle information in real time;
step 4: predicting the track of a plurality of obstacles in a future period of time by using a constant speed model based on the step 3;
step 4.1: selecting the length of a predicted track according to the complexity of the scene; the scene complexity is determined according to the number of pedestrians and the motion state of environmental barriers;
step 4.2: for track short-order prediction, selecting a prediction mode based on a motion model; motion-model-based prediction modes include but are not limited to constant velocity models, acceleration models, kinematic and dynamic models, the Monte Carlo method, and Kalman filtering.
Step 4.3: for long-order track prediction, selecting a prediction mode based on a network model; network-model-based prediction modes include but are not limited to deep network models, attention-mechanism-based models, long short-term memory networks, and recurrent neural networks; long-order track prediction is only started when the short-order prediction is severely distorted.
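For the network-model branch of step 4.3, one plausible instantiation is a small LSTM regressor over the recorded track history; the architecture below (PyTorch, hidden size 32) is purely illustrative, since the patent only names the model families:

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Consume a history of (x, y) positions and regress the next position."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, history):               # history: (batch, T, 2)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])          # predicted next (x, y)
```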
Step 5: reading the result of the ROS global path planner and dividing the result into a plurality of sections of local paths;
step 6: realizing map-free end-to-end strategy navigation in each section of local path obtained in the step 5; as shown in fig. 5;
step 6.1: reading the obstacle information acquired in the preceding steps;
step 6.2: reading global path planner information;
step 6.3: dividing the track in the global path planner information and planning a starting point;
step 6.4: judging whether an obstacle will pass through the road section ahead; consistent with Newton's first law, the meeting position is calculated with a constant velocity model from the current motion state of the obstacle and the motion state of the robot; if there is no meeting point, the mobile robot chooses to go straight; the robot here refers to a mobile robot, such as a sweeping robot, a service robot, or an AGV;
step 6.5: judging whether the obstacle is beyond the safe distance; if so, the robot proceeds straight, otherwise strategy planning is performed;

the corresponding obstacle avoidance strategy is formulated according to the area in which the current obstacle (x_i, y_i) lies; when (x_i, y_i) lies in a rectangular area 0.5 m wide and 4 m long in front of the mobile robot, the robot backs up at 2 m/s for 0.5 s; when (x_i, y_i) lies in a semi-elliptical area centered at the coordinate point (0, 0.5) with a major axis of 4 meters and a minor axis of 1 meter, the robot adopts a stopping strategy; when (x_i, y_i) lies in a circular area of radius 5 meters centered on the robot, excluding the above areas, the robot advances at a low speed of 0.5 m/s; when no obstacle exists in the detection area, the robot advances at 1.5 m/s; when the robot has been stopped in the elliptical area for longer than 2 s, it detours: it first judges from the direction vector of the obstacle whether a right turn can get around; if the right detour cannot pass, it turns left; each steering is performed at an angular speed of 0.017 rad/s for 1 s, as shown in fig. 6 (these concrete values are exercised in the short usage sketch after step 6.7 below);
step 6.6: judging whether the right turn can avoid the obstacle, if so, turning right, otherwise, turning left;
step 6.7: and (5) repeating the steps 6.1-6.6 to realize end-to-end strategy navigation.
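As a hedged usage example, the avoidance_action sketch given earlier (after the summary's step 6.7) can be instantiated with this embodiment's concrete values (a = 0.5, b = 4, c = 5, f = 2); the obstacle position is invented for illustration:

```python
# Hypothetical scene: an obstacle observed 2 m directly ahead in the robot frame.
action, speed = avoidance_action((0.0, 2.0), a=0.5, b=4.0, c=5.0, f=2.0)
print(action, speed)   # ('backward', 2.0): back up at 2 m/s, per the embodiment
```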
By removing the local cost map and adding the obstacle tracking prediction function, the navigation obstacle avoidance algorithm of the invention avoids the decision errors caused by untimely updating of the local cost map and improves planning; compared with the conventional ROS planning algorithm, the planned path in a simulation environment with dynamic obstacles is shortened by 50%. A schematic comparison of the path planning results of this method and the traditional method is shown in fig. 7;
Table 1: algorithm performance comparison between 15 m start and stop points (results summarized in the paragraph below);
When the trolley moves along a 15 m corridor, the time and path taken under the different algorithms are recorded, and the passing rate of each algorithm is obtained after repeated experiments. In the 15 m corridor simulation environment, owing to static obstacles and the influence of dynamic obstacles, the path planned by traditional navigation is generally longer than 38 m; the path planned by this method is only about half of that, and the movement time is reduced from 50 s to 22 s. Because traditional navigation cannot overcome the delayed updating of fast-moving obstacles in the local cost map, it collides when facing a ball moving at 2 m/s, since a correct path cannot be planned in time. The proposed method avoids dynamic obstacles, greatly improving the navigation obstacle avoidance efficiency and safety of the mobile robot.

Claims (7)

1. A multi-obstacle prediction navigation obstacle avoidance method based on a single-line laser radar is characterized by comprising the following steps:
step 1: acquiring the obstacle point cloud data detected by the single-line laser radar through the lidar's ROS (Robot Operating System) function package on a Ubuntu system, filtering the point cloud data, and transforming the filtered point cloud into the global coordinate system; classifying the point cloud data converted to global coordinates with a clustering algorithm to complete the clustering of the whole frame of point cloud data and obtain an obstacle point cloud array;
step 2: performing rectangular fitting on the obstacle point cloud array obtained after the clustering classification in the step 1 to detect the obstacle, obtaining the position and shape information of the obstacle, and storing the position and shape information in the obstacle array;
step 3: performing data association between the obstacles detected in two adjacent frames of the obstacle point cloud array to obtain the current motion state of each obstacle and record its historical track, thereby realizing obstacle tracking;
step 4: predicting the track of a plurality of obstacles in a future period of time by using a constant speed model based on the step 3;
step 5: reading the result of the ROS global path planner and dividing the result into a plurality of sections of local paths;
step 6: and 5, realizing the end-to-end strategy navigation without a map in each section of local path obtained in the step 5.
2. The method for predicting navigation obstacle avoidance based on single-line lidar of claim 1, wherein step 1 specifically comprises:
step 1.1: in the time dimension and the space dimension, moving average filtering with a window of 3 is performed on the point cloud data, specifically:

P̂_{t,i} = (P_{t-1,i} + P_{t,i} + P_{t+1,i}) / 3

(and analogously over the adjacent point indices i-1, i, i+1 for the space dimension), wherein P_{t,i} represents the coordinates of the ith point in the global map at time t;

obtaining the coordinates (X_i, Y_i) of point i in the laser radar frame based on the ROS global map, adding the displacement of the robot relative to the global map, and calculating the coordinates of point i in the global map, specifically:

X_i = R·cosθ + dx

Y_i = R·sinθ + dy

where dx and dy are obtained from odometer information, which is produced by fusion filtering of an IMU and an encoder; R is the distance of point i relative to the laser radar, and θ is the offset angle of point i relative to the 0° line of the laser radar;
step 1.2: performing global rough segmentation on the filtered data converted to global coordinates obtained in step 1.1, classifying the segmented point cloud clusters with the density-based DBSCAN algorithm, and finally completing the clustering of the whole frame of data to obtain the obstacle array, specifically:

one frame of the filtered obstacle point cloud data is clustered directly according to the Euclidean distance feature; before clustering, the laser radar data are first sequentially segmented according to the continuity of the point cloud scan; for each laser point cloud block, the index of the start point p satisfies R_{i-1} = 0 and R_i ≠ 0, and the end point q satisfies R_i ≠ 0 and R_{i+1} = 0; the sequential segmentation algorithm comprises the following steps:

first, the Euclidean distance between two adjacent points in the obstacle point cloud data is calculated; if the Euclidean distance is smaller than a threshold T (set according to the environment), the two points belong to the same point cloud cluster, and the point is added to the current point cloud cluster C_k; if the Euclidean distance is greater than the threshold T, the two points are not considered to belong to the same cluster: the current cluster C_k is closed at P_i, a new point cloud cluster C_{k+1} is created, and P_{i+1} is added to C_{k+1}; this operation is repeated until the obstacle point cloud data of the current frame have been processed, which is the final result of the point cloud segmentation;

finally, the sequentially segmented point cloud cluster array is classified by the density-based DBSCAN algorithm, completing the clustering of the whole frame of data and obtaining the obstacle point cloud array.
3. The multi-obstacle prediction navigation obstacle avoidance method based on single-line lidar of claim 1, wherein step 2 specifically comprises:
step 2.1: in the global coordinate system, extremum searching of the coordinate values is carried out on the obstacle point cloud array to obtain the maximum value x_i and minimum value x_j on the x axis and the maximum value y_i and minimum value y_j on the y axis over the points in the array, expressed as:

P1 = (x_i, y_i), P2 = (x_i, y_j), P3 = (x_j, y_i), P4 = (x_j, y_j)

{P1, P2, P3, P4 | x_i = max(x), y_i = max(y), x_j = min(x), y_j = min(y)}
step 2.2: a rectangular frame is drawn through the four extreme points P1, P2, P3, and P4 to fit the outline of the obstacle, and wall-edge obstacles are removed; the removal of wall-edge obstacles is specifically as follows:

let Z be the aspect ratio threshold for judging whether an obstacle is a wall edge, L the long side of the rectangle, S the short side, and λ the aspect ratio; then:

λ = L / S

when the aspect ratio λ is larger than the threshold Z, the obstacle is considered a wall and the point cloud composing it is removed; in practice small obstacles may also satisfy this condition, so the additional constraint that the long side exceeds 3 meters is imposed; the final wall condition is:

λ = L / S > Z and L > 3 m.
4. the method for predicting navigation obstacle avoidance based on single-line lidar of claim 1, wherein the step 3 is specifically:
step 3.1: the obstacle array is initialized to empty; when the 1st frame of data is acquired, a Kalman object is defined to predict the obstacle position of the next frame, and the Kalman object array is initialized; the obstacle track is calculated from the position information in the obstacle arrays of the two adjacent frames; a track just generated by the KF filter is initialized and marked 'uncertain', and after it is matched and confirmed again in the two subsequent frames it is marked 'determined', updating the life cycle from uncertain to determined;
step 3.2: when the 2nd frame of data is processed, the observed values of the current frame are associated and matched with the predicted values of the Kalman objects of the previous frame; a bipartite graph is formed between the determined tracks and the observed values obtained by the obstacle detector, and matching is carried out with a KNN algorithm, the Hungarian algorithm, or the KM algorithm; the matching results fall into 3 classes: Kalman objects and observed values that are successfully matched, Kalman objects that are not successfully matched, and observed values that are not successfully matched;
step 3.3: the Kalman objects of the previous frame that were not successfully matched are updated, and if the termination condition is triggered, the Kalman object is deleted from the Kalman array and the obstacle array; the termination condition is that a Kalman object fails to be matched for 3 consecutive frames, whereupon its life cycle is changed from 'determined' to 'uncertain' and the Kalman object marked 'uncertain' is deleted from the obstacle array;
step 3.4: initializing a Kalman object for the current frame observation value which is not successfully matched and adding the current frame observation value into a Kalman array;
step 3.5: the successfully matched Kalman objects are Kalman-updated using the observed values of the current frame; when the obstacle confirmation condition is triggered, the obstacle is added to the obstacle array;
step 3.6: updating the obstacle array to obtain the movement state of the obstacle;
step 3.7: repeating steps 3.2 to 3.6 to realize real-time updating of the obstacle information.
5. The method for predicting navigation obstacle avoidance based on single-line lidar of claim 1, wherein step 4 specifically comprises:
step 4.1: selecting the length of a predicted track according to the complexity of the scene; the scene complexity is determined according to the number of pedestrians and the motion state of environmental barriers;
step 4.2: for track short-order prediction, selecting a prediction mode based on a motion model;
step 4.3: for long-order prediction of the track, a prediction mode based on a network model is selected, wherein the long-order prediction of the track is only started when the short-order prediction is severely distorted.
6. The method for predicting navigation obstacle avoidance based on single-line lidar of claim 1, wherein step 6 specifically comprises:
step 6.1: reading the obstacle information acquired in the preceding steps;
step 6.2: reading global path planner information;
step 6.3: dividing the track in the global path planner information and planning a starting point;
step 6.4: judging whether an obstacle will pass through the road section ahead; consistent with Newton's first law, the meeting position is calculated with a constant velocity model from the current motion state of the obstacle and the motion state of the robot; if there is no meeting point, the mobile robot chooses to go straight; the robot here refers to the mobile robot;
step 6.5: judging whether the obstacle is beyond the safe distance; if so, the robot proceeds straight, otherwise strategy planning is performed;
step 6.6: judging whether the right turn can avoid the obstacle, if so, turning right, otherwise, turning left;
step 6.7: and (5) repeating the steps 6.1-6.6 to realize end-to-end strategy navigation.
7. The method for predicting navigation obstacle avoidance based on single-line lidar of claim 6, wherein the strategy planning performed in step 6.5 is specifically:

the corresponding obstacle avoidance strategy is formulated according to the area in which the current obstacle (x_i, y_i) lies; when (x_i, y_i) lies in a rectangular area a meters wide and b meters long in front of the mobile robot, with a ∈ [0.5, 1] and b ∈ [2, 4], the robot backs up at a high speed of f m/s for a/f s, with f ∈ [2, 3]; when (x_i, y_i) lies in a semi-elliptical area centered at the coordinate point (0, a) with a major axis of b meters and a minor axis of 2a meters, the robot adopts a stopping strategy; when (x_i, y_i) lies in a circular area of radius c meters centered on the robot, excluding the above areas, with c ∈ [5, 8], the robot advances at a low speed of f/4 m/s;

when no obstacle exists in the detection area, the robot advances at f/2 m/s; when the robot has been stopped in the elliptical area for longer than 2 s, it detours: it first judges from the direction vector of the obstacle whether a right turn can get around; if the right detour cannot pass, it turns left; each steering is performed at an angular speed of 0.017 rad/s for 1 s.
CN202310420891.5A 2023-04-19 2023-04-19 Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar Pending CN116576857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310420891.5A CN116576857A (en) 2023-04-19 2023-04-19 Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar


Publications (1)

Publication Number Publication Date
CN116576857A true CN116576857A (en) 2023-08-11

Family

ID=87533130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310420891.5A Pending CN116576857A (en) 2023-04-19 2023-04-19 Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar

Country Status (1)

Country Link
CN (1) CN116576857A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117475115A (en) * 2023-11-11 2024-01-30 华中师范大学 Path guiding system in virtual-real fusion environment and working method thereof
CN117226810A (en) * 2023-11-13 2023-12-15 华侨大学 Rope load parallel robot and obstacle avoidance method, device and storage medium thereof
CN117226810B (en) * 2023-11-13 2024-02-02 华侨大学 Rope load parallel robot and obstacle avoidance method, device and storage medium thereof
CN117406754A (en) * 2023-12-01 2024-01-16 湖北迈睿达供应链股份有限公司 Logistics robot environment sensing and obstacle avoidance method and system
CN117406754B (en) * 2023-12-01 2024-02-20 湖北迈睿达供应链股份有限公司 Logistics robot environment sensing and obstacle avoidance method and system
CN117557977A (en) * 2023-12-28 2024-02-13 安徽蔚来智驾科技有限公司 Environment perception information acquisition method, readable storage medium and intelligent device
CN117557977B (en) * 2023-12-28 2024-04-30 安徽蔚来智驾科技有限公司 Environment perception information acquisition method, readable storage medium and intelligent device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination