CN113034579A - Dynamic obstacle track prediction method of mobile robot based on laser data - Google Patents


Info

Publication number
CN113034579A
CN113034579A (application CN202110248650.8A; granted as CN113034579B)
Authority
CN
China
Prior art keywords
laser
mobile robot
dynamic
global
obstacle
Prior art date
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Application number
CN202110248650.8A
Other languages
Chinese (zh)
Other versions
CN113034579B (en)
Inventor
林睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Jicui Micro Nano Automation System And Equipment Technology Research Institute Co ltd
Original Assignee
Jiangsu Jicui Micro Nano Automation System And Equipment Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Jicui Micro Nano Automation System And Equipment Technology Research Institute Co ltd
Priority to CN202110248650.8A
Publication of CN113034579A
Application granted
Publication of CN113034579B
Legal status: Active

Classifications

    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G01C 21/00: Navigation; navigational instruments
    • G06F 16/29: Geographical information databases
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/23: Pattern recognition; clustering techniques
    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes
    • G06T 7/11: Image analysis; region-based segmentation


Abstract

The invention provides a dynamic obstacle trajectory prediction method for a mobile robot based on laser data, belonging to the technical field of intelligent control of mobile robots. The mobile robot senses the surrounding working-scene information through a laser sensor, matches the laser data acquired in real time against the stored scene global map, applies a Monte Carlo positioning algorithm to realize accurate global positioning, and performs local path planning with a dynamic window method according to the optimal pose obtained from the global positioning. The method solves the problem of accurately predicting the motion trajectories of surrounding dynamic obstacles while the mobile robot moves in a dynamic complex scene, thereby realizing safer and more effective obstacle avoidance and local path planning, meeting the requirement for accurate and safe autonomous navigation of the mobile robot in complex dynamic scenes, and improving the safety and fluency of the mobile robot's autonomous navigation.

Description

Dynamic obstacle track prediction method of mobile robot based on laser data
Technical Field
The invention belongs to the technical field of intelligent control of mobile robots, and particularly relates to a dynamic obstacle trajectory prediction method of a mobile robot based on laser data.
Background
Mobile robots now serve many industries in modern society, such as factories, hospitals, homes, hotels, exhibition halls, and restaurants, mainly performing tasks such as logistics, transportation, distribution, and guidance. A mobile robot must navigate autonomously in these scenes; its adaptability and safety in dynamic complex environments are important expressions of its intelligence, and effectively, safely, and smoothly bypassing dynamic obstacles is an important function of the mobile robot's local path planning.
When a mobile robot works in a working scene, different obstacles, such as people and other mobile robots, inevitably move in the environment. However, the conventional local path planning algorithm has no prediction function for dynamic obstacles and can only avoid them according to the surrounding scene sensed by the current laser or other sensors. As a result, when performing obstacle avoidance actions in the face of a dynamic obstacle, the mobile robot often shakes and drifts unnecessarily, and may even collide because the safe distance to the dynamic obstacle is too small. Such occurrences greatly reduce the operating efficiency of the mobile robot and may even cause safety accidents.
Disclosure of Invention
Aiming at the above defects in the prior art, the dynamic obstacle trajectory prediction method based on laser data for a mobile robot provided by the invention meets the requirement for accurate and safe autonomous navigation in complex dynamic scenes and improves the safety and movement fluency of the mobile robot's autonomous navigation.
In order to achieve the above purpose, the invention adopts the technical scheme that:
the scheme provides a dynamic barrier track prediction method of a mobile robot based on laser data, which comprises the following steps:
s1, sensing surrounding environment information in real time by using a laser sensor, matching laser data of a current time frame with a stored scene global map, and acquiring the optimal global pose estimation of the current time frame by using a Monte Carlo positioning algorithm;
s2, distinguishing dynamic obstacles in the surrounding environment according to the optimal global pose estimation of the current time frame and a scene global map;
s3, carrying out segmentation and clustering on the laser data of the dynamic obstacle to obtain polygon cluster description of the dynamic obstacle;
s4, calculating according to the current time frame optimal global pose estimation and the dynamic obstacle polygon cluster information of the mobile robot to obtain the position and size description of the dynamic obstacle in the global map;
s5, analyzing the position and size description of the dynamic barrier in the global map of the subsequent adjacent time frame, and predicting the motion track of the dynamic barrier by using a Kalman filtering algorithm;
and S6, according to the predicted dynamic barrier motion track, performing path planning by using a dynamic window method to obtain a motion track bypassing the dynamic barrier, and completing dynamic barrier track prediction of the mobile robot based on laser data.
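Step S6 can be illustrated with a minimal dynamic-window-style velocity search: forward-simulate a set of candidate velocity commands and keep the one that approaches the goal while clearing the predicted obstacle positions. This is a hedged sketch, not the patent's implementation; the function name, the sampled velocity sets, and the single goal-distance cost are illustrative assumptions.

```python
import math

# Hedged sketch of step S6: a dynamic-window-style velocity search. The
# function name, candidate sets, and goal-distance cost are illustrative
# assumptions, not the patent's implementation.
def dwa_choose(pose, goal, obstacles, v_opts, w_opts, dt=0.5, r_safe=0.4):
    """Pick (v, w) whose one-step forward simulation approaches the goal
    while keeping at least r_safe from every predicted obstacle centre."""
    x, y, th = pose
    best, best_cost = None, float("inf")
    for v in v_opts:
        for w in w_opts:
            nth = th + w * dt                      # simulated heading
            nx = x + v * math.cos(nth) * dt        # simulated position
            ny = y + v * math.sin(nth) * dt
            # Discard commands that violate the safety margin.
            if any(math.hypot(nx - ox, ny - oy) < r_safe for ox, oy in obstacles):
                continue
            cost = math.hypot(goal[0] - nx, goal[1] - ny)
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```

In a full dynamic window method the candidate set would be bounded by the robot's velocity and acceleration limits, and the cost would also weight heading and clearance; the single distance cost keeps the sketch short.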
The invention has the beneficial effects that: the mobile robot senses the surrounding working-scene information through the laser sensor; based on matching the laser data acquired in real time against the stored scene global map, accurate global positioning is realized by applying a Monte Carlo positioning algorithm, and local path planning is carried out with a dynamic window method according to the optimal pose estimate obtained from the global positioning, so that the mobile robot can safely and smoothly avoid temporary dynamic obstacles encountered while advancing to the final target position. Through this design, the problem of accurately predicting the motion trajectories of surrounding dynamic obstacles while the mobile robot moves in a dynamic complex scene is solved, so that safer and more effective obstacle avoidance and local path planning are realized, and the safety and fluency of the mobile robot's autonomous navigation are improved.
Further, the step S1 includes the following steps:
s101, installing a forward 2D laser sensor above the mobile robot, scanning surrounding environment information by using the laser sensor, and acquiring laser data { L | gamma } of the current time framek,nN1.. N }, wherein,n denotes the number of the laser spots of the k-th frame, N denotes the total number of the laser spots of the k-th frame, γk,nRepresenting the polar coordinate distance corresponding to the laser point of the current time frame, and L representing the laser data set;
s102, laser data of the current time frame and a stored scene global map are compared
Figure BDA0002965059500000032
Matching is carried out, and the best global pose estimation of the current time frame is obtained by utilizing a Monte Carlo positioning algorithm, wherein omega isX,YRepresenting grid points [ X, Y ] in a 2D global map of a scene]M represents a two-dimensional map grid point set, and the grid points are discrete square grids with physical width Δ w, Θ represents a scene global map.
The beneficial effects of the further scheme are as follows: the mobile robot determines the time frame global optimum pose estimation of the mobile robot relative to a given scene map by matching laser data with a stored scene global map by applying a Monte Carlo global positioning algorithm, thereby realizing accurate global positioning.
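The Monte Carlo positioning cycle of step S1 can be sketched as a particle filter over pose hypotheses P = [p q θ]^T: predict with odometry, weight by scan-to-map agreement, resample, and report the highest-weight particle as the best global pose estimate. All names and the `likelihood` interface are illustrative assumptions, not the patent's code.

```python
import random

# Hedged sketch of the Monte Carlo localization in step S1. Each particle
# is a pose hypothesis (p, q, theta); likelihood(pose, scan) stands in for
# the scan-to-map matching score against the stored global grid map.
def mcl_step(particles, odom, scan, likelihood, noise=0.05):
    """One predict / weight / resample cycle; returns the resampled set
    and the highest-weight pose as the best global pose estimate P_k."""
    dp, dq, dth = odom
    # Predict: apply the odometry increment with additive Gaussian noise.
    moved = [(p + dp + random.gauss(0, noise),
              q + dq + random.gauss(0, noise),
              th + dth + random.gauss(0, noise))
             for p, q, th in particles]
    # Weight: score each hypothesis by how well the scan matches the map.
    weights = [likelihood(pose, scan) for pose in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample proportionally to weight.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    best = moved[max(range(len(weights)), key=weights.__getitem__)]
    return resampled, best
```

In practice the likelihood would ray-cast each beam into the grid map; here it is left as a caller-supplied function so the cycle itself stays visible.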
Still further, the expression of the optimal global pose estimation of the current time frame in step S102 is as follows:

P_k = [p_k  q_k  θ_k]^T

wherein P_k represents the best global pose estimate, p_k and q_k represent the coordinates of the mobile robot in the global coordinate system in the k-th frame, and θ_k represents the heading angle.
The beneficial effects of the further scheme are as follows: accurate global positioning is the basis and precondition for the mobile robot to realize autonomous navigation such as working scene path planning.
Still further, the step S2 includes the following steps:
S201, projecting the laser data of the current time frame into the scene global map

Θ = {Ω_{X,Y} | [X, Y] ∈ M}

wherein Ω_{X,Y} represents the value of grid point [X, Y] in the 2D global map of the scene, and M represents the two-dimensional map grid point set, the grid points being discrete square grids with physical width Δw;
S202, carrying out feature matching between the optimal global pose estimation of the current time frame and the scene global map, removing the laser data points [X_k, Y_k] ∈ M that match the grid points Ω_{X_k,Y_k} = 1 of the scene global map, and thereby distinguishing the dynamic obstacles in the surrounding environment, the expressions being as follows:

x_{k,n} = γ_{k,n} cos θ_{k,n},  y_{k,n} = γ_{k,n} sin θ_{k,n}
X_k = round((p_k + (x_{k,n} + x_FL) cos θ_k − y_{k,n} sin θ_k) / Δw)
Y_k = round((q_k + (x_{k,n} + x_FL) sin θ_k + y_{k,n} cos θ_k) / Δw)

wherein γ_{k,n} represents the polar coordinate distance corresponding to the laser point at the current time frame, θ_{k,n} represents the corresponding polar angle, [x_{k,n} y_{k,n}]^T represents the coordinates of the n-th laser discrete point of the current k-th frame laser data in the mobile robot coordinate system, p_k and q_k represent the coordinates of the mobile robot in the global coordinate system in the k-th frame, θ_k represents the heading angle, x_FL represents the forward mounting offset of the laser sensor on the mobile robot, i.e. the forward offset of the laser sensor polar coordinate system relative to the rotation center coordinate system of the mobile robot, M represents the two-dimensional map grid point set, and X_k, Y_k represent the grid serial numbers corresponding to the projection of the current k-th frame laser data points in the global map.
The beneficial effects of this further scheme are as follows: the raw laser data points are effectively segmented so as to distinguish the laser data points belonging to dynamic obstacles.
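A minimal sketch of step S2: project each laser return into the global grid using the pose P_k and the forward sensor offset x_FL, and keep only the returns whose grid cell is not a fixed landmark (map value 1). The dictionary-based grid and the rounding-based discretization with cell width Δw are simplifying assumptions of this sketch.

```python
import math

# Hedged sketch of step S2: project each laser return into the global grid
# with pose P_k = (p, q, theta) and forward sensor offset x_fl, then keep
# returns whose cell is not a fixed landmark (map value 1). The dict-based
# grid and round()-based discretisation (cell width dw) are assumptions.
def dynamic_points(scan, pose, grid, dw=0.05, x_fl=0.2):
    """Return robot-frame points that do not match the stored map."""
    p, q, theta = pose
    dyn = []
    for gamma, phi in scan:                       # (range, bearing) per beam
        x = gamma * math.cos(phi) + x_fl          # endpoint in robot frame
        y = gamma * math.sin(phi)
        gx = p + x * math.cos(theta) - y * math.sin(theta)   # global frame
        gy = q + x * math.sin(theta) + y * math.cos(theta)
        cell = (int(round(gx / dw)), int(round(gy / dw)))    # grid index
        if grid.get(cell, 0) != 1:                # 1 marks a fixed obstacle
            dyn.append((x, y))
    return dyn
```

The surviving points are the candidate dynamic-obstacle points handed to the segmentation of step S3.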
Still further, the step S3 includes the following steps:
S301, judging whether the distance between two adjacent laser points in the dynamic obstacle laser data is larger than a preset threshold; if so, entering step S302, otherwise judging the next laser point and continuing step S301;
S302, according to the judgment result, characterizing the obstacle regions {Ω | O_d, d = 1, 2, …, Z_k} detected by the laser sensor with polygons to obtain the polygon cluster description of the dynamic obstacles, and entering step S4, wherein each laser clustering region R_j corresponds to one obstacle region

O_{d,k} = {(x_c^{d,k}, y_c^{d,k}), (x_s^{d,k}, y_s^{d,k}), (x_e^{d,k}, y_e^{d,k})}

wherein (x_c^{d,k}, y_c^{d,k}) represents the center coordinates of the obstacle region O_{d,k}, (x_s^{d,k}, y_s^{d,k}) represents the polygon starting vertex of the obstacle region O_{d,k}, and (x_e^{d,k}, y_e^{d,k}) represents the polygon end vertex of the obstacle region O_{d,k}.
The beneficial effects of this further scheme are as follows: a polygon cluster is selected to describe the dynamic obstacle, representing it with the simplest effective quantities, such as its size, starting point, and end point.
Still further, the condition for judging in step S301 whether the distance between two adjacent laser points in the dynamic obstacle laser data is greater than the preset threshold is as follows:

R_k = {L | l_n = (γ_n, θ_n), n ∈ [0, N]}
√((x_{FL,i+1} − x_{FL,i})² + (y_{FL,i+1} − y_{FL,i})²) > D_th
Ω_{X,Y} ≠ 1, [X, Y] ∈ M

wherein R_k represents the laser data set at the current time frame, L represents the laser data set, l_n represents the polar coordinate expression of a laser data point, γ_n represents the polar coordinate distance corresponding to laser data point n, θ_n represents the polar angle corresponding to the laser data point, x_{FL,i}, x_{FL,i+1}, y_{FL,i} and y_{FL,i+1} represent the coordinates of laser data points i and i+1 in the mobile robot coordinate system, D_th represents the preset threshold, (x_{s_i}^G, y_{s_i}^G) and (x_{e_i}^G, y_{e_i}^G) respectively represent the cluster start and end coordinates in the global coordinate system, Ω_{X,Y} ≠ 1 indicates a point not belonging to an obstacle region of the global map, and s_i and e_i respectively represent the laser point index subscripts corresponding to the laser data point cluster.
The beneficial effects of this further scheme are as follows: the laser data points belonging to dynamic obstacles are effectively divided, each single discrete dynamic obstacle is distinguished, and the region feature description is performed in the next step.
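The Euclidean-distance region segmentation of step S301 can be sketched as a single pass over the ordered scan: a new cluster starts whenever the gap between consecutive points exceeds the threshold D_th (here the caller-chosen `d_th`, e.g. a safe distance derived from the robot width).

```python
import math

# Hedged sketch of the Euclidean region segmentation of step S301: walk the
# ordered scan once and open a new cluster whenever the gap between two
# consecutive points exceeds the threshold d_th (D_th in the patent).
def segment_scan(points, d_th):
    """Split ordered (x, y) points into clusters at gaps larger than d_th."""
    clusters, current = [], []
    for pt in points:
        if current and math.dist(current[-1], pt) > d_th:
            clusters.append(current)              # gap found: close cluster
            current = []
        current.append(pt)
    if current:
        clusters.append(current)
    return clusters
```

Because the scan is ordered by bearing, one linear pass suffices; no pairwise distance matrix is needed.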
Still further, the expressions for the polygon cluster description of the dynamic obstacle in step S302 are as follows:

x_c^{d,k} = (1 / (w_i − z_i + 1)) Σ_{k=z_i}^{w_i} x_k
y_c^{d,k} = (1 / (w_i − z_i + 1)) Σ_{k=z_i}^{w_i} y_k

wherein (x_c^{d,k}, y_c^{d,k}) represents the coordinates of the center point of the obstacle region O_{d,k} in the mobile robot coordinate system, x_k and y_k represent the coordinates, in the mobile robot coordinate system, of the laser points belonging to the obstacle region O_{d,k}, z_i represents the starting laser point number corresponding to the obstacle region O_{d,k}, w_i represents the end laser point number corresponding to the obstacle region O_{d,k}, and w_i − z_i + 1 represents the number of laser points composing the obstacle region O_{d,k}.
The beneficial effects of this further scheme are as follows: the most important description information of the dynamic obstacle, such as its size and dimensions, is calculated in preparation for the subsequent coordinate conversion.
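A sketch of the polygon cluster description of step S302: for one cluster of w_i − z_i + 1 points, the centroid gives the center (x_c, y_c) and the first and last points give the polygon start and end vertices. The dictionary return shape is an illustrative choice, not the patent's data structure.

```python
# Hedged sketch of the polygon cluster description of step S302: for one
# cluster of w_i - z_i + 1 points, the centroid gives the centre
# (x_c, y_c); the first and last points give the polygon start and end
# vertices of the obstacle region O_{d,k}.
def describe_cluster(cluster):
    """Centre, start vertex, and end vertex of one obstacle region."""
    n = len(cluster)
    xc = sum(x for x, _ in cluster) / n           # mean over the n points
    yc = sum(y for _, y in cluster) / n
    return {"center": (xc, yc), "start": cluster[0], "end": cluster[-1]}
```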
Still further, the expressions for the description of the position and size of the dynamic obstacle in the global map in step S4 are as follows:

l = √((x_c^{d,k})² + (y_c^{d,k})²)
α = arctan(y_c^{d,k} / x_c^{d,k})
X = p_k + x_FL cos θ_k − l cos(α + θ_k + π/4)
Y = q_k + x_FL sin θ_k − l sin(α + θ_k + π/4)

wherein l and α respectively represent the distance and angle of the obstacle region O_{d,k} relative to the center in the mobile robot coordinate system, [X Y]^T represents the coordinates of the relevant point of the obstacle region O_{d,k} in the global coordinate system, p_k and q_k represent the coordinates of the mobile robot in the global coordinate system in the k-th frame, θ_k represents the heading angle, x_FL represents the forward mounting offset of the laser sensor on the mobile robot, i.e. the forward offset of the laser sensor polar coordinate system relative to the rotation center coordinate system of the mobile robot, and (x_c^{d,k}, y_c^{d,k}) represents the center coordinates of the obstacle region O_{d,k}.
The beneficial effects of this further scheme are as follows: the description information of the dynamic obstacle, such as its position and size, is projected from the laser sensor coordinate system to the global map coordinate system in preparation for predicting the motion trajectory of the dynamic obstacle in the next step.
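The step-S4 conversion can be sketched directly from its formulas: compute the distance l and bearing α of the cluster center in the robot frame, then apply the global transform as written in the patent, including its π/4 phase term. The `x_fl` default is an illustrative sensor offset.

```python
import math

# Hedged sketch of step S4, coded directly from its formulas as written in
# the patent (including the pi/4 phase term). The x_fl default is an
# illustrative sensor offset, not a value from the patent.
def to_global(xc, yc, pose, x_fl=0.2):
    """Map a robot-frame obstacle point (xc, yc) into global coordinates."""
    p, q, theta = pose
    l = math.hypot(xc, yc)            # distance l in the robot frame
    alpha = math.atan2(yc, xc)        # angle alpha in the robot frame
    X = p + x_fl * math.cos(theta) - l * math.cos(alpha + theta + math.pi / 4)
    Y = q + x_fl * math.sin(theta) - l * math.sin(alpha + theta + math.pi / 4)
    return X, Y
```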
Still further, the expression for predicting the motion trajectory of the dynamic obstacle in step S5 is as follows:

E_{d,k+1} = {(x_c^{k+1}, y_c^{k+1}), (x_s^{k+1}, y_s^{k+1}), (x_e^{k+1}, y_e^{k+1})}

wherein E_{d,k+1} represents the predicted dynamic obstacle motion trajectory, (x_c^{k+1}, y_c^{k+1}) represents the coordinates in the global coordinate system, at the next time frame k+1, of the center point of the obstacle region O_{d,k} as calculated from the current time frame k, (x_s^{k+1}, y_s^{k+1}) represents the corresponding coordinates of the starting point of the obstacle region O_{d,k}, and (x_e^{k+1}, y_e^{k+1}) represents the corresponding coordinates of the end point of the obstacle region O_{d,k}.
The beneficial effects of this further scheme are as follows: the position, size, and other attributes of the dynamic obstacle at the next time frame can be described in detail from the prediction, which facilitates the subsequent local path planning and makes obstacle avoidance safe and effective.
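Step S5's Kalman prediction can be sketched with a constant-velocity model for one tracked point of E_{d,k+1}; the same predict step applies to the center, start, and end points. The state layout [x, y, vx, vy], the time step, and the process noise q are illustrative assumptions, and the correction step with the measured cluster position is omitted.

```python
# Hedged sketch of the Kalman prediction in step S5, using a constant-
# velocity model for one tracked point of E_{d,k+1}. State s = [x, y, vx,
# vy]; dt and the process noise q are illustrative values; the correction
# step with the measured cluster position is omitted.
def kf_predict(state, cov, dt=0.1, q=0.01):
    """One predict step: s' = F s, P' = F P F^T + q I."""
    x, y, vx, vy = state
    pred = [x + vx * dt, y + vy * dt, vx, vy]     # F s for this model
    n = 4
    F = [[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]]
    # FP = F P, then new_cov = FP F^T + q I, written out for the 4x4 case.
    FP = [[sum(F[i][k] * cov[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    new_cov = [[sum(FP[i][k] * F[j][k] for k in range(n)) + (q if i == j else 0.0)
                for j in range(n)] for i in range(n)]
    return pred, new_cov
```

Running the predict step once per frame for each obstacle region yields the predicted trajectory consumed by the dynamic window planner in step S6.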
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of the embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
Examples
As shown in fig. 1, the method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to the present invention is implemented as follows:
s1, sensing the surrounding environment information in real time by using a laser sensor, matching the laser data of the current time frame with the stored scene global map, and obtaining the optimal global position and pose estimation of the current time frame by using a Monte Carlo positioning algorithm, wherein the implementation method comprises the following steps:
s101, installing a forward 2D laser sensor above the mobile robot, scanning surrounding environment information by using the laser sensor, and acquiring laser data { L | gamma } of the current time framek,nN1.. N }, where N denotes the sequence number of the k-th frame laser spot, N denotes the total number of the k-th frame laser spots, and γ denotes the total number of the k-th frame laser spotsk,nRepresenting the polar coordinate distance corresponding to the laser point of the current time frame, and L representing the laser data set;
s102, laser data of the current time frame and a stored scene global map are compared
Figure BDA0002965059500000081
Matching is carried out, and the best global pose estimation of the current time frame is obtained by utilizing a Monte Carlo positioning algorithm, wherein omega isX,YRepresenting grid points [ X, Y ] in a 2D global map of a scene]Value of (A)M represents a two-dimensional map grid point set, and grid points are discrete square grids with physical width Δ w, and Θ represents a scene global map.
In the embodiment, the mobile robot autonomously navigates in a dynamic complex scene, senses the surrounding environment information in real time through the laser sensor, and matches the stored scene global map through the laser data to realize accurate global positioning. A forward 2D laser sensor is arranged above the mobile robot, and the laser sensor scans surrounding working scenes in real time to acquire laser data which is characterized by being { L | Gammak,nN1.. N } is laser point data of k frames; n is the serial number of the k frame laser points; n is the total number of k frames of laser points; gamma rayk,nIs the corresponding polar coordinate distance; but the scene global map is already stored
Figure BDA0002965059500000082
Middle omegaX,YRepresenting grid points [ X, Y ] in a 2D global map of a scene]M is a set of two-dimensional map grid points, which are typically discrete square grids of physical width Δ w. The mobile robot matches the stored scene global map through the laser data by applying a Monte Carlo global positioning algorithm, namely, the time frame global optimum pose estimation of the mobile robot relative to the given scene map is determined, so that the accurate global positioning is realized, and the current time k frame global pose information is represented as Pk=[pk qk θk]TWherein p iskAnd pkRepresents the coordinates of the k-frame mobile robot in the global coordinate system, thetakIndicating the heading angle. Accurate global positioning is the basis and precondition for the mobile robot to realize autonomous navigation such as working scene path planning.
S2, distinguishing dynamic obstacles in the surrounding environment according to the optimal global pose estimation of the current time frame and a scene global map;
in this embodiment, the detection data of the laser sensor mainly reflects the surface morphology of the observed target object, is discontinuous 2D sampling, and belongs to non-dense surface discrete points. Mobile robot using laser sensor in real timeSensing surrounding environment, obtaining distance and angle information including obstacles relative to the mobile robot, and obtaining 2D point data which is fan-shaped discrete, namely { L | gammak,nN is 1.. N } laser point data of k frames, and N has a fixed correspondence with an angle in consideration of the fact that the laser point data is discrete and the angular resolution is fixed.
In this embodiment, the scene global map is used to represent the working scene environment information of the mobile robot, and the positioning, path planning, and the like of the mobile robot all need to rely on the scene global map. Considering that a 2D grid map is constructed by a general laser sensor, that is, a grid set in which fixed road signs (obstacle regions), travelable regions (obstacle-free regions), and the like in a scene are scattered into a certain size is projected onto a laser scanning plane. Generally, 1 represents an obstacle, 0 represents no obstacle, and considering the effective scanning range of the laser, the undetermined area is characterized by-1, and the expression is:
Figure BDA0002965059500000091
wherein the content of the first and second substances,
Figure BDA0002965059500000092
representing grid points in a two-dimensional global grid map X Y]M is a set of two-dimensional map grid points, which are typically discrete square grids of physical width Δ w.
In this embodiment, considering that the data points obtained by the laser sensor are not all data points of dynamic obstacles, feature matching is performed between the current k-th frame global optimal pose estimate P_k = [p_k q_k θ_k]^T of the robot and the global map Θ = {Ω_{X,Y} | [X, Y] ∈ M}, and the laser data points [X_k, Y_k] ∈ M that match the grid point set of the global map are removed; the remaining points are the laser discrete points of the temporary dynamic obstacles detected by the current k-th frame laser, so that the related dynamic obstacles in the environment are distinguished. The related formulas are:

x_{k,n} = γ_{k,n} cos θ_{k,n},  y_{k,n} = γ_{k,n} sin θ_{k,n}
X_k = round((p_k + (x_{k,n} + x_FL) cos θ_k − y_{k,n} sin θ_k) / Δw)
Y_k = round((q_k + (x_{k,n} + x_FL) sin θ_k + y_{k,n} cos θ_k) / Δw)

wherein P_k = [p_k q_k θ_k]^T is the global optimal pose estimate of the current k-th frame of the mobile robot, C_m = [x_FL 0 0]^T is the mounting coordinate of the laser sensor in the mobile robot coordinate system, and [x_{k,n} y_{k,n}]^T represents the coordinates of the n-th laser discrete point of the current k-th frame laser data in the mobile robot coordinate system.
S3, carrying out segmentation and clustering on the laser data of the dynamic obstacles to obtain the polygon cluster description of the dynamic obstacles, the implementation being as follows:
S301, judging whether the distance between two adjacent laser points in the dynamic obstacle laser data is larger than a preset threshold; if so, entering step S302, otherwise judging the next laser point and continuing step S301;
S302, according to the judgment result, characterizing the obstacle regions {Ω | O_d, d = 1, 2, …, Z_k} detected by the laser sensor with polygons to obtain the polygon cluster description of the dynamic obstacles, wherein each laser clustering region R_j corresponds to one obstacle region

O_{d,k} = {(x_c^{d,k}, y_c^{d,k}), (x_s^{d,k}, y_s^{d,k}), (x_e^{d,k}, y_e^{d,k})}

wherein (x_c^{d,k}, y_c^{d,k}) represents the center coordinates of the obstacle region O_{d,k}, (x_s^{d,k}, y_s^{d,k}) represents the polygon starting vertex of the obstacle region O_{d,k}, and (x_e^{d,k}, y_e^{d,k}) represents the polygon end vertex of the obstacle region O_{d,k}.
In this embodiment, the laser data point segmentation and clustering is to preprocess the original laser data points, remove the fixed landmark points in the global grid map, and obtain descriptions such as the position and size of the dynamic obstacle cluster.
In this embodiment, the segmentation and clustering of the laser data points divides the data belonging to the same sub-surface type into the same group by a certain classification method, according to the differences between the sub-surface types of the outer surface of the object. The core problem of detection is that if the original laser obstacle points cannot be accurately clustered, the same obstacle may be split into multiple classes, which affects the effectiveness of obstacle detection. Accurately matching the same obstacle at different moments is a precondition for detecting the state of a dynamic obstacle, so the clustering determines the effectiveness and rapidity of detection of the position, size, and other attributes of the obstacle. Taking into account the nature of the dynamic obstacles in the actual working scene of the mobile robot, the method segments the data by a region segmentation method according to the Euclidean distance; the principle is to use the distance between two adjacent laser scanning points as the criterion. When the distance between the two laser points exceeds a certain preset threshold, the region is divided; if it does not exceed the preset threshold, the next data point is judged. Assume that the laser clusters in front of the robot are O_{d,k}, d = 1, 2, …, Z_k. The coordinates of 2 adjacent laser points in the robot coordinate system are respectively (x_{FL,i}, y_{FL,i}) and (x_{FL,i+1}, y_{FL,i+1}), and their coordinates in the global coordinate system are respectively (x_i^G, y_i^G) and (x_{i+1}^G, y_{i+1}^G). Then:

R_k = {L | l_n = (γ_n, θ_n), n ∈ [0, N]}
√((x_{FL,i+1} − x_{FL,i})² + (y_{FL,i+1} − y_{FL,i})²) > D_th

and Ω_{X,Y} ≠ 1, [X, Y] ∈ M, wherein the threshold D_th is a safe distance according to the width of the mobile robot. Then O_{d,k} is composed of the laser data points from (x_{s_i}, y_{s_i}) to (x_{e_i}, y_{e_i}), where s_i and e_i respectively represent the laser point index subscripts corresponding to the laser data point cluster O_{d,k}; Ω_{X,Y} ≠ 1 indicates points not belonging to an obstacle area in the global map, i.e. not belonging to a fixed landmark, which may be considered a temporary dynamic obstacle.
In this embodiment, the dynamic obstacles in the working scene of the mobile robot are mainly pedestrians and other mobile robots, and the laser only acquires part of the outer contour curve of each obstacle. To better characterize these discrete contour sampling points, polygon shapes are used to represent the obstacle regions {Ω | O_d,k, d = 1, 2, ..., Z_k} detected by the laser, wherein each laser cluster region R_j corresponds to one obstacle region O_d,k, and Z_k represents the number of obstacles observed in the current frame k. Each obstacle region is described as
O_d,k = ((x̄_d,k, ȳ_d,k), v_d,k^s, v_d,k^e)
wherein (x̄_d,k, ȳ_d,k) represents the center coordinates of the obstacle region O_d,k, v_d,k^s represents the starting vertex of the polygon of the obstacle region O_d,k, and v_d,k^e represents the ending vertex of the polygon. Then:
x̄_d,k = (1/(w_i − z_i + 1)) · Σ_{n=z_i}^{w_i} x_n
ȳ_d,k = (1/(w_i − z_i + 1)) · Σ_{n=z_i}^{w_i} y_n
wherein (x̄_d,k, ȳ_d,k) represents the coordinates of the center point of the obstacle region O_d,k in the mobile robot coordinate system, x_n and y_n represent the coordinates, in the mobile robot coordinate system, of the laser points belonging to the obstacle region O_d,k, z_i represents the starting number and w_i the ending number of the corresponding laser points, and w_i − z_i + 1 is the number of laser points constituting the polygonal edge region O_d,k.
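The polygon cluster description above (center point plus starting and ending vertices) can be sketched as follows; this is an illustrative helper, with names chosen for clarity rather than taken from the patent:

```python
def polygon_cluster(pts, s_i, e_i):
    """Polygonal description O_{d,k} of one laser cluster: centre
    point plus start/end vertices (illustrative sketch).
    pts: laser points in the robot frame; s_i/e_i: cluster bounds."""
    member = pts[s_i:e_i + 1]
    n = e_i - s_i + 1                   # w_i - z_i + 1 in the text
    cx = sum(p[0] for p in member) / n  # centre x̄_{d,k}
    cy = sum(p[1] for p in member) / n  # centre ȳ_{d,k}
    return {"center": (cx, cy),
            "v_start": member[0],       # starting polygon vertex
            "v_end": member[-1]}        # ending polygon vertex
```

The center is simply the mean of the member points, matching the summation formulas above.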
S4, calculating according to the current time frame optimal global pose estimation and the dynamic obstacle polygon cluster information of the mobile robot to obtain the position and size description of the dynamic obstacle in the global map;
in this embodiment, the polygon description of the dynamic obstacle in the mobile robot coordinate system is converted to the global coordinate system:
l = √(x² + y²)
α = arctan(y / x)
X = p_k + x_FL·cosθ_k − l·cos(α + θ_k + π/4)
Y = q_k + x_FL·sinθ_k − l·sin(α + θ_k + π/4)
wherein l and α respectively represent the distance and angle of the relevant point of the obstacle region O_d,k from the center in the robot coordinate system, (x, y) are the coordinates of that point in the robot coordinate system, [X Y]^T represents the coordinates of the relevant point of the obstacle region O_d,k in the global coordinate system, p_k and q_k represent the coordinates of the mobile robot in the global coordinate system at the k-th frame, θ_k represents the heading angle, and x_FL represents the offset of the laser sensor installed in the forward direction of the robot, namely the forward offset of the polar coordinate system of the laser sensor relative to the rotation center coordinate system of the robot. For simplicity, the point subscripts are omitted in the above expressions.
The description E_d,k of the dynamic obstacle in the robot coordinate system is characterized in the global map by applying this transformation to its center point and to its starting and ending polygon vertices; only the coordinates are transformed, while the outline and the related dimensions remain unchanged.
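For reference, the standard 2D rigid-body transform from the robot frame to the global frame can be sketched as below. Note this is a generic sketch under the usual convention: the patent's own expressions additionally carry a π/4 phase term tied to its vertex convention, which is not reproduced here, and the function name is illustrative:

```python
import math

def robot_to_global(x, y, pose, x_fl=0.0):
    """Transform a point from the mobile-robot frame to the global
    frame (generic 2D rigid-body transform; illustrative sketch).
    pose: (p_k, q_k, theta_k) -- global robot position and heading;
    x_fl: forward offset of the laser polar frame from the robot
    rotation centre."""
    p, q, th = pose
    gx = p + x_fl * math.cos(th) + x * math.cos(th) - y * math.sin(th)
    gy = q + x_fl * math.sin(th) + x * math.sin(th) + y * math.cos(th)
    return gx, gy
```

Applying this to the cluster center and the start/end vertices transforms the coordinates while leaving the polygon outline and dimensions unchanged, as stated above.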
S5, analyzing the position and size description of the dynamic obstacle in the global map over subsequent adjacent time frames, and predicting the motion trail of the dynamic obstacle by using a Kalman filtering algorithm;
in this embodiment, Kalman Filtering (KF) is used for the estimation of state quantities in a discrete-time system. The core of the algorithm is the system state equation and the system measurement equation, which realize optimal linear filtering of random signals in the sense of minimum mean-square estimation error. The position of the dynamic obstacle in the global map of the scene at the next time frame can therefore be predicted by Kalman filtering. In a specific application, the state quantity Φ and the covariance matrix P in the Kalman filter need to be given initial values, and the state noise matrix Q and the measurement noise matrix R need to be determined according to the noise statistical characteristics. The Kalman filtering algorithm yields the position and other track information of the dynamic obstacle at the next frame k + 1:
E_d,k+1 = ((X̄_d,k+1, Ȳ_d,k+1), (X_d,k+1^s, Y_d,k+1^s), (X_d,k+1^e, Y_d,k+1^e))
wherein the three coordinate pairs are the predicted center point, starting vertex and ending vertex of the obstacle region O_d,k in the global coordinate system at frame k + 1.
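One predict/update cycle of such a filter can be sketched as follows. The patent states only that a Kalman filter with state Φ, covariance P and noise matrices Q and R is used; the constant-velocity motion model, the position-only measurement model, and the numeric defaults below are assumptions made for illustration:

```python
import numpy as np

def kf_predict_update(phi, P, z, dt=0.1, q=0.01, r=0.05):
    """One constant-velocity Kalman filter cycle for a 2D obstacle
    centre point (illustrative sketch; model and values assumed).
    phi: state [x, y, vx, vy]; z: measured centre [x, y]."""
    A = np.array([[1, 0, dt, 0],   # state transition: x += vx*dt
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1.0]])
    H = np.array([[1, 0, 0, 0],    # only the position is measured
                  [0, 1, 0, 0.0]])
    Q = q * np.eye(4)              # state noise matrix
    R = r * np.eye(2)              # measurement noise matrix
    # Predict the next-frame state and covariance.
    phi = A @ phi
    P = A @ P @ A.T + Q
    # Update with the measured centre of the matched obstacle region.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    phi = phi + K @ (z - H @ phi)
    P = (np.eye(4) - K @ H) @ P
    return phi, P
```

Running one cycle per laser frame, the predicted position for frame k + 1 is the prediction A @ phi; the same filter would be applied to the polygon start and end vertices.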
and S6, according to the predicted dynamic obstacle motion track, performing path planning by using a dynamic window method to obtain a motion track bypassing the dynamic obstacle, thereby completing the dynamic obstacle track prediction of the mobile robot based on laser data.
In this embodiment, based on the predicted dynamic obstacle motion trajectory and combined with the laser data acquired in the current frame, the mobile robot performs a local path planning algorithm by using the Dynamic Window Approach (DWA) to generate a series of linear velocities v and angular velocities ω. The mobile robot then executes the motion following the linear velocity v and the angular velocity ω, thereby realizing safe and effective movement around the dynamic obstacle.
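A minimal sketch of the Dynamic Window Approach described above is given below: candidate (v, ω) commands are sampled, rolled out against the predicted obstacle positions, and scored. The velocity samples, the clearance threshold and the cost of goal distance plus inverse clearance are assumptions for illustration; a real DWA additionally constrains accelerations and uses the robot footprint:

```python
import math

def dwa_choose(pose, goal, obstacles, v_max=0.5, w_max=1.0,
               dt=0.2, horizon=10):
    """Pick a (v, w) command by forward-simulating sampled commands
    against *predicted* obstacle positions (illustrative DWA sketch).
    pose: (x, y, heading); obstacles: list of predicted (x, y)."""
    best, best_cost = (0.0, 0.0), float("inf")
    for v in (0.1, 0.25, v_max):
        for w in (-w_max, -0.5, 0.0, 0.5, w_max):
            x, y, th = pose
            min_clear = float("inf")
            for _ in range(horizon):        # unicycle rollout
                th += w * dt
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                for ox, oy in obstacles:
                    min_clear = min(min_clear,
                                    math.hypot(x - ox, y - oy))
            if min_clear < 0.3:             # predicted collision
                continue
            goal_cost = math.hypot(goal[0] - x, goal[1] - y)
            cost = goal_cost + 1.0 / min_clear  # progress + clearance
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```

Because the rollout is checked against the obstacle positions predicted by the Kalman filter rather than only the current scan, trajectories that would intersect a moving obstacle are rejected before execution.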
The mobile robot senses the surrounding working scene information through the laser sensor. Based on the laser data acquired in real time, the robot matches against the stored scene global map and realizes accurate global positioning by applying a Monte Carlo positioning algorithm. The real-time laser data are then subjected to segmentation and clustering processing in the mobile robot coordinate system, and the polygon cluster description of each dynamic obstacle is calculated; the Kalman filtering algorithm is applied to analyze and predict the motion track of each dynamic obstacle, such as its coordinates and size, in the next time frame of the global map. According to these motion tracks, the mobile robot carries out local path planning by applying the dynamic window method, safely and smoothly avoiding the temporary dynamic obstacles it encounters while moving towards the final target position, thereby realizing safe and effective obstacle avoidance. The invention can effectively predict the track of a dynamic obstacle, and avoids problems such as head shaking and tail drifting of the mobile robot when it encounters a dynamic obstacle during autonomous navigation in a dynamic complex scene, or even collision caused by too small a safe distance from the dynamic obstacle. The method solves the problem of accurately predicting the motion tracks of the surrounding dynamic obstacles when the mobile robot moves in a dynamic complex scene, thereby realizing safer and more effective obstacle avoidance and local path planning, meeting the requirement of accurate and safe autonomous navigation of the mobile robot in a complex dynamic scene, and improving the safety and fluency of the autonomous navigation of the mobile robot.

Claims (9)

1. A dynamic obstacle track prediction method of a mobile robot based on laser data, characterized by comprising the following steps:
s1, sensing surrounding environment information in real time by using a laser sensor, matching the laser data of the current time frame with a stored scene global map, and acquiring the optimal global pose estimation of the current time frame by using a Monte Carlo positioning algorithm;
s2, distinguishing dynamic obstacles in the surrounding environment according to the optimal global pose estimation of the current time frame and the scene global map;
s3, carrying out segmentation and clustering on the laser data of the dynamic obstacle to obtain a polygon cluster description of the dynamic obstacle;
s4, calculating according to the optimal global pose estimation of the current time frame of the mobile robot and the dynamic obstacle polygon cluster information to obtain the position and size description of the dynamic obstacle in the global map;
s5, analyzing the position and size description of the dynamic obstacle in the global map over subsequent adjacent time frames, and predicting the motion trail of the dynamic obstacle by using a Kalman filtering algorithm;
and S6, according to the predicted dynamic obstacle motion track, performing path planning by using a dynamic window method to obtain a motion track bypassing the dynamic obstacle, thereby completing the dynamic obstacle track prediction of the mobile robot based on laser data.
2. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 1, wherein said step S1 comprises the steps of:
s101, installing a forward 2D laser sensor above the mobile robot, scanning surrounding environment information by using the laser sensor, and acquiring laser data {L | γ_k,n, n = 1, ..., N} of the current time frame, where n denotes the sequence number of a laser point of the k-th frame, N denotes the total number of laser points of the k-th frame, γ_k,n represents the polar coordinate distance corresponding to the laser point of the current time frame, and L represents the laser data set;
s102, matching the laser data of the current time frame with the stored scene global map Θ = {Ω_X,Y | [X, Y] ∈ M}, and obtaining the optimal global pose estimation of the current time frame by using a Monte Carlo positioning algorithm, wherein Ω_X,Y represents a grid point [X, Y] in the 2D global map of the scene, M represents the two-dimensional map grid point set, the grid points are discrete square grids with physical width Δw, and Θ represents the scene global map.
3. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 2, wherein the expression of the optimal global pose estimation of the current time frame in step S102 is as follows:
P_k = [p_k q_k θ_k]^T
wherein P_k represents the optimal global pose estimation, p_k and q_k represent the coordinates of the mobile robot in the global coordinate system at the k-th frame, and θ_k represents the heading angle.
4. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 1, wherein said step S2 comprises the steps of:
s201, projecting the laser data points of the current time frame into the scene global map:
X_k = ⌊(p_k + x_k,n·cosθ_k − y_k,n·sinθ_k)/Δw⌋
Y_k = ⌊(q_k + x_k,n·sinθ_k + y_k,n·cosθ_k)/Δw⌋
wherein Ω_X,Y represents a grid point [X, Y] in the 2D global map of the scene, M represents the two-dimensional map grid point set, and the grid points are discrete square grids with a physical width Δw;
s202, carrying out feature matching between the optimal global pose estimation of the current time frame and the scene global map, and removing the laser data points whose projections match occupied grids Ω_X_k,Y_k of the scene global map, [X_k, Y_k] ∈ M, so as to distinguish the dynamic obstacles in the surrounding environment, the expression being as follows:
x_k,n = γ_k,n·cosθ_k,n + x_FL
y_k,n = γ_k,n·sinθ_k,n
wherein γ_k,n represents the polar distance corresponding to the laser point at the current time frame, θ_k,n represents the corresponding polar angle, [x_k,n y_k,n]^T represents the coordinates of the n-th laser discrete point of the current k-th frame of laser data in the mobile robot coordinate system, p_k and q_k represent the coordinates of the mobile robot in the global coordinate system at the k-th frame, θ_k represents the heading angle, x_FL represents the offset of the laser sensor installed in the forward direction of the mobile robot, namely the forward offset of the polar coordinate system of the laser sensor relative to the rotation center coordinate system of the mobile robot, M represents the two-dimensional map grid point set, and X_k, Y_k represent the grid serial numbers corresponding to the projection of the laser data points of the current k-th frame in the global map.
5. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 1, wherein said step S3 comprises the steps of:
s301, judging whether the distance between two adjacent laser points in the dynamic obstacle laser data is larger than a preset threshold value; if so, entering step S302, otherwise judging the next laser point and continuing step S301;
s302, according to the judgment result, characterizing the obstacle regions {Ω | O_d,k, d = 1, 2, ..., Z_k} detected by the laser sensor with polygons to obtain the polygon cluster description of the dynamic obstacle, and entering step S4, wherein each laser cluster region R_j corresponds to one obstacle region
O_d,k = ((x̄_d,k, ȳ_d,k), v_d,k^s, v_d,k^e)
wherein (x̄_d,k, ȳ_d,k) represents the center coordinates of the obstacle region O_d,k, v_d,k^s represents the starting vertex of the polygon of the obstacle region O_d,k, and v_d,k^e represents the ending vertex of the polygon of the obstacle region O_d,k.
6. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 5, wherein the condition for judging in step S301 whether the distance between two adjacent laser points in the dynamic obstacle laser data is greater than the preset threshold is as follows:
R_k = {L | l_n = (γ_n, θ_n), n ∈ [0, N]}
x_FL,i = γ_i·cosθ_i + x_FL,  y_FL,i = γ_i·sinθ_i
√((x_FL,i+1 − x_FL,i)² + (y_FL,i+1 − y_FL,i)²) > D_th
and the corresponding global coordinates do not belong to an obstacle region of the global map,
wherein R_k represents the laser data set at the current time frame, L represents the laser data set, l_n represents the polar coordinate expression of a laser data point, γ_n represents the polar coordinate distance corresponding to laser data point n, θ_n represents the corresponding polar angle, x_FL,i, x_FL,i+1, y_FL,i and y_FL,i+1 represent the coordinates of the laser data points i and i + 1 in the mobile robot coordinate system, D_th represents the preset threshold value, [X_i, Y_i] and [X_i+1, Y_i+1] respectively represent the corresponding coordinates in the global coordinate system, and s_i and e_i respectively represent the laser point index subscripts corresponding to a laser data point cluster.
7. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 5, wherein the expression of the polygon cluster description of the dynamic obstacle in step S302 is as follows:
x̄_d,k = (1/(w_i − z_i + 1)) · Σ_{n=z_i}^{w_i} x_n
ȳ_d,k = (1/(w_i − z_i + 1)) · Σ_{n=z_i}^{w_i} y_n
wherein (x̄_d,k, ȳ_d,k) represents the coordinates of the center point of the obstacle region O_d,k in the mobile robot coordinate system, x_n and y_n represent the coordinates, in the mobile robot coordinate system, of the laser points belonging to the obstacle region O_d,k, z_i represents the starting number of the corresponding laser points of the obstacle region O_d,k, w_i represents the ending number, and w_i − z_i + 1 represents the number of laser points constituting the obstacle region O_d,k.
8. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 1, wherein the expression of the position and size description of the dynamic obstacle in the global map in step S4 is as follows:
l = √(x² + y²)
α = arctan(y / x)
X = p_k + x_FL·cosθ_k − l·cos(α + θ_k + π/4)
Y = q_k + x_FL·sinθ_k − l·sin(α + θ_k + π/4)
wherein l and α respectively represent the distance and angle of the relevant point of the obstacle region O_d,k from the center in the mobile robot coordinate system, (x, y) are the coordinates of that point in the mobile robot coordinate system, [X Y]^T represents the coordinates of the relevant point of the obstacle region O_d,k in the global coordinate system, p_k and q_k represent the coordinates of the mobile robot in the global coordinate system at the k-th frame, θ_k represents the heading angle, x_FL represents the offset of the laser sensor installed in the forward direction of the mobile robot, namely the forward offset of the polar coordinate system of the laser sensor relative to the rotation center coordinate system of the mobile robot, and (x̄_d,k, ȳ_d,k) represents the center coordinates of the obstacle region O_d,k.
9. The method for predicting the trajectory of a dynamic obstacle of a mobile robot based on laser data according to claim 1, wherein the expression for predicting the trajectory of the dynamic obstacle in step S5 is as follows:
E_d,k+1 = ((X̄_d,k+1, Ȳ_d,k+1), (X_d,k+1^s, Y_d,k+1^s), (X_d,k+1^e, Y_d,k+1^e))
wherein E_d,k+1 represents the predicted dynamic obstacle motion trajectory, (X̄_d,k+1, Ȳ_d,k+1) represents the coordinates, in the global coordinate system at the next time frame k + 1, of the center point of the obstacle region O_d,k calculated from the current frame k, (X_d,k+1^s, Y_d,k+1^s) represents the coordinates of the starting point of the obstacle region O_d,k in the global coordinate system at frame k + 1, and (X_d,k+1^e, Y_d,k+1^e) represents the coordinates of the ending point of the obstacle region O_d,k in the global coordinate system at frame k + 1.
CN202110248650.8A 2021-03-08 2021-03-08 Dynamic obstacle track prediction method of mobile robot based on laser data Active CN113034579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110248650.8A CN113034579B (en) 2021-03-08 2021-03-08 Dynamic obstacle track prediction method of mobile robot based on laser data

Publications (2)

Publication Number Publication Date
CN113034579A true CN113034579A (en) 2021-06-25
CN113034579B CN113034579B (en) 2023-11-24

Family

ID=76466678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110248650.8A Active CN113034579B (en) 2021-03-08 2021-03-08 Dynamic obstacle track prediction method of mobile robot based on laser data

Country Status (1)

Country Link
CN (1) CN113034579B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018054080A1 (en) * 2016-09-23 2018-03-29 深圳大学 Method and device for updating planned path of robot
CN108762264A (en) * 2018-05-22 2018-11-06 重庆邮电大学 The dynamic obstacle avoidance method of robot based on Artificial Potential Field and rolling window
CN110361027A (en) * 2019-06-25 2019-10-22 马鞍山天邦开物智能商务管理有限公司 Robot path planning method based on single line laser radar Yu binocular camera data fusion
CN110471422A (en) * 2019-08-29 2019-11-19 南京理工大学 The detection of obstacles and automatic obstacle avoiding method of intelligent wheel chair
CN111258316A (en) * 2020-01-20 2020-06-09 浙江工业大学 Robot trajectory planning method for trend perception in dynamic environment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU ZHAO; SONG LIBIN; YU TAO; GENG MEIXIAO: "Path planning of an omnidirectional mobile robot based on pedestrian trajectory prediction", Computer Simulation (计算机仿真), no. 01 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641734A (en) * 2021-08-12 2021-11-12 驭势科技(北京)有限公司 Data processing method, device, equipment and medium
CN113641734B (en) * 2021-08-12 2024-04-05 驭势科技(北京)有限公司 Data processing method, device, equipment and medium
CN113568435A (en) * 2021-09-24 2021-10-29 深圳火眼智能有限公司 Unmanned aerial vehicle autonomous flight situation perception trend based analysis method and system
CN113568435B (en) * 2021-09-24 2021-12-24 深圳火眼智能有限公司 Unmanned aerial vehicle autonomous flight situation perception trend based analysis method and system
CN114200945A (en) * 2021-12-13 2022-03-18 哈尔滨工业大学芜湖机器人产业技术研究院 Safety control method of mobile robot
CN114200945B (en) * 2021-12-13 2024-04-02 长三角哈特机器人产业技术研究院 Safety control method of mobile robot
CN114545925A (en) * 2022-01-11 2022-05-27 遨博(北京)智能科技有限公司 Compound robot control method and compound robot

Also Published As

Publication number Publication date
CN113034579B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN113034579A (en) Dynamic obstacle track prediction method of mobile robot based on laser data
CN110675307B (en) Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
Jeong et al. Road-SLAM: Road marking based SLAM with lane-level accuracy
Pomerleau et al. Long-term 3D map maintenance in dynamic environments
CN108983781A (en) A kind of environment detection method in unmanned vehicle target acquisition system
Bascetta et al. Towards safe human-robot interaction in robotic cells: an approach based on visual tracking and intention estimation
Sato et al. Multilayer lidar-based pedestrian tracking in urban environments
CN112650235A (en) Robot obstacle avoidance control method and system and robot
CN110009029B (en) Feature matching method based on point cloud segmentation
Sales et al. Adaptive finite state machine based visual autonomous navigation system
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN116576857A (en) Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar
Nguyen et al. Confidence-aware pedestrian tracking using a stereo camera
Santos et al. Tracking of multi-obstacles with laser range data for autonomous vehicles
Dornhege et al. Visual odometry for tracked vehicles
Nguyen et al. Deep learning-based multiple objects detection and tracking system for socially aware mobile robot navigation framework
CN113741550A (en) Mobile robot following method and system
Márquez-Gámez et al. Active visual-based detection and tracking of moving objects from clustering and classification methods
CN103679746A (en) object tracking method based on multi-information fusion
US20230185317A1 (en) Information processing device, information processing system, method, and program
US20220155455A1 (en) Method and system for ground surface projection for autonomous driving
CN111239761B (en) Method for indoor real-time establishment of two-dimensional map
Zhang et al. Autonomous indoor exploration of mobile robots based on door-guidance and improved dynamic window approach
Yamada et al. Vision based obstacle avoidance and target tracking for autonomous mobile robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant