CN111258316B - Robot trajectory planning method for trend perception in dynamic environment - Google Patents

Robot trajectory planning method for trend perception in dynamic environment

Info

Publication number
CN111258316B
Authority
CN
China
Prior art keywords
robot
dynamic
track
trend
constraint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010066645.0A
Other languages
Chinese (zh)
Other versions
CN111258316A (en)
Inventor
刘盛
戴丰绩
王杨庆
黄圣跃
陈胜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010066645.0A priority Critical patent/CN111258316B/en
Publication of CN111258316A publication Critical patent/CN111258316A/en
Application granted granted Critical
Publication of CN111258316B publication Critical patent/CN111258316B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot trajectory planning method for trend perception in a dynamic environment. The path planned by a global path planner is cut, discretized and time-sequenced into a discrete initial trajectory, the velocity of each dynamic obstacle is estimated through perception, and the motion trajectory of the dynamic obstacle is predicted. Trend trajectories of the robot and the dynamic obstacle are then constructed from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle, and constraint conditions are built from the intersection and overlap of the two trend trajectories. Finally, the constructed constraints are mapped onto a hypergraph, converted into an unconstrained least-squares optimization problem with constraint approximation, and solved to optimize the robot trajectory. By introducing the obstacle trend constraint, the invention effectively reduces the probability of collision between the robot and dynamic obstacles, improves the stability of trajectory generation, and provides safe and reliable robot motion trajectories for scenarios such as autonomous driving and social service.

Description

Robot trajectory planning method for trend perception in dynamic environment
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a robot trajectory planning method for trend perception in a dynamic environment, which is suitable for real-time path planning and obstacle avoidance of a robot in dynamic scenes.
Background
In the development of robot navigation over recent decades, robot trajectory planning has been a fundamental and necessary task in practical applications such as autonomous transportation and social service systems. Path planning techniques on static maps are already relatively mature and stable, for example the A* algorithm, the D* algorithm, the RRT algorithm and the artificial potential field method. However, the real operating environment of a robot is often full of unknown and dynamic obstacles, and in such complex situations a pure path planning method usually collides with dynamic obstacles because its real-time performance is insufficient and it cannot predict the dynamic obstacles. It is therefore important to plan the motion trajectory with time-related information such as displacement, velocity and acceleration.
At present, the quality of a trajectory planning method is generally measured by feasibility, safety, completeness, real-time performance and optimality. Feasibility refers to whether the generated trajectory can be executed by the robot; safety refers to whether the planning result effectively avoids obstacles; completeness refers to whether a required trajectory can be found; real-time performance refers to the time consumed by trajectory planning; and optimality refers to whether the planned trajectory satisfies dynamic constraints, kinematic constraints, obstacle constraints, and so on.
Trajectory planning methods add the time dimension, so once the velocity of a dynamic obstacle in uniform linear motion has been acquired, the planner can predict the future spatial trajectory of the obstacle and generate a smooth trajectory that avoids the obstacle's future route. However, avoiding a certain density of randomly moving dynamic obstacles remains a difficult challenge for trajectory planning techniques. In particular, when the robot encounters an obstacle that randomly crosses its running trajectory, even with accurate self-localization a common trajectory planner cannot guarantee that a collision-free trajectory will be generated. This is mainly due to two reasons:
(1) Even when the most advanced sensors and algorithms are applied, the detection results still inevitably contain errors and noise, so most trajectory planners cannot maintain long-term optimality and can only deal with the closest dynamic obstacle by re-planning.
(2) Most trajectory planning methods only consider the current velocity of the obstacle and ignore other kinematic information. In other words, the movement trend of the obstacle is very important information that can be applied to trajectory planning to avoid collision.
Disclosure of Invention
Aiming at the problem that general trajectory planning methods have difficulty producing safe and feasible trajectories due to the limited perception accuracy of a robot in a dynamic environment, the trajectory planning method of the invention focuses on the kinematic information of dynamic obstacles and aims to solve the trajectory planning problems caused by inaccurate perception and moving obstacles.
The invention aims to provide a robot trajectory planning method for trend perception in a dynamic environment. While maintaining the kinodynamic constraints of the robot (which include the kinematic constraint, the velocity constraint and the acceleration constraint) and the obstacle constraint, the method modifies online the initial feasible path generated in advance by the global path planner, by predicting the motion trend of the dynamic obstacles within a certain time horizon, so as to find a safe and optimal trajectory suitable for the robot to execute.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
a robot trajectory planning method for trend perception in a dynamic environment comprises the following steps:
sensing the state of the robot and detecting the state of the dynamic obstacle by using a three-dimensional laser system, and positioning the robot, regarded as a mass point, and the dynamic obstacle into the same world coordinate system in the configuration space;
cutting, discretizing and time-sequencing the path planned by the global path planner into a discrete initial trajectory, estimating the velocity of the dynamic obstacle through perception, and predicting the motion trajectory of the dynamic obstacle;
constructing trend trajectories of the robot and the dynamic obstacle from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle, and constructing constraint conditions from the intersection and overlap of the two trend trajectories;
and mapping the constructed constraints onto a hypergraph, converting the problem into an unconstrained least-squares optimization problem with constraint approximation, solving the unconstrained least-squares optimization problem, and optimizing the robot trajectory.
Further, the robot and the dynamic obstacle are treated as mass points, wherein the dynamic obstacle is represented as an outwardly inflated circle.
Further, the cutting, discretization and time-sequencing of the path planned by the global path planner into a discrete initial trajectory, and the prediction of the motion trajectory of the dynamic obstacle by perceptually estimating its velocity, include:
cutting off the already-traversed portion of the path according to the robot position updated in real time, and taking the remaining path as the input for constructing the initial trajectory;
discretizing the cut path, where the robot state is denoted $c_i$ and consists of the robot position $x_i, y_i$ and orientation $\beta_i$; the set of robot states after discretization is
$$R = \{c_i\}_{i=0,\dots,n};$$
two consecutive robot states are related by a time interval $\Delta t_i$, and the set of time intervals $T$ is
$$T = \{\Delta t_i\}_{i=0,\dots,n-1};$$
the acquired initial trajectory $P$ of the robot consists of robot states and time intervals and is represented as
$$P = \{(c_0, \Delta t_0, c_1, \Delta t_1, c_2, \dots, \Delta t_{n-1}, c_n)\};$$
based on the initial position $o_j^0$ and velocity $v_j^0$ of the dynamic obstacle, the predicted position $O_j(t)$ of the dynamic obstacle at time $t$ is
$$O_j(t) = o_j^0 + v_j^0\, t,$$
where $n+1$ is the number of discrete robot states in the initial trajectory, $i$ denotes the index of a discrete state, and $j$ denotes the index of a dynamic obstacle.
Further, the constructing of the trend trajectories of the robot and the dynamic obstacle from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle, and the constructing of the constraint conditions from the intersection and overlap of the two trend trajectories, include:
constructing the respective trend-trajectory prediction equations from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle (the equations are given as images in the original), where $p_i$ denotes the position of the robot on the trajectory at a given moment and consists of $x_i, y_i$, $v_i$ denotes the velocity of the robot, the trend parameters of the robot and the dynamic obstacle (denoted here $\tau^R$ and $\tau^O$) determine the magnitude and direction of the trend, and the states of the robot and the dynamic obstacle after trend-trajectory prediction are denoted here $\hat{c}$ and $\hat{O}$;
calculating the distance between the robot state at each moment and every detected obstacle state:
$$\delta(c_i, O_j(t)) = \|O_j(t) - p_i\|_2;$$
using this distance to judge how close the robot is to the dynamic obstacle and to derive the obstacle constraint $z_i$:
$$z_i(c_i, t) = \delta(c_i, O_j(t)) - \delta_{\min}, \qquad z_i(c_i, t) \ge 0,$$
where $\delta_{\min}$ is the minimum clearance between the dynamic obstacle and the robot;
constructing the obstacle trend constraint with $\sigma_{\min}$, expressed as the minimum crossing distance that maintains a safety gap between the robot and the obstacle (the constraint expression is given as an image in the original);
constructing the kinodynamic constraints:
$$h_i(c_{i+1}, c_i) = 0$$
$$\nu_i(c_{i+1}, c_i, \Delta t_i) \ge 0$$
$$\alpha_i(c_{i+1}, c_i, c_{i-1}, \Delta t_{i+1}, \Delta t_i) \ge 0$$
where $h_i$ is the kinematic constraint of the robot, $\nu_i$ is the velocity constraint, and $\alpha_i$ is the acceleration constraint.
Further, mapping the constructed constraints onto a hypergraph and converting them into an unconstrained least-squares optimization problem with constraint approximation includes:
denoting by $P^*$ the optimized trajectory and by $w^T$ a weighting factor that scales the objective function, where $f(P)$ is the objective function representing the kinodynamic constraints, the obstacle constraint and the obstacle trend constraint; the unconstrained least-squares optimization problem with constraint approximation is represented as
$$P^* = \arg\min_{P} \sum w^T f(P).$$
compared with the prior art, the robot trajectory planning method for trend perception in a dynamic environment has the following beneficial effects:
when the ground robot autonomously executes tasks in places such as squares or roads, the ground robot encounters dynamic obstacles such as pedestrians and driving vehicles, the dynamic obstacles have the characteristic of target driving, and events crossing the preset track of the robot easily occur. By predicting the trend track of the robot and the dynamic obstacle and introducing obstacle trend constraint, the collision probability of the robot and the dynamic obstacle can be effectively reduced, the stability of track generation is improved, and safe and reliable robot motion tracks are provided for situations such as automatic driving and social service.
Drawings
FIG. 1 is a flow chart of a method for planning a trajectory of a robot based on trend sensing in a dynamic environment according to the present invention;
FIG. 2 is a real-time trajectory planning result in a dynamic environment according to an embodiment of the present invention;
FIG. 3 is a partial hypergraph structure constructed in accordance with an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1, an embodiment of a method for planning a robot trajectory by trend sensing in a dynamic environment is provided, including:
step S1, sensing the state of the robot and detecting the state of the dynamic obstacle by using a three-dimensional laser system, and positioning the robot and the dynamic obstacle as a mass point to the same world coordinate system in a configuration space.
In this embodiment, a robot (other robots may also be used; this application is not limited in this respect) is deployed in a dynamic, unknown environment. The robot carries a computing terminal with desktop-level computing performance, and a three-dimensional laser radar is mounted on the chassis center support. The laser radar has a 360-degree horizontal field of view and a 30-degree vertical field of view and serves as the raw data input of the self-localization and obstacle perception systems. Robot localization and two-dimensional map construction are provided by a laser odometry and mapping algorithm, the position of a dynamic obstacle is detected by a point cloud clustering method, and the estimated velocity and heading of the dynamic obstacle can be computed from the positions and the time interval obtained by detecting the same dynamic obstacle twice in succession. In this embodiment the robot state comprises the position and orientation of the robot, and the dynamic obstacle state comprises the position, velocity and heading of the dynamic obstacle.
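As an illustration of the velocity estimation described above (from two consecutive detections of the same obstacle), the following is a minimal Python sketch; the function name and data layout are assumptions for illustration and are not part of the patent.

```python
import numpy as np

def estimate_obstacle_velocity(centroid_prev, centroid_curr, t_prev, t_curr):
    """Estimate obstacle velocity, speed and heading from two consecutive
    cluster centroids (world frame) and their detection times."""
    dt = t_curr - t_prev
    if dt <= 0.0:
        raise ValueError("detection times must be strictly increasing")
    velocity = (np.asarray(centroid_curr, float) - np.asarray(centroid_prev, float)) / dt
    speed = float(np.linalg.norm(velocity))
    heading = float(np.arctan2(velocity[1], velocity[0]))
    return velocity, speed, heading

# Example: the same obstacle detected 0.1 s apart.
v, speed, heading = estimate_obstacle_velocity([2.0, 1.0], [2.1, 1.05], 0.0, 0.1)
```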
In this embodiment the robot and the dynamic obstacles are represented in the configuration space, which is the space of all possible configurations of the robot system. The robot is represented as a mass point in the configuration space, and each dynamic obstacle is represented as an outwardly inflated circle, that is, an inflation layer is added around the dynamic obstacle; the thickness of the inflation layer can be set, for example, to 0.35 meter. Adding the inflation layer to the dynamic obstacle in the configuration space avoids the computational complication caused by differences in robot shape, so the problem can be simplified by ignoring the shape and size of the robot.
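A minimal sketch of the configuration-space representation just described, assuming a simple circle model with a configurable inflation layer (the class name is hypothetical; the 0.35 m default follows the example value in the text):

```python
from dataclasses import dataclass
import math

@dataclass
class InflatedObstacle:
    """Dynamic obstacle as an outwardly inflated circle in configuration space;
    the robot itself is treated as a mass point."""
    x: float                  # obstacle center in the world frame [m]
    y: float
    radius: float             # detected obstacle radius [m]
    inflation: float = 0.35   # inflation-layer thickness [m]

    def clearance(self, px: float, py: float) -> float:
        """Signed clearance from a robot point (px, py) to the inflated boundary."""
        return math.hypot(px - self.x, py - self.y) - (self.radius + self.inflation)

# A robot point 1.0 m from an obstacle of radius 0.3 m has about 0.35 m of clearance left.
print(InflatedObstacle(0.0, 0.0, 0.3).clearance(1.0, 0.0))
```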
This embodiment takes the initial coordinate system of the robot's three-dimensional laser radar localization as the world coordinate system; the positions and orientations of the robot and the dynamic obstacles are transformed into this common world coordinate system using the localization information.
As shown in fig. 2, when the robot R of this embodiment encounters dynamic obstacles A and B in the configuration space, that is, the area in which the task is executed, it senses its own state and detects the states of the dynamic obstacles with the three-dimensional laser system, and locates its own position and the real-time positions of the dynamic obstacles, regarded as mass points in the configuration space, in the same world coordinate system.
Step S2: cut, discretize and time-sequence the path planned by the global path planner into a discrete initial trajectory, estimate the velocity of the dynamic obstacle through perception, and predict its motion trajectory. Path planning is the most basic element of robot navigation: it refers to finding a suitable motion path from a start point to an end point in a working environment containing obstacles, so that the robot can safely bypass all obstacles without collision during its motion. The global path planner is the module that plans such a path for the robot; the robot then moves from one point to another along the planned path.
In this embodiment, the environment information around the robot is represented as static obstacles on a two-dimensional map. The global path planner plans a passable path according to the target point given by the user and the static obstacles, for example using the A* algorithm, which can find an optimal path in the two-dimensional map from the start position to the target position while avoiding obstacles. Path planning with a global path planner is a relatively mature technology and is not described further here.
In this embodiment, cutting, discretizing and time-sequencing the path planned by the global path planner into a discrete initial trajectory includes:
and S2.1, cutting the passed path according to the position of the robot updated in real time, and taking the rest path as the input for constructing the initial track.
In the process that the robot goes to the target location, the global path planner plans a corresponding path, and in this embodiment, the passed path is cut according to the robot position updated in real time, and the remaining path is used as an input for constructing an initial trajectory.
Step S2.2: discretize the cut path.
The robot state in this embodiment is denoted $c_i$, where $c_i$ consists of the robot position $x_i, y_i$ and orientation $\beta_i$. The path cut in step S2.1 is discretized at a fixed spacing, and a default robot orientation is attached to each discrete point to form the discrete robot states. The set of robot states is
$$R = \{c_i\}_{i=0,\dots,n},$$
which means that $n+1$ robot states form a robot trajectory, where $n+1$ is the number of discrete robot states in the initial trajectory and $i$ denotes the index of a discrete state.
Step S2.3: time-sequence the cut path.
Two consecutive robot states are related by a time interval $\Delta t_i$:
$$T = \{\Delta t_i\}_{i=0,\dots,n-1},$$
where $T$ is the set of time intervals.
Step S2.4: the acquired initial trajectory $P$ of the robot is
$$P = \{(c_0, \Delta t_0, c_1, \Delta t_1, c_2, \dots, \Delta t_{n-1}, c_n)\},$$
that is, the acquired initial trajectory of the robot consists of robot states and time intervals.
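Steps S2.1 to S2.4 can be sketched as follows under simple assumptions (a fixed discretization spacing, a default orientation taken from the local path direction, and time intervals derived from a nominal cruise speed); none of the names or parameter values come from the patent.

```python
import numpy as np

def build_initial_trajectory(path_xy, robot_xy, spacing=0.25, cruise_speed=0.5):
    """Cut the traversed part of the global path, discretize the remainder at a
    fixed spacing, attach a default orientation beta_i to every discrete point,
    and time-sequence the states, yielding P = {(c_0, dt_0, c_1, ..., c_n)}."""
    path = np.asarray(path_xy, dtype=float)
    # S2.1: cut the path at the vertex closest to the current robot position.
    start = int(np.argmin(np.linalg.norm(path - np.asarray(robot_xy, float), axis=1)))
    remaining = path[start:]
    # S2.2: discretize the remaining path at (approximately) the given spacing.
    seg = np.linalg.norm(np.diff(remaining, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    samples = np.arange(0.0, arc[-1], spacing)
    xs = np.interp(samples, arc, remaining[:, 0])
    ys = np.interp(samples, arc, remaining[:, 1])
    betas = np.arctan2(np.gradient(ys), np.gradient(xs))   # default orientations
    states = list(zip(xs, ys, betas))                      # c_i = (x_i, y_i, beta_i)
    # S2.3 / S2.4: time intervals dt_i between consecutive states at cruise speed.
    dts = [spacing / cruise_speed] * (len(states) - 1)
    return states, dts

# Example: robot already partway along a straight 2 m global path.
states, dts = build_initial_trajectory([[0, 0], [1, 0], [2, 0]], robot_xy=[0.9, 0.0])
```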
Step S2.5: predict the trajectories of the dynamic obstacles.
Assume there are $N$ dynamic obstacles in the dynamic scene, whose absolute positions change over time; the time $t$ corresponding to the $i$-th robot state can be taken as the accumulated sum of the preceding time intervals, $t_i = \sum_{k=0}^{i-1} \Delta t_k$. The predicted state of a dynamic obstacle at time $t$ is denoted
$$O_j(t), \quad j = 0, \dots, N.$$
Once the initial position $o_j^0$ and velocity $v_j^0$ of the dynamic obstacle have been measured by the method in step S1, the predicted state of the dynamic obstacle at time $t$ is obtained as
$$O_j(t) = o_j^0 + v_j^0\, t,$$
where $j$ denotes the index of the dynamic obstacle. That is, the future movement trend of every detected obstacle can be estimated in this simple way, so the predicted obstacle states $O_j(t)$ together with the time intervals $\Delta t_i$ compose the motion trajectory of the dynamic obstacle, and the initial position of the dynamic obstacle can be updated in real time to cope with a complex dynamic environment.
In this embodiment, the initial trajectory and the obstacle motion trajectories are time-synchronized and kept within a certain time horizon T: both are predicted from the same starting time, and the prediction horizon T is specified by the user. The longer the prediction horizon, the longer the trajectory can remain optimal, but the higher the computational demand, so the time horizon should be kept within an appropriate range.
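A minimal sketch of the constant-velocity prediction $O_j(t) = o_j^0 + v_j^0 t$, evaluated at the cumulative time stamps of the robot states so that the initial trajectory and the obstacle motion trajectory stay time-synchronized within the horizon T (names and values are illustrative):

```python
import numpy as np

def predict_obstacle_trajectory(o0, v0, dts, horizon=3.0):
    """Predict obstacle positions O_j(t) = o0 + v0 * t at the cumulative time
    stamps t_i = sum_{k<i} dt_k of the robot states, clipped to the horizon T."""
    o0, v0 = np.asarray(o0, float), np.asarray(v0, float)
    times = np.concatenate([[0.0], np.cumsum(dts)])
    times = times[times <= horizon]
    return [(float(t), o0 + v0 * t) for t in times]

# Obstacle at (2, 1) moving at 0.4 m/s along +x, robot time grid of 0.5 s steps.
prediction = predict_obstacle_trajectory([2.0, 1.0], [0.4, 0.0], [0.5] * 6)
```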
Step S3: construct the trend trajectories of the robot and the dynamic obstacle from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle, and construct constraint conditions from the intersection and overlap of the two trend trajectories.
This embodiment introduces the obstacle trend constraint, and the robot trajectory also satisfies the kinematic constraint, velocity constraint, acceleration constraint, obstacle constraint, and so on. This embodiment does not limit the specific constraint conditions; those skilled in the art can add or remove constraints as appropriate. The kinematic constraint, velocity constraint and acceleration constraint may also be referred to collectively as kinodynamic constraints.
Step S3.1: construct the respective trend-trajectory prediction equations from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle (the equations are given as images in the original), where $p_i$ denotes the position of the robot on the trajectory at a given moment and consists of $x_i, y_i$; $v_i$ denotes the velocity of the robot; the trend parameters of the robot and the dynamic obstacle (denoted here $\tau^R$ and $\tau^O$) determine the magnitude and direction of the trend; and the states of the robot and the dynamic obstacle after trend-trajectory prediction are denoted here $\hat{c}$ and $\hat{O}$.
Step S3.2: calculate the distance between the robot state at each moment and every detected obstacle state:
$$\delta(c_i, O_j(t)) = \|O_j(t) - p_i\|_2.$$
This distance is used to judge how close the robot is to the dynamic obstacle and to derive the obstacle constraint $z_i$:
$$z_i(c_i, t) = \delta(c_i, O_j(t)) - \delta_{\min}, \qquad z_i(c_i, t) \ge 0,$$
where $\delta_{\min}$ is the minimum clearance between the dynamic obstacle and the robot; satisfying this inequality constraint keeps the robot trajectory safe.
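The distance check and obstacle constraint can be sketched as follows; the residual form $z_i = \delta - \delta_{\min}$ follows the reconstruction above and should be read as an assumption consistent with the inequality $z_i \ge 0$:

```python
import numpy as np

def obstacle_constraint(robot_xy, obstacle_xy, delta_min=0.5):
    """Distance delta(c_i, O_j(t)) = ||O_j(t) - p_i||_2 and the assumed
    obstacle-constraint residual z_i = delta - delta_min (feasible if z_i >= 0)."""
    delta = float(np.linalg.norm(np.asarray(obstacle_xy, float) - np.asarray(robot_xy, float)))
    return delta, delta - delta_min

delta, z = obstacle_constraint([1.0, 0.0], [1.3, 0.4])   # delta = 0.5, z = 0.0
```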
For each state, the distance between the robot and the obstacle is fed into an activation function (given as an equation image in the original), where $\gamma \in [0,1]$ serves as a condition flag, $\delta_{req}$ is an empirically set threshold, and $k$ denotes a scale factor. When $\gamma$ is not equal to zero, the distance between the robot and the obstacle at the predicted time has reached the set threshold, and the next-stage determination of the degree of intersection of the two trend trajectories is required to prevent a potential collision risk. Before determining the intersection of the two trend trajectories, however, the method first checks whether the trend trajectory of the robot and the trend trajectory of the obstacle are parallel or collinear (the test is given as an equation image in the original); after this case has been excluded, the trajectory intersection is computed. If both of the resulting intersection quantities are greater than or equal to zero, the trajectory planning method considers that the robot has a potential collision with the obstacle.
In that case the obstacle trend constraint must also be satisfied: the crossing distance between the two trend trajectories must be no smaller than $\sigma_{\min}$, where $\sigma_{\min}$ is the minimum crossing distance between the robot and the obstacle that maintains a safety gap. When the obstacle trend constraint is satisfied, the probability of a collision of the robot can be substantially reduced.
Step S3.3: a stable and reliable trajectory optimization further needs to satisfy the following kinodynamic constraints:
$$h_i(c_{i+1}, c_i) = 0$$
$$\nu_i(c_{i+1}, c_i, \Delta t_i) \ge 0$$
$$\alpha_i(c_{i+1}, c_i, c_{i-1}, \Delta t_{i+1}, \Delta t_i) \ge 0$$
where $h_i$ is the kinematic constraint of the robot, realized mainly by requiring two adjacent robot states to lie on a common arc of constant curvature; $\nu_i$ is the velocity constraint, which limits the robot velocity to a certain range; and $\alpha_i$ is the acceleration constraint, which limits the robot acceleration to a normal range.
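A sketch of these constraint residuals under stated assumptions: the arc condition for $h_i$ is written in the algebraic form common to timed-elastic-band planners, and the velocity bound is an arbitrary example value; the patent itself only states the geometric conditions.

```python
import numpy as np

def kinematic_residual(c_i, c_ip1):
    """h_i(c_{i+1}, c_i): zero when the two poses lie on a common arc of constant
    curvature, i.e. the displacement makes equal angles with both headings."""
    x0, y0, b0 = c_i
    x1, y1, b1 = c_ip1
    dx, dy = x1 - x0, y1 - y0
    # 2D cross product of (cos b0 + cos b1, sin b0 + sin b1) with (dx, dy).
    return (np.cos(b0) + np.cos(b1)) * dy - (np.sin(b0) + np.sin(b1)) * dx

def velocity_residual(c_i, c_ip1, dt, v_max=0.6):
    """nu_i >= 0: the average translational speed between two adjacent states
    must not exceed v_max (the bound is an assumed example value)."""
    dist = np.hypot(c_ip1[0] - c_i[0], c_ip1[1] - c_i[1])
    return v_max - dist / dt

# Two poses on a straight line with matching headings satisfy h_i = 0.
print(kinematic_residual((0.0, 0.0, 0.0), (0.5, 0.0, 0.0)))            # 0.0
print(velocity_residual((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), dt=1.0))     # 0.1
```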
Step S4: map the constructed constraints onto a hypergraph, convert the problem into an unconstrained least-squares optimization problem with constraint approximation, solve it, and optimize the robot trajectory.
The robot trajectory optimization problem of this embodiment is mapped into a hypergraph representation, such as the simplified partial hypergraph shown in fig. 3, which contains two states of a circular dynamic obstacle, $O_0(0)$ and $O_0(\Delta t_0)$, two robot states $c_0$ and $c_1$, and a time interval $\Delta t_0$. The absolute position of the dynamic obstacle changes over time, but it acts as a fixed vertex in the hypergraph, meaning that it cannot be altered by the optimization program. The trend parameter $\tau_i$, as an edge of the hypergraph, connects two vertices and acts as a constraint on the objective.
Specifically, after the robot trajectory optimization problem is mapped onto the hypergraph and becomes an unconstrained least-squares optimization problem with constraint approximation, the approximated optimization problem can be expressed as
$$P^* = \arg\min_{P} \sum w^T f(P),$$
where $P^*$ is the optimized executable trajectory of the robot and $w^T$ is a weighting factor that scales the objective function. $f(P)$ denotes the objective function comprising the kinodynamic constraints, the obstacle constraint and the obstacle trend constraint. This optimization problem can be solved by an efficient nonlinear least-squares solver, such as the Levenberg-Marquardt method. Because each constraint depends on only a few variables and parameters, the Hessian matrix in the solving process is sparse, which facilitates real-time solving.
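A minimal sketch of solving the approximated problem with a generic nonlinear least-squares routine; the residual set, the weights, and the use of scipy.optimize.least_squares are illustrative assumptions (the patent only calls for an efficient solver such as Levenberg-Marquardt).

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(flat, start, goal, obstacles, w_smooth=1.0, w_obst=5.0, delta_min=0.6):
    """Weighted residual vector f(P): smoothness terms between consecutive states
    plus penalty terms that activate when the clearance to a predicted obstacle
    position drops below delta_min (terms and weights are illustrative)."""
    pts = np.vstack([start, flat.reshape(-1, 2), goal])
    res = []
    for a, b in zip(pts[:-1], pts[1:]):                  # smoothness / shortness
        res.extend(w_smooth * (b - a))
    for p in pts[1:-1]:                                  # obstacle penalties
        for o in obstacles:
            gap = np.linalg.norm(p - o) - delta_min
            res.append(w_obst * min(gap, 0.0))           # active only when too close
    return np.array(res)

start, goal = np.array([0.0, 0.0]), np.array([3.0, 0.0])
obstacles = [np.array([1.5, 0.1])]                       # one predicted obstacle state
init = np.linspace(start, goal, 6)[1:-1].ravel()         # interior states of P
sol = least_squares(residuals, init, args=(start, goal, obstacles), method="lm")
optimized = np.vstack([start, sol.x.reshape(-1, 2), goal])
```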
In each trajectory optimization step, the algorithm dynamically adds new robot states or removes robot states in order to adapt the spatial and temporal resolution to the remaining trajectory length or to the planning horizon. The trajectory optimizer adjusts the overall trajectory shape according to the optimization weights, so that the trajectory achieves the goals of safety, smoothness, feasibility and optimality.
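One possible way to perform this resolution adaptation, sketched under the assumption of a reference time interval with hysteresis (the specific rule and thresholds are not given in the patent; this mirrors common practice in elastic-band-style planners):

```python
def adjust_resolution(states, dts, dt_ref=0.3, dt_hyst=0.1):
    """Insert a state where an interval grew too long and remove a state where an
    interval became too short, keeping the spatio-temporal resolution roughly
    uniform along the remaining trajectory (rule and thresholds are assumptions)."""
    i = 0
    while i < len(dts):
        if dts[i] > dt_ref + dt_hyst:
            # Split interval i: insert the midpoint state and halve the interval.
            mid = tuple((a + b) / 2.0 for a, b in zip(states[i], states[i + 1]))
            states.insert(i + 1, mid)
            dts[i:i + 1] = [dts[i] / 2.0, dts[i] / 2.0]
        elif dts[i] < dt_ref - dt_hyst and i + 1 < len(dts):
            # Merge intervals i and i+1: drop the shared intermediate state.
            states.pop(i + 1)
            dts[i:i + 2] = [dts[i] + dts[i + 1]]
        else:
            i += 1
    return states, dts

states = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.5, 0.0, 0.0)]
dts = [0.1, 0.9]
states, dts = adjust_resolution(states, dts)
```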
Following the above steps, the robot trajectory shown in fig. 2 can be obtained; it can be seen that this trajectory enables the robot to safely avoid randomly crossing dynamic obstacles while maintaining a smooth robot trajectory.
The above embodiments express only several implementations of the present application; their description is specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, several variations and improvements can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (3)

1. A robot trajectory planning method for trend perception in a dynamic environment, characterized in that the robot trajectory planning method for trend perception in the dynamic environment comprises the following steps:
sensing the state of the robot and detecting the state of the dynamic obstacle by using a three-dimensional laser system, and positioning the robot, regarded as a mass point, and the dynamic obstacle into the same world coordinate system in the configuration space;
cutting, discretizing and time-sequencing the path planned by the global path planner into a discrete initial trajectory, estimating the velocity of the dynamic obstacle through perception, and predicting the motion trajectory of the dynamic obstacle;
constructing trend trajectories of the robot and the dynamic obstacle from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle, and constructing constraint conditions from the intersection and overlap of the two trend trajectories;
mapping the constructed constraints onto a hypergraph, converting the problem into an unconstrained least-squares optimization problem with constraint approximation, solving the unconstrained least-squares optimization problem, and optimizing the robot trajectory;
the cutting, discretization and time sequencing of the path planned by the global path planner is changed into a discrete initial trajectory, the speed of the dynamic barrier is estimated through perception, and the motion trajectory of the dynamic barrier is predicted, and the method comprises the following steps:
cutting the passed path according to the position of the robot updated in real time, and taking the rest path as the input for constructing an initial track;
discretizing the cut path, the robot state is represented as c i Wherein c is i From robot position x i ,y i And an orientation beta i The set of the robot states after discretization processing is as follows:
R={c i } i=0…n
two consecutive robot states pass through a time interval Δ t i In relation to each other, T is a set of time intervals, denoted as:
T={Δt i } i=0…n-1
the acquired initial trajectory P of the robot is composed of robot states and time intervals, and is represented as:
P={(c 0 ,Δt 0 ,c 1 ,Δt 1 ,c 2 ,..,Δt n-1 ,c n )};
based on initial position of dynamic obstacle
Figure FDA0003894537220000011
And speed
Figure FDA0003894537220000012
Predicted position O of dynamic obstacle at time t j (t) is:
Figure FDA0003894537220000013
wherein n +1 is the number of state discrete points of the robot in the initial track, i represents a discrete state serial number, and j represents a dynamic barrier serial number;
wherein the constructing of the trend trajectories of the robot and the dynamic obstacle from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle, and the constructing of the constraint conditions from the intersection and overlap of the two trend trajectories, comprise:
constructing the respective trend-trajectory prediction equations from the initial trajectory of the robot and the motion trajectory of the dynamic obstacle (the equations are given as images in the original), where $p_i$ denotes the position of the robot on the trajectory at a given moment and consists of $x_i, y_i$, $v_i$ denotes the velocity of the robot, the trend parameters of the robot and the dynamic obstacle (denoted here $\tau^R$ and $\tau^O$) determine the magnitude and direction of the trend, and the states of the robot and the dynamic obstacle after trend-trajectory prediction are denoted here $\hat{c}$ and $\hat{O}$;
calculating the distance between the robot state at each moment and every detected obstacle state:
$$\delta(c_i, O_j(t)) = \|O_j(t) - p_i\|_2;$$
using this distance to judge how close the robot is to the dynamic obstacle and to derive the obstacle constraint $z_i$:
$$z_i(c_i, t) = \delta(c_i, O_j(t)) - \delta_{\min}, \qquad z_i(c_i, t) \ge 0,$$
where $\delta_{\min}$ is the minimum clearance between the dynamic obstacle and the robot;
constructing the obstacle trend constraint with $\sigma_{\min}$, expressed as the minimum crossing distance that maintains a safety gap between the robot and the obstacle (the constraint expression is given as an image in the original);
constructing the kinodynamic constraints:
$$h_i(c_{i+1}, c_i) = 0$$
$$\nu_i(c_{i+1}, c_i, \Delta t_i) \ge 0$$
$$\alpha_i(c_{i+1}, c_i, c_{i-1}, \Delta t_{i+1}, \Delta t_i) \ge 0$$
where $h_i$ is the kinematic constraint of the robot, $\nu_i$ is the velocity constraint, and $\alpha_i$ is the acceleration constraint.
2. The robot trajectory planning method for trend perception in a dynamic environment according to claim 1, wherein the robot and the dynamic obstacle are treated as mass points, and wherein the dynamic obstacle is represented as an outwardly inflated circle.
3. The robot trajectory planning method for trend perception in a dynamic environment according to claim 1, wherein mapping the constructed constraints onto a hypergraph and converting them into an unconstrained least-squares optimization problem with constraint approximation comprises:
denoting by $P^*$ the optimized trajectory and by $w^T$ a weighting factor that scales the objective function, where $f(P)$ is the objective function representing the kinodynamic constraints, the obstacle constraint and the obstacle trend constraint; the unconstrained least-squares optimization problem with constraint approximation is represented as
$$P^* = \arg\min_{P} \sum w^T f(P).$$
CN202010066645.0A 2020-01-20 2020-01-20 Robot trajectory planning method for trend perception in dynamic environment Active CN111258316B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010066645.0A CN111258316B (en) 2020-01-20 2020-01-20 Robot trajectory planning method for trend perception in dynamic environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010066645.0A CN111258316B (en) 2020-01-20 2020-01-20 Robot trajectory planning method for trend perception in dynamic environment

Publications (2)

Publication Number Publication Date
CN111258316A CN111258316A (en) 2020-06-09
CN111258316B (en) 2022-12-09

Family

ID=70952477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010066645.0A Active CN111258316B (en) 2020-01-20 2020-01-20 Robot trajectory planning method for trend perception in dynamic environment

Country Status (1)

Country Link
CN (1) CN111258316B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112859842B (en) * 2020-12-31 2022-06-14 中山大学 Path following navigation method and system thereof
US20220253048A1 (en) * 2021-02-01 2022-08-11 University Of South Florida Hypergraph search for real-time multi-robot task allocation in a smart factory
CN113034579B (en) * 2021-03-08 2023-11-24 江苏集萃微纳自动化系统与装备技术研究所有限公司 Dynamic obstacle track prediction method of mobile robot based on laser data
CN113074734B (en) * 2021-03-23 2023-05-30 北京三快在线科技有限公司 Track planning method and device, storage medium and electronic equipment
CN112987760B (en) * 2021-05-10 2021-09-07 北京三快在线科技有限公司 Trajectory planning method and device, storage medium and electronic equipment
CN113296514B (en) * 2021-05-24 2022-09-27 南开大学 Local path optimization method and system based on sparse banded structure
CN113467457B (en) * 2021-07-08 2024-07-26 无锡太机脑智能科技有限公司 Graph optimization path planning method for edge-attached cleaning of unmanned sanitation truck
CN113568435B (en) * 2021-09-24 2021-12-24 深圳火眼智能有限公司 Unmanned aerial vehicle autonomous flight situation perception trend based analysis method and system
CN113687659B (en) * 2021-10-26 2022-01-25 武汉鼎元同立科技有限公司 Optimal trajectory generation method and system based on digital twinning
CN118210318B (en) * 2024-05-22 2024-08-16 陕西欧卡电子智能科技有限公司 Unmanned ship planning method and device, computer equipment and unmanned ship

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107885209A (en) * 2017-11-13 2018-04-06 浙江工业大学 Obstacle avoidance method based on dynamic window and virtual target point
CN108153310A (en) * 2017-12-22 2018-06-12 南开大学 A kind of Mobile Robot Real-time Motion planing method based on human behavior simulation
CN109491389A (en) * 2018-11-23 2019-03-19 河海大学常州校区 A kind of robot trace tracking method with constraint of velocity
CN109782763A (en) * 2019-01-18 2019-05-21 中国电子科技集团公司信息科学研究院 A kind of method for planning path for mobile robot under dynamic environment
CN110320809A (en) * 2019-08-19 2019-10-11 杭州电子科技大学 A kind of AGV track correct method based on Model Predictive Control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Christoph Rösmann et al., "Trajectory modification considering dynamic constraints of autonomous robots," Seventh German Conference on Robotics, 2012, pp. 1-6. *

Also Published As

Publication number Publication date
CN111258316A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN111258316B (en) Robot trajectory planning method for trend perception in dynamic environment
CN110262495B (en) Control system and method capable of realizing autonomous navigation and accurate positioning of mobile robot
JP6640777B2 (en) Movement control system, movement control device and program
EP3517893B1 (en) Path and speed optimization fallback mechanism for autonomous vehicles
CN111670468B (en) Moving body behavior prediction device and moving body behavior prediction method
EP3959576A1 (en) Methods and systems for trajectory forecasting with recurrent neural networks using inertial behavioral rollout
EP3106836B1 (en) A unit and method for adjusting a road boundary
CN106406320A (en) Robot path planning method and robot planning route
US9255989B2 (en) Tracking on-road vehicles with sensors of different modalities
US11300663B2 (en) Method for predicting a motion of an object
US20200363816A1 (en) System and method for controlling autonomous vehicles
CN114442621A (en) Autonomous exploration and mapping system based on quadruped robot
RU2487419C1 (en) System for complex processing of information of radio navigation and self-contained navigation equipment for determining real values of aircraft navigation parameters
US11970185B2 (en) Data structure for storing information relating to an environment of an autonomous vehicle and methods of use thereof
CN114647246A (en) Local path planning method and system for time-space coupling search
US20230056589A1 (en) Systems and methods for generating multilevel occupancy and occlusion grids for controlling navigation of vehicles
JP7167732B2 (en) map information system
Yan Positioning of logistics and warehousing automated guided vehicle based on improved LSTM network
Deusch et al. Improving localization in digital maps with grid maps
WO2023118946A1 (en) Method and system for navigating an autonomous vehicle in an open-pit site
Worrall et al. A probabilistic method for detecting impending vehicle interactions
CN110857861B (en) Track planning method and system
CN115246394A (en) Intelligent vehicle overtaking obstacle avoidance method
Jacome et al. Road curvature decomposition for autonomous guidance
KR101622176B1 (en) Method generating terrain data for global path planning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant