CN110187707B - Unmanned equipment running track planning method and device and unmanned equipment - Google Patents


Info

Publication number
CN110187707B
Authority
CN
China
Prior art keywords
target
time
constraint condition
moment
planning
Prior art date
Legal status
Active
Application number
CN201910462102.8A
Other languages
Chinese (zh)
Other versions
CN110187707A (en)
Inventor
付圣
颜诗涛
任冬淳
丁曙光
钱德恒
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201910462102.8A priority Critical patent/CN110187707B/en
Publication of CN110187707A publication Critical patent/CN110187707A/en
Application granted granted Critical
Publication of CN110187707B publication Critical patent/CN110187707B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application provides a method and an apparatus for planning the running trajectory of an unmanned device, and an unmanned device. In a specific implementation, the method includes: acquiring a first constraint condition corresponding to a time sequence of a target operation plan; acquiring a second constraint condition corresponding to each target time except the last time in the time sequence; for each target time, executing a first operation or a second operation based on the first constraint condition and the second constraint condition corresponding to that target time, where the first operation is executed based on a pre-trained target deep neural network and the second operation is executed based on a preset rule; and generating a planned trajectory of the target operation plan based on the plurality of determined planned trajectory points. This implementation retains the advantages of the preset rule while avoiding the poor flexibility that results from relying on it too heavily, so that the operation planning result is more reasonable.

Description

Unmanned equipment running track planning method and device and unmanned equipment
Technical Field
The application relates to the technical field of unmanned driving, in particular to a method and a device for planning a running track of unmanned equipment and the unmanned equipment.
Background
Currently, an operation plan for an unmanned device is generally set in advance, based on human experience, according to information about the environment in which the device is located. When operation planning is performed, information about the environment in which the unmanned device is currently located must be collected in real time, a large number of trajectories are generated from that information according to preset rules, and one of them is selected as the target trajectory of the operation plan. However, operation planning performed in this way depends heavily on the preset rules and has poor flexibility, which reduces the rationality of the operation planning result.
Disclosure of Invention
In order to solve one of the above technical problems, the present application provides a method and an apparatus for planning the trajectory of an unmanned device, and an unmanned device.
According to a first aspect of the embodiments of the present application, there is provided a method for planning the trajectory of an unmanned device, including:
acquiring a first constraint condition corresponding to a time sequence of a target operation plan; and
acquiring a second constraint condition corresponding to each target moment except the last moment in the moment sequence;
for each target time, performing the following operations: executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target time to determine a planned track point corresponding to the next time of the target time in the time sequence; the first operation is executed based on a pre-trained target deep neural network, and the second operation is executed based on a preset rule;
and generating a planning track of the target operation plan based on the plurality of determined planning track points.
Optionally, the first constraint condition includes:
the motion state parameter of the target device at the last moment in the time sequence;
a size parameter of an obstacle detected by the target device;
for any target time, the second constraint condition corresponding to the target time comprises:
the motion state parameter of the target equipment at the target moment;
the motion state parameter of the obstacle at the target moment; and
a motion state parameter of the obstacle at each time instant in the time instant sequence after the target time instant.
Optionally, the first constraint further includes: a size parameter of the target device.
Optionally, before obtaining the first constraint condition and obtaining the second constraint condition, the method further includes:
respectively determining current motion state parameters of the target device and the obstacle;
and respectively estimating the motion state parameters of the target device and the obstacle at each moment in the time sequence based on the current motion state parameters of the target device and the obstacle.
Optionally, the executing the first operation or the second operation includes:
executing a first operation or a second operation according to a preset execution probability; wherein the execution probability comprises a first probability of executing the first operation and a second probability of executing the second operation; the sum of the first probability and the second probability is 1.
Optionally, for any target time, the first operation is executed based on a pre-trained target deep neural network in the following manner:
taking a standard normally distributed random number as a random variable;
and inputting the first constraint condition, the second constraint condition corresponding to the target moment and the random variable into the target deep neural network to obtain a result output by the target deep neural network.
Optionally, the generating a planning trajectory of the target operation plan based on the determined multiple planning trajectory points includes:
obtaining a plurality of alternative tracks by adopting a polynomial curve interpolation mode based on the determined plurality of planning track points;
calculating a cost value of each alternative track by adopting a preset cost function;
and selecting the candidate track with the minimum cost value as a planning track of the target operation plan.
According to a second aspect of the embodiments of the present application, there is provided an apparatus for planning the trajectory of an unmanned device, including:
the first obtaining module is used for obtaining a first constraint condition corresponding to a time sequence of the target operation plan; and
the second obtaining module is used for obtaining a second constraint condition corresponding to each target moment except the last moment in the moment sequence;
an execution module, configured to, for each target time, perform the following operations: executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target time to determine a planned track point corresponding to the next time of the target time in the time sequence; the first operation is executed based on a pre-trained target deep neural network, and the second operation is executed based on a preset rule;
and the generating module is used for generating the planning track of the target operation plan based on the plurality of determined planning track points.
According to a third aspect of embodiments herein, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the above first aspects.
According to a fourth aspect of embodiments of the present application, there is provided an unmanned device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the method of any one of the first aspect above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the method and the device for planning the operation track of the unmanned equipment, the first constraint condition corresponding to the time sequence of the target operation planning is obtained, and the second constraint condition corresponding to each target time except the last time in the time sequence is obtained. For each target time, the following operations are executed: and executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target time to determine a planned track point corresponding to the next time of the target time in the time sequence. And executing a first operation based on a pre-trained target deep neural network, and executing a second operation based on a preset rule. And generating a planning track of the target operation plan based on the plurality of determined planning track points. Because the first operation executed based on the target deep neural network and the second operation executed based on the preset rule are combined in the embodiment, the advantage of the preset rule can be exerted when operation planning is carried out, and meanwhile, the problem of poor flexibility caused by excessive dependence on the preset rule is avoided. In addition, when the planning track points are determined, the first constraint condition corresponding to the time sequence and the second constraint condition corresponding to each target time in the time sequence are considered, so that the operation planning result is more reasonable.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart illustrating a method for planning an unmanned device running trajectory according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating another method for planning an unmanned device running trajectory according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating another method for planning an unmanned device running trajectory according to an exemplary embodiment of the present application;
FIG. 4 is a block diagram of an apparatus for planning an unmanned device running trajectory according to an exemplary embodiment of the present application;
FIG. 5 is a block diagram of another apparatus for planning an unmanned device running trajectory according to an exemplary embodiment of the present application;
FIG. 6 is a block diagram of another apparatus for planning an unmanned device running trajectory according to an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of an unmanned device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
As shown in fig. 1, fig. 1 is a flowchart illustrating a method for planning the trajectory of an unmanned device according to an exemplary embodiment, which may be applied to an unmanned device. Those skilled in the art will appreciate that the unmanned device may include, but is not limited to, an unmanned vehicle, an unmanned robot, an unmanned aerial vehicle, an unmanned ship, and the like. The method comprises the following steps:
in step 101, a first constraint condition corresponding to a time sequence of the target operation plan is obtained.
Generally, when an unmanned device is running, a running path within a preset future time period (for example, the path for the N seconds following the current time) needs to be planned in advance according to information about the environment in which the device is currently located, and the planned path is used as the planned trajectory of the operation plan. Driving decisions are then made according to this planned trajectory. Specifically, trajectory points corresponding to a plurality of preset times within the future time period are usually predicted according to the information about the environment in which the device is located, and these trajectory points are then connected into a trajectory to obtain the planned trajectory of the operation plan.
In this embodiment, the target operation plan is an operation plan currently performed for the unmanned device, and an operation path within a future preset time period may be obtained, where the future preset time period may be a time period from the current time to the end of N seconds. N may be any reasonable positive integer, and the specific value of N is not limited in the present application. The time sequence of the target operation plan is a sequence of a series of preset times in the future preset period, and the time sequence may include a starting time (i.e., the current time), a plurality of intermediate times and a last time (i.e., a time at the end of N seconds).
In this embodiment, the first constraint condition corresponding to the time sequence of the target operation plan can represent an influence factor of the time sequence on the target operation plan. The first constraint may include a motion state parameter of the target device at a last time in the sequence of times and a size parameter of an obstacle detected by the target device.
The target equipment is unmanned equipment for performing target operation planning. The motion state parameters of the target device may include, but are not limited to, relative displacement, speed, acceleration of the target device in the longitudinal direction of the road, relative displacement, speed, acceleration of the target device in the lateral direction of the road, and the type of lane in which the target device is located. The size parameter of the obstacle detected by the target device is the size parameter of each obstacle in one or more obstacles currently detected by the target device through the sensor. The dimension parameters of the obstacle may include, but are not limited to, a length parameter, a width parameter, a height parameter, etc. of the obstacle.
Optionally, the first constraint condition may further include a size parameter of the target device in addition to the motion state parameter of the target device at the last time in the time sequence and the size parameter of the obstacle detected by the target device. The size parameters of the target device may include, but are not limited to, a length parameter, a width parameter, a height parameter, and the like of the target device. Since the size parameter of the target device may have a certain influence on the target operation plan, if the influence of the size parameter of the target device on the target operation plan can be considered during the target operation plan, the operation plan result can be more reasonable.
In step 102, a second constraint condition corresponding to each target time except the last time in the time sequence is obtained.
In this embodiment, the target time is any time except the last time in the time sequence, and the second constraint condition corresponding to each target time may be obtained. Aiming at any one target time, the second constraint condition corresponding to the target time can represent the influence factor of the target time on the target operation planning. The second constraint condition corresponding to the target time may include a motion state parameter of the target device at the target time, a motion state parameter of the obstacle detected by the target device at the target time, and a motion state parameter of the obstacle at each time after the target time in the time sequence.
For example, the time sequence may include a start time A, intermediate times B, C, D, E, and an end time F. Times A, B, C, D, E are all target times, and a second constraint condition corresponding to each of these times may be obtained. The second constraint condition corresponding to time A may include the motion state parameter of the target device at time A, the motion state parameter of the obstacle at time A, and the motion state parameters of the obstacle at times B, C, D, E, F (i.e., each time after time A in the time sequence). The second constraint condition corresponding to time B may include the motion state parameter of the target device at time B, the motion state parameter of the obstacle at time B, and the motion state parameters of the obstacle at times C, D, E, F. By analogy, the second constraint condition corresponding to time E may include the motion state parameter of the target device at time E, the motion state parameter of the obstacle at time E, and the motion state parameter of the obstacle at time F.
The motion state parameters of the obstacle may include, but are not limited to, a speed and an acceleration of the obstacle in a longitudinal direction of the road, a speed and an acceleration of the obstacle in a transverse direction of the road, a distance between the obstacle and the target device, a relative angle between the obstacle and the target device, a lane type in which the obstacle is located, and the like.
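To make the data flow concrete, the following Python sketch, given purely as an illustration and not as part of the claimed method, shows one possible way to organize the first and second constraint conditions described above; all class, field, and function names are hypothetical.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class MotionState:
    # Motion state parameters as described above: longitudinal/lateral displacement,
    # speed and acceleration along the road, plus the lane type.
    lon_pos: float
    lat_pos: float
    lon_speed: float
    lat_speed: float
    lon_acc: float
    lat_acc: float
    lane_type: int

@dataclass
class FirstConstraint:
    device_state_at_last_time: MotionState             # target device at the last time in the sequence
    obstacle_sizes: List[Tuple[float, float, float]]    # (length, width, height) per detected obstacle
    device_size: Optional[Tuple[float, float, float]] = None  # optional size of the target device

@dataclass
class SecondConstraint:
    device_state: MotionState                           # target device at the target time
    obstacle_states: List[MotionState]                  # each obstacle at the target time
    obstacle_future_states: List[List[MotionState]]     # all obstacles at every later time in the sequence

def build_second_constraints(device_states, obstacle_states_per_time):
    # One second constraint condition per target time, i.e. every time except the last.
    constraints = []
    for t in range(len(device_states) - 1):
        constraints.append(SecondConstraint(
            device_state=device_states[t],
            obstacle_states=obstacle_states_per_time[t],
            obstacle_future_states=obstacle_states_per_time[t + 1:],
        ))
    return constraints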
It should be noted that, before acquiring the first constraint condition and the second constraint condition, the current motion state parameter of the target device and the current motion state parameter of the obstacle may be acquired first. Then, based on the current motion state parameters of the target device and the obstacle, the motion state parameters of the target device and the obstacle at each moment in the time sequence are respectively estimated to determine the first constraint condition and the second constraint condition. For example, the motion state parameters may be estimated by using a preset algorithm, or by using a machine learning method. It is to be understood that any method known in the art, or that may arise in the future, that is capable of estimating the above-described motion state parameters may be applied to the present application, and the present application is not limited in the particular manner of estimating these parameters.
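For example, under the simple constant-acceleration assumption adopted here only for illustration (the application itself leaves the estimation method open to any preset algorithm or machine-learning model), the per-time motion states could be extrapolated as follows, reusing the MotionState class from the sketch above.

from typing import List

def extrapolate_states(current: MotionState, times: List[float]) -> List[MotionState]:
    # Estimate the motion state at each time in the sequence (measured in seconds from
    # now), assuming the current longitudinal/lateral accelerations remain constant.
    states = []
    for t in times:
        states.append(MotionState(
            lon_pos=current.lon_pos + current.lon_speed * t + 0.5 * current.lon_acc * t * t,
            lat_pos=current.lat_pos + current.lat_speed * t + 0.5 * current.lat_acc * t * t,
            lon_speed=current.lon_speed + current.lon_acc * t,
            lat_speed=current.lat_speed + current.lat_acc * t,
            lon_acc=current.lon_acc,
            lat_acc=current.lat_acc,
            lane_type=current.lane_type,
        ))
    return states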
In step 103, for each target time, the following operations are performed: and executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target time to determine a planned track point corresponding to the next time of the target time in the time sequence.
In this embodiment, the planned trajectory point corresponding to the next time of each target time in the time sequence may be determined based on the first constraint condition corresponding to the time sequence and the second constraint condition corresponding to each target time in the time sequence, so as to obtain a plurality of planned trajectory points. And determining a planned track point corresponding to the next moment of the target moment in the moment sequence based on the first constraint condition and the second constraint condition corresponding to the target moment for each target moment.
For example, the time sequence may include a start time A, intermediate times B, C, D, E, and an end time F. The planned trajectory point corresponding to time B may be determined based on the first constraint condition corresponding to the time sequence and the second constraint condition corresponding to time A. The planned trajectory point corresponding to time C may be determined based on the first constraint condition corresponding to the time sequence and the second constraint condition corresponding to time B. By analogy, the planned trajectory point corresponding to time F is determined based on the first constraint condition corresponding to the time sequence and the second constraint condition corresponding to time E.
Specifically, for any one target time, the planned trajectory point corresponding to the next time of the target time in the time sequence may be determined as follows: and determining a planned track point corresponding to the next moment of the target moment in the time sequence by executing a first operation or determining a planned track point corresponding to the next moment of the target moment in the time sequence by executing a second operation based on the first constraint condition and a second constraint condition corresponding to the target moment. The first operation can be executed based on a pre-trained target deep neural network, and the second operation can be executed based on a preset rule.
In this embodiment, the choice between the first operation and the second operation may be made randomly, at intervals according to a preset rule, or according to a preset probability. It will be appreciated that the first operation or the second operation may be selected in any reasonable manner, and the present application is not limited in this respect.
Optionally, for any target time, the first operation may be performed based on the pre-trained target deep neural network as follows: a standard normally distributed random number is taken as a random variable, and the first constraint condition corresponding to the time sequence, the second constraint condition corresponding to the target time, and the random variable are input into the target deep neural network to obtain the result output by the network. Executing the first operation in this way makes the obtained planned trajectory point more accurate, owing to the introduction of the random variable.
Alternatively, a preset rule may be set empirically to obtain a rule function whose independent variables are constraint conditions and whose dependent variable is a planned trajectory point. For any target time, the second operation can be executed based on the preset rule in the following way: the first constraint condition corresponding to the time sequence and the second constraint condition corresponding to the target time are input into the rule function as independent variables, and the rule function outputs the planned trajectory point corresponding to the next time after the target time.
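As an illustration only, and not as part of the claimed method, the following Python sketch shows one possible shape of the two operations described above. The network architecture, the feature flattening, and the placeholder rule function are assumptions introduced here for clarity; the application only requires that the first operation use a pre-trained deep neural network fed with the constraint conditions and a standard normally distributed random variable, and that the second operation apply an empirically set rule function. The MotionState and constraint classes are those from the earlier sketch.

import torch
import torch.nn as nn

class TrajectoryPointNet(nn.Module):
    # Hypothetical target deep neural network: maps the flattened constraints plus a
    # random variable to the planned trajectory point (x, y) for the next time.
    def __init__(self, constraint_dim: int, noise_dim: int = 1, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(constraint_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # (x, y) of the next planned trajectory point
        )

    def forward(self, constraint_vec, noise):
        return self.net(torch.cat([constraint_vec, noise], dim=-1))

def constraints_to_tensor(first_constraint, second_constraint):
    # Deliberately minimal flattening: only the device states are encoded here; a real
    # feature vector would also encode the obstacle size and motion state parameters.
    def flat(s):
        return [s.lon_pos, s.lat_pos, s.lon_speed, s.lat_speed, s.lon_acc, s.lat_acc]
    return torch.tensor(flat(first_constraint.device_state_at_last_time) +
                        flat(second_constraint.device_state), dtype=torch.float32)

def first_operation(model, first_constraint, second_constraint):
    noise = torch.randn(1)  # standard normally distributed random variable
    return model(constraints_to_tensor(first_constraint, second_constraint), noise)

def second_operation(second_constraint, dt):
    # Placeholder preset rule: continue at the current speed for one time step. The
    # actual rule function is set empirically and is not specified by the application.
    s = second_constraint.device_state
    return (s.lon_pos + s.lon_speed * dt, s.lat_pos + s.lat_speed * dt)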
In step 104, a planned trajectory of the target operation plan is generated based on the determined plurality of planned trajectory points.
In this embodiment, the planned trajectory of the target operation plan may be generated from the planned trajectory points corresponding to the times in the time sequence. Specifically, a plurality of candidate trajectories may first be obtained by a polynomial curve interpolation method based on the plurality of planned trajectory points. Then, the cost value of each candidate trajectory is calculated using a preset cost function, and the candidate trajectory with the minimum cost value is selected as the planned trajectory of the target operation plan. It will be appreciated that the planned trajectory of the target operation plan may also be generated in any other reasonable manner. Any method known in the art, or that may arise in the future, that is capable of generating a planned trajectory for the target operation plan may be applied to the present application, which is not limited in the particular manner in which the planned trajectory is generated.
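For illustration only, the sketch below (assuming NumPy) fits a quintic, i.e. 5th-order, polynomial through the planned trajectory points and applies a placeholder smoothness cost. A quintic fit needs at least six planned points (for example, the six times A through F above), and the application does not specify how the several candidate trajectories are produced or what the cost function is; both are assumptions here.

import numpy as np

def interpolate_quintic(times, points, samples=100):
    # Fit 5th-order polynomials x(t) and y(t) through the planned trajectory points and
    # sample a dense candidate trajectory; with exactly six points the fit passes
    # through all of them.
    times = np.asarray(times, dtype=float)
    pts = np.asarray(points, dtype=float)            # shape (len(times), 2)
    cx = np.polyfit(times, pts[:, 0], deg=5)
    cy = np.polyfit(times, pts[:, 1], deg=5)
    t_dense = np.linspace(times[0], times[-1], samples)
    return np.stack([np.polyval(cx, t_dense), np.polyval(cy, t_dense)], axis=1)

def trajectory_cost(trajectory):
    # Placeholder cost function: penalize non-smoothness via squared second differences.
    return float(np.sum(np.diff(trajectory, n=2, axis=0) ** 2))

def select_planned_trajectory(candidate_trajectories):
    # Choose the candidate with the minimum cost value as the planned trajectory.
    return min(candidate_trajectories, key=trajectory_cost)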
It should be noted that although in the above-described embodiment of fig. 1, the operations of the methods of the present application are described in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, in order to achieve desirable results. Rather, the steps depicted in the flowcharts may change the order of execution. For example, step 101 may be performed before step 102, may be performed after step 102, or may be performed simultaneously with step 102. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
According to the method for planning the running trajectory of an unmanned device provided by this embodiment of the application, a first constraint condition corresponding to the time sequence of the target operation plan is obtained, and a second constraint condition corresponding to each target time except the last time in the time sequence is obtained. For each target time, a first operation or a second operation is executed based on the first constraint condition and the second constraint condition corresponding to that target time, so as to determine the planned trajectory point corresponding to the next time after the target time in the time sequence; the first operation is executed based on a pre-trained target deep neural network, and the second operation is executed based on a preset rule. A planned trajectory of the target operation plan is then generated based on the plurality of determined planned trajectory points. Because the first operation executed based on the target deep neural network and the second operation executed based on the preset rule are combined in this embodiment, the advantages of the preset rule can be exploited during operation planning while the poor flexibility caused by over-reliance on the preset rule is avoided. In addition, when the planned trajectory points are determined, both the first constraint condition corresponding to the time sequence and the second constraint condition corresponding to each target time in the time sequence are considered, which makes the operation planning result more reasonable.
Fig. 2 is a flow chart illustrating another method for planning the trajectory of an unmanned device according to an exemplary embodiment. As shown in fig. 2, this embodiment describes the process of determining the motion state parameters of the target device and the obstacle. The method can be applied to an unmanned device and comprises the following steps:
in step 201, current motion state parameters of the target device and the obstacle are determined respectively.
In this embodiment, the current motion state parameters of the target device may be determined by a sensor mounted on the target device, an inertial navigation system, or other devices, and the current motion state parameters of the obstacle may be determined by a radar, a sensor, or other devices installed on the target device. It is to be understood that the present application is not limited in the particular manner of determining the current motion state parameters of the target device and the obstacle.
In step 202, motion state parameters of the target device and the obstacle at each moment in the time sequence of the target operation plan are respectively estimated based on the current motion state parameters of the target device and the obstacle.
In this embodiment, the motion state parameters may be estimated by using a preset algorithm, or by using a machine learning method. It is to be understood that any method known in the art, or that may arise in the future, that is capable of estimating the above-described motion state parameters may be applied to the present application, and the present application is not limited in the particular manner of estimating these parameters.
In step 203, a first constraint condition corresponding to the time sequence of the target operation plan is obtained, where the first constraint condition includes a motion state parameter of the target device at a last time in the time sequence and a size parameter of an obstacle detected by the target device.
In step 204, second constraint conditions corresponding to each target time except the last time in the time sequence are obtained, where the second constraint conditions corresponding to any one target time include motion state parameters of the target device and the obstacle at the target time and motion state parameters of the obstacle at each time after the target time in the time sequence.
In step 205, for each target time, the following operations are performed: and executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target moment so as to determine a planned track point corresponding to the next moment of the target moment in the moment sequence.
In step 206, a planned trajectory of the target operation plan is generated based on the determined plurality of planned trajectory points.
It should be noted that, for the same steps as in the embodiment of fig. 1, details are not repeated in the embodiment of fig. 2, and related contents may refer to the embodiment of fig. 1.
According to the method for planning the running trajectory of an unmanned device provided by the above embodiment of the application, the current motion state parameters of the target device and the obstacle are determined, and the motion state parameters of the target device and the obstacle at each time in the time sequence of the target operation plan are estimated based on those current motion state parameters. A first constraint condition corresponding to the time sequence of the target operation plan is acquired, the first constraint condition including the motion state parameter of the target device at the last time in the time sequence and the size parameter of the detected obstacle. A second constraint condition corresponding to each target time except the last time in the time sequence is acquired, the second constraint condition corresponding to any target time including the motion state parameters of the target device and the obstacle at that target time and the motion state parameters of the obstacle at each time after that target time in the time sequence. For each target time, a first operation or a second operation is executed based on the first constraint condition and the second constraint condition corresponding to that target time, so as to determine the planned trajectory point corresponding to the next time after the target time in the time sequence. A planned trajectory of the target operation plan is then generated based on the plurality of determined planned trajectory points. The rationality of the operation planning result is thereby further improved.
In some alternative embodiments, the first operation or the second operation may be performed as follows: the first operation or the second operation is executed according to a preset execution probability, where the execution probability comprises a first probability of executing the first operation and a second probability of executing the second operation, and the sum of the first probability and the second probability is 1.
In this embodiment, corresponding execution probabilities including a first probability of executing the first operation and a second probability of executing the second operation may be set in advance for the first operation and the second operation, respectively, according to the actual situation. For example, a first probability set for a first operation is m, a second probability set for a second operation is n, and m + n is 1. The sizes of m and n can be adjusted continuously according to experience. When the operation planning is performed, the first operation or the second operation may be performed according to a preset execution probability.
Since this embodiment executes the first operation or the second operation according to a predetermined execution probability, it effectively controls the execution ratio of the two operations, which helps to exploit the advantages of the preset rule while avoiding the poor flexibility caused by excessive dependence on the preset rule.
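Purely as an illustration of this probability-controlled selection, the short sketch below (reusing the first_operation and second_operation sketches above) draws a uniform random number for each target time and executes the first operation with probability m and the second operation with probability n = 1 - m; the value 0.7 is an arbitrary example, since m and n are tuned empirically.

import random

def plan_trajectory_points(first_constraint, second_constraints, model, dt,
                           first_probability=0.7):
    # second_constraints holds one second constraint condition per target time, in order;
    # the returned list holds the planned trajectory point for the time following each.
    points = []
    for second_constraint in second_constraints:
        if random.random() < first_probability:      # first operation, probability m
            point = tuple(first_operation(model, first_constraint, second_constraint).tolist())
        else:                                         # second operation, probability n = 1 - m
            point = second_operation(second_constraint, dt)
        points.append(point)
    return points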
Fig. 3 is a flow chart illustrating another method for planning the trajectory of an unmanned device according to an exemplary embodiment. As shown in fig. 3, this embodiment describes in detail the process of generating the planned trajectory of the target operation plan. The method can be applied to an unmanned device and comprises the following steps:
in step 301, a first constraint condition corresponding to a time sequence of the target operation plan is obtained.
In step 302, a second constraint condition corresponding to each target time except the last time in the time sequence is obtained.
In step 303, for each target time, the following operations are performed: and executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target time to determine a planned track point corresponding to the next time of the target time in the time sequence.
In step 304, a plurality of candidate tracks are obtained by means of polynomial curve interpolation based on the determined plurality of planned track points.
Optionally, a 5 th-order polynomial curve interpolation mode may be adopted to obtain multiple candidate tracks.
In step 305, a cost value of each candidate trajectory is calculated by using a preset cost function.
In this embodiment, the preset cost function may be any reasonable cost function, and the specific form of the cost function is not limited in this application.
In step 306, the candidate trajectory with the smallest cost value is selected as the planning trajectory of the target operation plan.
It should be noted that, for the same steps as in the embodiment of fig. 1 and fig. 2, details are not repeated in the embodiment of fig. 3, and related contents may refer to the embodiment of fig. 1 and fig. 2.
According to the method for planning the running trajectory of an unmanned device provided by the above embodiment, multiple candidate trajectories are obtained by polynomial curve interpolation based on the multiple determined planned trajectory points, the cost value of each candidate trajectory is calculated with a preset cost function, and the candidate trajectory with the minimum cost value is selected as the planned trajectory of the target operation plan. The completeness of the planned trajectory is thereby improved, the planned trajectory is smoother, and it better matches human driving behavior.
Corresponding to the embodiment of the method for planning the operation track of the unmanned equipment, the application also provides an embodiment of a device for planning the operation track of the unmanned equipment.
As shown in fig. 4, fig. 4 is a block diagram of an apparatus for planning the trajectory of an unmanned device according to an exemplary embodiment of the present application. The apparatus may include: a first obtaining module 401, a second obtaining module 402, an executing module 403 and a generating module 404.
The first obtaining module 401 is configured to obtain a first constraint condition corresponding to a time sequence of the target operation plan.
A second obtaining module 402, configured to obtain a second constraint condition corresponding to each target time except the last time in the time sequence.
An executing module 403, configured to, for each target time, perform the following operations: and executing a first operation or a second operation based on a first constraint condition corresponding to the time sequence and a second constraint condition corresponding to the target time so as to determine a planned track point corresponding to the next time of the target time in the time sequence. And executing a first operation based on a pre-trained target deep neural network, and executing a second operation based on a preset rule.
And a generating module 404, configured to generate a planned trajectory of the target operation plan based on the determined multiple planned trajectory points.
In some optional embodiments, the first constraint condition corresponding to the time sequence of the target operation plan may include: the motion state parameter of the target device at the last moment in the time sequence; a size parameter of an obstacle detected by the target device.
For any target time, the second constraint condition corresponding to the target time comprises: the motion state parameter of the target equipment at the target moment; the motion state parameter of the obstacle at the target moment; and a motion state parameter of the obstacle at each time instant after the target time instant in the time instant sequence.
In other optional embodiments, the first constraint condition corresponding to the time sequence of the target operation plan may further include: a size parameter of the target device.
As shown in fig. 5, fig. 5 is a block diagram of another apparatus for planning the trajectory of an unmanned device according to an exemplary embodiment of the present application. On the basis of the foregoing embodiment shown in fig. 4, the apparatus may further include: a determination module 405 and an estimation module 406.
The determination module 405 is configured to determine current motion state parameters of the target device and the obstacle, respectively.
The estimation module 406 is configured to estimate the motion state parameters of the target device and the obstacle at each time in the time sequence based on the current motion state parameters of the target device and the obstacle.
In further alternative embodiments, the execution module 403 is configured to: and executing the first operation or the second operation according to the preset execution probability. Wherein the execution probability comprises a first probability of executing the first operation and a second probability of executing the second operation, and the sum of the first probability and the second probability is 1.
In other alternative embodiments, for any target time, the execution module 403 may execute the first operation based on a pre-trained target deep neural network by: and taking a standard normally distributed random number as a random variable, and inputting the first constraint condition, the second constraint condition corresponding to the target moment and the random variable into the target deep neural network to obtain a result output by the target deep neural network.
As shown in fig. 6, fig. 6 is a block diagram of another apparatus for planning a trajectory of an unmanned aerial vehicle according to an exemplary embodiment of the present application, where on the basis of the foregoing embodiment shown in fig. 4, the generating module 404 may include: a determination sub-module 601, a calculation sub-module 602 and a selection sub-module 603.
The determining submodule 601 is configured to obtain multiple candidate tracks by adopting a polynomial curve interpolation mode based on the multiple determined planned track points.
And the calculating submodule 602 is configured to calculate a cost value of each candidate trajectory by using a preset cost function.
And the selecting submodule 603 is configured to select the candidate trajectory with the smallest cost value as the planning trajectory of the target operation plan.
It should be understood that the above-mentioned means may be preset in the unmanned device, or may be loaded into the unmanned device by means of downloading or the like. The corresponding modules in the device can be matched with the modules in the unmanned equipment to realize a planning scheme of the operation track of the unmanned equipment.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement without inventive effort.
An embodiment of the present application further provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program may be used to execute the method for planning the running trajectory of an unmanned device provided in any one of the embodiments of fig. 1 to fig. 3.
Corresponding to the above method for planning the running trajectory of an unmanned device, an embodiment of the present application further provides an unmanned device, whose structure is shown schematically in fig. 7 according to an exemplary embodiment of the present application. Referring to fig. 7, at the hardware level, the unmanned device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include hardware required for other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, forming, at the logical level, an apparatus for planning the running trajectory of the unmanned device. Of course, besides a software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be hardware or logic devices.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A method for planning an operation track of unmanned equipment is characterized by comprising the following steps:
acquiring a first constraint condition corresponding to a time sequence of a target operation plan; the first constraint condition comprises a motion state parameter of the target device at the last moment in the time sequence; and
acquiring a second constraint condition corresponding to each target moment except the last moment in the moment sequence;
for each target time, performing the following operations: executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target time to determine a planned track point corresponding to the next time of the target time in the time sequence; the first operation is executed based on a pre-trained target deep neural network, and the second operation is executed based on a preset rule;
generating a planning track of the target operation plan based on the determined plurality of planning track points;
the performing the first operation or the second operation includes:
executing a first operation or a second operation according to a preset execution probability; wherein the execution probability comprises a first probability of executing the first operation and a second probability of executing the second operation; the sum of the first probability and the second probability is 1.
2. The method of claim 1, wherein the first constraint further comprises:
a size parameter of an obstacle detected by the target device;
for any target time, the second constraint condition corresponding to the target time comprises:
the motion state parameter of the target equipment at the target moment;
the motion state parameter of the obstacle at the target moment; and
a motion state parameter of the obstacle at each time instant in the time instant sequence after the target time instant.
3. The method of claim 2, wherein the first constraint further comprises: a size parameter of the target device.
4. The method of claim 2, further comprising, prior to obtaining the first constraint and obtaining the second constraint:
respectively determining current motion state parameters of the target device and the obstacle;
and respectively estimating the motion state parameters of the target device and the obstacle at each moment in the time sequence based on the current motion state parameters of the target device and the obstacle.
5. The method according to any of claims 1-4, characterized in that for any target time instant, the first operation is performed based on a pre-trained target deep neural network by:
taking a standard normally distributed random number as a random variable;
and inputting the first constraint condition, the second constraint condition corresponding to the target moment and the random variable into the target deep neural network to obtain a result output by the target deep neural network.
6. The method according to any one of claims 1-4, wherein generating the planned trajectory of the target operational plan based on the determined plurality of planned trajectory points comprises:
based on the determined multiple planning track points, multiple alternative tracks are obtained in a polynomial curve interpolation mode;
calculating a cost value of each alternative track by adopting a preset cost function;
and selecting the candidate track with the minimum cost value as a planning track of the target operation plan.
7. An apparatus for planning a trajectory of an unmanned device, the apparatus comprising:
the first obtaining module is used for obtaining a first constraint condition corresponding to a time sequence of the target operation plan; the first constraint condition comprises a motion state parameter of the target device at the last moment in the time sequence; and
the second obtaining module is used for obtaining a second constraint condition corresponding to each target moment except the last moment in the moment sequence;
an execution module, configured to, for each target time, perform the following operations: executing a first operation or a second operation based on the first constraint condition and a second constraint condition corresponding to the target moment so as to determine a planned track point corresponding to the next moment of the target moment in the moment sequence; the first operation is executed based on a pre-trained target deep neural network, and the second operation is executed based on a preset rule;
the generating module is used for generating a planning track of the target operation plan based on the determined multiple planning track points;
the execution module is specifically used for executing a first operation or a second operation according to a preset execution probability; wherein the execution probability comprises a first probability of executing the first operation and a second probability of executing the second operation; the sum of the first probability and the second probability is 1.
8. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when being executed by a processor, carries out the method of any of the preceding claims 1-6.
9. An unmanned device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of any of claims 1-6.
CN201910462102.8A 2019-05-30 2019-05-30 Unmanned equipment running track planning method and device and unmanned equipment Active CN110187707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910462102.8A CN110187707B (en) 2019-05-30 2019-05-30 Unmanned equipment running track planning method and device and unmanned equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910462102.8A CN110187707B (en) 2019-05-30 2019-05-30 Unmanned equipment running track planning method and device and unmanned equipment

Publications (2)

Publication Number Publication Date
CN110187707A CN110187707A (en) 2019-08-30
CN110187707B (en) 2022-05-10

Family

ID=67718883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910462102.8A Active CN110187707B (en) 2019-05-30 2019-05-30 Unmanned equipment running track planning method and device and unmanned equipment

Country Status (1)

Country Link
CN (1) CN110187707B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111123927A (en) * 2019-12-20 2020-05-08 北京三快在线科技有限公司 Trajectory planning method and device, automatic driving equipment and storage medium
CN111079721B (en) * 2020-03-23 2020-07-03 北京三快在线科技有限公司 Method and device for predicting track of obstacle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107357168A (en) * 2017-06-01 2017-11-17 同济大学 A kind of unmanned vehicle barrier-avoiding method based on Chance-constrained Model PREDICTIVE CONTROL
CN108068815A (en) * 2016-11-14 2018-05-25 百度(美国)有限责任公司 System is improved for the decision-making based on planning feedback of automatic driving vehicle
JP2018177074A (en) * 2017-04-18 2018-11-15 国立大学法人 東京大学 Autonomous type underwater robot and control method for the same
CN108983813A (en) * 2018-07-27 2018-12-11 长春草莓科技有限公司 A kind of unmanned plane during flying preventing collision method and system
CN109684702A (en) * 2018-12-17 2019-04-26 清华大学 Driving Risk Identification based on trajectory predictions
CN109782779A (en) * 2019-03-19 2019-05-21 电子科技大学 AUV paths planning method under ocean current environment based on population meta-heuristic algorithms

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106114507B (en) * 2016-06-21 2018-04-03 百度在线网络技术(北京)有限公司 Local path planning method and device for intelligent vehicle
CN109937343B (en) * 2017-06-22 2023-06-13 百度时代网络技术(北京)有限公司 Evaluation framework for prediction trajectories in automated driving vehicle traffic prediction
US10571921B2 (en) * 2017-09-18 2020-02-25 Baidu Usa Llc Path optimization based on constrained smoothing spline for autonomous driving vehicles

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108068815A (en) * 2016-11-14 2018-05-25 百度(美国)有限责任公司 System is improved for the decision-making based on planning feedback of automatic driving vehicle
JP2018177074A (en) * 2017-04-18 2018-11-15 国立大学法人 東京大学 Autonomous type underwater robot and control method for the same
CN107357168A (en) * 2017-06-01 2017-11-17 同济大学 A kind of unmanned vehicle barrier-avoiding method based on Chance-constrained Model PREDICTIVE CONTROL
CN108983813A (en) * 2018-07-27 2018-12-11 长春草莓科技有限公司 A kind of unmanned plane during flying preventing collision method and system
CN109684702A (en) * 2018-12-17 2019-04-26 清华大学 Driving Risk Identification based on trajectory predictions
CN109782779A (en) * 2019-03-19 2019-05-21 电子科技大学 AUV paths planning method under ocean current environment based on population meta-heuristic algorithms

Also Published As

Publication number Publication date
CN110187707A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
Bhattacharyya et al. Multi-agent imitation learning for driving simulation
EP3805073B1 (en) Automated vehicular lane changing method and apparatus
US11900797B2 (en) Autonomous vehicle planning
CN108692734B (en) Path planning method and device
CN110316193B (en) Preview distance setting method, device, equipment and computer readable storage medium
CN108288096B (en) Method and device for estimating travel time and training model
CN110275531B (en) Obstacle trajectory prediction method and device and unmanned equipment
CN108973997B (en) Travel track determination device and automatic steering device
EP1733287B1 (en) System and method for adaptive path planning
JP2022506404A (en) Methods and devices for determining vehicle speed
CN111907521B (en) Transverse control method and device for automatic driving vehicle and storage medium
CN110779530B (en) Vehicle route generation method, device and storage medium
CN110187707B (en) Unmanned equipment running track planning method and device and unmanned equipment
CN108121347B (en) Method and device for controlling movement of equipment and electronic equipment
CN112394725B (en) Prediction and reaction field of view based planning for autopilot
CN110764518A (en) Underwater dredging robot path planning method and device, robot and storage medium
CN110057369B (en) Operation planning method and device of unmanned equipment and unmanned equipment
JP7058761B2 (en) Mobile control device, mobile control learning device, and mobile control method
CN114104005B (en) Decision-making method, device and equipment of automatic driving equipment and readable storage medium
JP7373448B2 (en) Self-location estimation method and self-location estimation device
KR102427366B1 (en) Lane estimation method and apparatus using deep neural network
CN115690839A (en) Behavior decision method and device, electronic equipment and storage medium
US20220097727A1 (en) Behavioral planning in autonomus vehicle
CN112721916A (en) Parking control method and device
CN111323015B (en) Method and device for estimating travel information and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant