CN111912423B - Method and device for predicting obstacle trajectory and training model - Google Patents

Info

Publication number: CN111912423B
Application number: CN202011087583.8A
Authority: CN (China)
Prior art keywords: obstacle, information, information type, track, determining
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111912423A
Inventors: 樊明宇, 任冬淳, 周浩, 夏华夏, 朱炎亮, 钱德恒, 杨旭
Current assignee: Beijing Sankuai Online Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing Sankuai Online Technology Co Ltd
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority: CN202011087583.8A (granted as CN111912423B); divisional application CN202011454164.3A (CN112629550B)

Classifications

    • G — Physics; G01C — Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry
    • G01C 21/343 — Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01C 21/30 — Map- or contour-matching (navigation in a road network with correlation of data from several navigational instruments)
    • G01C 21/3446 — Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G06N 20/00 — Machine learning
    • G06N 20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N 3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 — Combinations of networks

Abstract

The specification discloses a method and a device for predicting obstacle trajectories and training a model. An unmanned device can determine the driving scene in which it is currently located and the prediction duration of the predicted trajectory of each obstacle required when planning its own trajectory; select at least one information type from preset information types according to the driving scene, the prediction duration, and the like; for each selected information type, acquire each obstacle's information of that type and determine each obstacle's feature corresponding to that type from the acquired information; and, for each obstacle, determine the predicted trajectory of the obstacle within the prediction duration from the obstacle's current position and the determined features of each information type. Because the information types are selectable, the quality of the predicted obstacle trajectories is improved using the information of the selected types, and the computing power the unmanned device spends on prediction can be balanced against the accuracy of the predicted trajectories.

Description

Method and device for predicting obstacle trajectory and training model
Technical Field
The specification relates to the technical field of unmanned driving, in particular to a method and a device for predicting obstacle trajectories and training a model.
Background
At present, while an unmanned device is traveling, it can predict the trajectory of each obstacle in the surrounding environment, reasonably plan its own future trajectory based on the predicted trajectories, and judge and adjust its control quantities in advance.
Generally, an unmanned device can acquire information such as the historical trajectories of obstacles, an electronic map, traffic lights, weather, and pedestrian postures, and predict obstacle trajectories based on the acquired information; the more information is used, the more accurate the predicted trajectory and the longer the period over which the prediction is reliable. However, the more information is acquired, the larger the amount of computation the unmanned device requires and the longer it takes to predict a trajectory. Therefore, the input information types of the prediction model need to be adjusted dynamically according to the upstream and downstream functional modules' requirements on prediction accuracy and prediction frequency.
Disclosure of Invention
The embodiments of the present disclosure provide a method and an apparatus for predicting an obstacle trajectory and training a model, so as to partially solve the above problems in the prior art.
The embodiment of the specification adopts the following technical scheme:
the present specification provides a method for predicting an obstacle trajectory, the method comprising:
determining a driving scene where the unmanned equipment is located currently, and determining the prediction duration of the prediction track of each obstacle required when planning the track of the unmanned equipment;
selecting at least one information type from a plurality of preset information types according to the driving scene and/or the predicted duration;
for each selected information type, acquiring each obstacle's information of that information type, and determining each obstacle's feature corresponding to that information type according to the acquired information;
and for each obstacle, determining a predicted trajectory of the obstacle within the prediction duration according to the current position of the obstacle and the determined features of the obstacle corresponding to the selected information types.
Optionally, according to the driving scenario, at least one information type is selected from a plurality of preset information types, and specifically includes:
acquiring a corresponding relation between each driving scene and each information type which are determined in advance;
and selecting at least one information type from the information types according to the corresponding relation and the driving scene.
Optionally, the predicted duration is positively correlated with the number of selected information types.
Optionally, the selected information type includes a track information type;
acquiring the information of the information type of each obstacle specifically includes:
acquiring historical tracks of all obstacles in a past specified duration;
determining the feature of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle, specifically comprising:
determining, for each moment within the past specified duration, the spatial interaction feature of each obstacle at that moment according to the position information of each obstacle at that moment, wherein the spatial interaction feature represents the interaction information among the obstacles at that moment;
and determining the spatio-temporal interaction feature of each obstacle at the current moment according to the spatial interaction features of each obstacle at each moment within the past specified duration, wherein the spatio-temporal interaction feature represents the interaction information among the obstacles over the past specified duration.
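As a hedged illustration of the two feature levels just described (not the patent's actual network), a per-moment spatial interaction feature can be built from pairwise relative positions, and a spatio-temporal feature obtained by aggregating those spatial features over the past specified duration. The function names, the mean-offset/mean-distance encoding, and the mean pooling over time (standing in for the recurrent network a real model would likely use) are all illustrative assumptions.

```python
import numpy as np

def spatial_interaction_feature(positions):
    """positions: (N, 2) array of obstacle positions at one moment.

    Returns one feature vector per obstacle encoding its interaction
    with the other obstacles at that moment (here: mean relative offset
    and mean distance to the others -- an illustrative choice).
    """
    n = positions.shape[0]
    feats = np.zeros((n, 3))
    for i in range(n):
        rel = np.delete(positions, i, axis=0) - positions[i]  # offsets to the other obstacles
        feats[i, :2] = rel.mean(axis=0)                       # mean relative offset
        feats[i, 2] = np.linalg.norm(rel, axis=1).mean()      # mean distance
    return feats

def spatiotemporal_interaction_feature(history):
    """history: (T, N, 2) positions of N obstacles over T past moments.

    Aggregates the per-moment spatial features over time (mean pooling
    stands in for a recurrent network over the specified duration).
    """
    per_moment = np.stack([spatial_interaction_feature(p) for p in history])
    return per_moment.mean(axis=0)  # (N, 3) spatio-temporal feature per obstacle
```

For two obstacles moving in parallel, the relative geometry is constant, so the pooled spatio-temporal feature equals the per-moment spatial feature.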
Optionally, the selected information type comprises a pedestrian information type;
acquiring the information of the information type of each obstacle specifically includes:
determining a number of pedestrian-type obstacles in an environment surrounding the drone;
acquiring action information of obstacles of various pedestrian types;
determining the feature of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle, specifically comprising:
for each obstacle of the pedestrian type, determining the action characteristics of the obstacle of the pedestrian type according to the action information of the obstacle of the pedestrian type;
and determining the global action characteristics of the obstacles of the pedestrian types according to the action characteristics of the obstacles of the pedestrian types, wherein the global action characteristics are used for representing the interactive information of the obstacles of the pedestrian types.
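A minimal sketch of the pedestrian feature step, under assumptions: each pedestrian's action information (e.g. flattened pose keypoints) is mapped to an action feature by a linear layer, and the global action feature is obtained by max pooling, so it has a fixed size and is order-invariant however many pedestrians are present. The linear-plus-tanh mapping and the max pooling are illustrative choices, not the patent's specified architecture.

```python
import numpy as np

def action_feature(action_info, w):
    """Map one pedestrian's action information (e.g. flattened pose
    keypoints) to an action feature via a linear layer (illustrative)."""
    return np.tanh(w @ action_info)

def global_action_feature(action_infos, w):
    """Max-pool the per-pedestrian action features so the global feature
    has a fixed size regardless of how many pedestrians are present."""
    feats = np.stack([action_feature(a, w) for a in action_infos])
    return feats.max(axis=0)
```

Max pooling (rather than concatenation) is one common way to represent interaction information among a variable number of agents.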
Optionally, determining a predicted trajectory of the obstacle within the prediction duration according to the current position of the obstacle and the determined features of the obstacle corresponding to the selected information types specifically includes:
fusing the determined features of the obstacle corresponding to the selected information types to obtain a guide feature;
and determining the predicted trajectory of the obstacle within the prediction duration according to the current position of the obstacle and the guide feature.
Optionally, determining the predicted trajectory of the obstacle within the prediction duration according to the current position of the obstacle and the guide feature specifically includes:
determining an individual feature of the obstacle according to the current position of the obstacle and the guide feature;
and inputting the individual feature, the guide feature, and the current position of the obstacle into a pre-trained prediction model to obtain the predicted trajectory of the obstacle determined by the prediction model.
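The data flow just claimed (current position + guide feature → individual feature; individual feature + guide feature + current position → prediction model → trajectory) can be sketched as follows. The weights are random placeholders and the "prediction model" is a linear readout of per-step offsets, standing in for the pre-trained model; all dimensions and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def individual_feature(position, guidance, w_ind):
    """Combine the obstacle's current position with the guide feature."""
    return np.tanh(w_ind @ np.concatenate([position, guidance]))

def predict_trajectory(position, guidance, w_ind, w_out, steps):
    """Stand-in prediction model: a linear readout over the concatenated
    inputs, reshaped into `steps` future (x, y) offsets and accumulated
    from the current position. A real model would be a trained network."""
    ind = individual_feature(position, guidance, w_ind)
    x = np.concatenate([ind, guidance, position])
    offsets = (w_out @ x).reshape(steps, 2)
    return position + np.cumsum(offsets, axis=0)  # absolute future positions

# Illustrative dimensions: guide feature of size 3, individual feature of size 4.
w_ind = rng.normal(size=(4, 5))   # input: 2 (position) + 3 (guide feature)
w_out = rng.normal(size=(10, 9))  # output: 5 steps * 2 coords; input: 4 + 3 + 2
traj = predict_trajectory(np.array([1.0, 2.0]), np.zeros(3), w_ind, w_out, steps=5)
```

The cumulative sum of offsets guarantees the trajectory starts from (and is continuous with) the obstacle's current position.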
The present specification provides a method of model training, the method comprising:
for each obstacle, acquiring the trajectory of the obstacle within a historical specified duration and the information of each preset information type of the obstacle within that specified duration;
determining a designated time, and taking the acquired trajectory of the obstacle after the designated time as a sample trajectory;
for each training pass, selecting at least one kind of information from the trajectory of each obstacle before the designated time and the information of each obstacle corresponding to each information type within the specified duration, and inputting the selected information into the prediction model to be trained to obtain the to-be-optimized predicted trajectory output by the prediction model;
and training the prediction model to be trained according to the sample trajectory and the prediction trajectory to be optimized obtained in each training process.
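A toy sketch of this training procedure, under stated assumptions: the history is split at a designated time, each pass randomly selects at least one information type (unselected types are zeroed so the input size stays fixed), and a linear predictor is fitted by gradient steps on the squared error against the sample trajectory. The type names, the zero-masking scheme, and the linear model are all assumptions; a real implementation would train a neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

INFO_TYPES = ["trajectory", "map", "traffic_light", "pedestrian"]  # assumed names

def make_input(history_before, info, selected):
    """Concatenate the pre-split trajectory with the selected info types;
    unselected types contribute zeros so the input size stays fixed."""
    parts = [history_before.ravel()]
    for t in INFO_TYPES:
        parts.append(info[t] if t in selected else np.zeros_like(info[t]))
    return np.concatenate(parts)

def train(samples, dim_out, lr=0.01, iters=200):
    """samples: list of (history_before, info_dict, sample_trajectory).
    Each pass selects at least one info type at random and takes one
    gradient step on the squared error of a linear predictor."""
    dim_in = make_input(*samples[0][:2], set(INFO_TYPES)).size
    w = np.zeros((dim_out, dim_in))
    for _ in range(iters):
        hist, info, target = samples[rng.integers(len(samples))]
        k = rng.integers(1, len(INFO_TYPES) + 1)
        selected = set(rng.choice(INFO_TYPES, size=k, replace=False))
        x = make_input(hist, info, selected)
        err = w @ x - target.ravel()   # prediction error vs. sample trajectory
        w -= lr * np.outer(err, x)     # gradient of 0.5 * ||err||^2
    return w
```

Randomizing the selected subset during training is what lets a single model later accept whichever information types the deployment-time selection step picks.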
The present specification provides an apparatus for predicting an obstacle trajectory, the apparatus comprising:
the system comprises a duration determining module, a track calculating module and a track planning module, wherein the duration determining module is used for determining a driving scene where the unmanned equipment is located currently and determining the prediction duration of the prediction track of each obstacle required when the track of the unmanned equipment is planned;
the selection module is used for selecting at least one information type from a plurality of preset information types according to the driving scene and/or the predicted duration;
the characteristic determining module is used for acquiring the information of the information type of each obstacle aiming at each selected information type, and determining the characteristic of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle;
and the prediction module is used for determining the predicted track of each obstacle within the prediction duration according to the current position of the obstacle and the determined characteristics of each obstacle corresponding to each selected information type.
The present specification provides an apparatus for model training, the apparatus comprising:
the acquisition module is used for acquiring the track of each obstacle within the historical specified duration and the information of each preset information type of the obstacle within the specified duration;
the time determining module is used for determining a specified time and taking the acquired track of the obstacle after the specified time as a sample track;
the input module is used, in each training pass, for selecting at least one kind of information from the trajectory of each obstacle before the designated time and the information of each obstacle corresponding to each information type within the specified duration, and inputting the selected information into the prediction model to be trained to obtain the to-be-optimized predicted trajectory output by the prediction model;
and the training module is used for training the prediction model to be trained according to the sample track and the prediction track to be optimized obtained in each training process.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method of predicting an obstacle trajectory.
The present specification provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the method for predicting the obstacle trajectory.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
In this specification, the unmanned device may determine the driving scene in which it is currently located and the prediction duration of the predicted trajectory of each obstacle required when planning its own trajectory; select at least one information type from a plurality of preset information types according to the driving scene and/or the prediction duration; for each selected information type, acquire each obstacle's information of that type and determine each obstacle's feature corresponding to that type; and, for each obstacle, determine the predicted trajectory of the obstacle within the prediction duration according to the obstacle's current position and the determined features corresponding to the selected information types. Because the information types are selectable, the quality of the predicted obstacle trajectories is improved using the information of the selected types, the computing power spent on prediction can be balanced against trajectory accuracy, and prediction results matching the requirements of different upstream and downstream functions/modules can be provided on demand in complex scenes, improving the accuracy of the obstacle trajectories while guaranteeing the real-time performance of prediction.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of it, illustrate embodiments of the specification and, together with the description, serve to explain the specification without limiting it. In the drawings:
fig. 1 is a flowchart of a method for predicting an obstacle trajectory according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for model training provided by embodiments of the present disclosure;
fig. 3 is a schematic structural diagram of an apparatus for predicting an obstacle trajectory according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a model training apparatus provided in an embodiment of the present disclosure;
fig. 5 is a schematic view of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present disclosure clearer, the technical solutions of the present disclosure are described clearly and completely below with reference to specific embodiments and the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without creative effort fall within the protection scope of the present specification.
In the field of unmanned driving, an unmanned device needs to plan its own trajectory in real time while traveling, so as to avoid the various obstacles in its surrounding environment and drive safely. The obstacles in the surrounding environment may include static and dynamic obstacles; to plan its own trajectory better, the unmanned device can therefore predict the trajectory of each dynamic obstacle in the surrounding environment and plan its own trajectory based on the predicted trajectories.
In the prior art, when predicting obstacle trajectories, the unmanned device may acquire, for each obstacle, information corresponding to a plurality of information types (for example, a trajectory type and a map type, i.e., the historical trajectory of each obstacle, electronic map information, and so on), extract a feature from each piece of acquired information, and predict the obstacle's trajectory based on the extracted features. Generally, the more information types are acquired, the more accurate the predicted trajectory, but also the greater the computation required and the longer the prediction takes. Since the unmanned device needs to plan its own trajectory in real time, the longer it spends predicting obstacle trajectories, the more its own trajectory planning, and hence its normal operation, is affected.
Therefore, this specification provides a method of predicting obstacle trajectories in which an information type is selected from among the available information types in consideration of factors such as the demand for predicted obstacle trajectories when the unmanned device plans its own trajectory and the driving scene in which the device is operating; information is then acquired for the selected types, and the obstacle trajectories are predicted from the acquired information. This both satisfies the real-time requirement on predicted obstacle trajectories while the unmanned device is traveling and, on that basis, makes full use of computing resources to obtain accurate predicted trajectories.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for predicting an obstacle trajectory according to an embodiment of the present disclosure, which may specifically include the following steps:
s100: the method comprises the steps of determining a driving scene where the unmanned equipment is located currently, and determining prediction duration of prediction tracks of all obstacles required when the track of the unmanned equipment is planned.
Generally, before planning the trajectory of the unmanned device, the trajectories of obstacles in its surrounding environment may be predicted. In this specification, the trajectory of each obstacle may be predicted by the unmanned device itself, or the unmanned device may transmit the information necessary for prediction to a server, which predicts the trajectories. Unmanned equipment mainly refers to intelligent unmanned devices such as unmanned vehicles and unmanned aerial vehicles, mainly used to replace manual delivery of goods, for example transporting sorted goods within a large goods storage center, or transporting goods from one place to another. The server may be an electronic device used for scheduling, and may be a single device or a distributed server composed of multiple devices, which is not limited in this specification.
For convenience of description, the present specification takes an example in which the unmanned equipment predicts the trajectory of each obstacle.
First, the unmanned device may determine the driving scene in which it is currently located.
Specifically, the unmanned device may store an electronic map in advance and divide it into a plurality of areas, with different areas corresponding to different driving scenes. For example, in the electronic map, the driving scene corresponding to the area where an expressway is located may be determined to be an expressway driving scene, and the driving scene corresponding to the area where an intersection is located to be an ordinary road driving scene.
The unmanned device can locate its current position in real time, for example using a Global Positioning System (GPS) module installed on the device; alternatively, an image sensor may be installed on the device, image data collected with it, and the current position determined from the collected images using techniques such as computer-vision positioning.
The unmanned equipment can determine the area in the electronic map where the unmanned equipment is located according to each area in the electronic map and the current location, and determine the driving scene where the unmanned equipment is currently located based on the corresponding relation between each area and each driving scene.
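The area-to-scene lookup described above can be sketched as follows. The region names, coordinates, and scene labels are invented for illustration, and axis-aligned rectangles stand in for whatever region geometry a real electronic map would use.

```python
# Illustrative sketch: axis-aligned map regions tagged with a driving scene.
# All names, bounds, and labels here are assumptions.
REGIONS = [
    {"name": "ring_road", "bounds": (0.0, 0.0, 100.0, 20.0), "scene": "expressway"},
    {"name": "crossing_a", "bounds": (40.0, 20.0, 60.0, 40.0), "scene": "ordinary_road"},
]

def driving_scene(x, y, default="ordinary_road"):
    """Return the driving scene of the region containing (x, y),
    falling back to a default when the position matches no region."""
    for r in REGIONS:
        x0, y0, x1, y1 = r["bounds"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return r["scene"]
    return default
```

For example, a position located inside the `ring_road` rectangle maps to the expressway scene, while a position inside `crossing_a` maps to the ordinary road scene.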
Alternatively, in this specification, the unmanned device may acquire sensing data through installed sensors, for example image data through an image sensor and point cloud data through a lidar, determine scene information of its environment from the sensing data via semantic segmentation and the like, and determine its current driving scene from that scene information. For example, if the scene information is determined to include traffic lights, pedestrians, and the like, the current driving scene may be determined to be an ordinary road driving scene.
Based on the embodiments provided above, the driving scene in which the unmanned device is currently located can be determined. Of course, there may be other ways to determine the current driving scene, for example obtaining information on the current driving scene from a server; this specification does not limit the types of driving scenes of the unmanned device.
Meanwhile, the unmanned equipment can also determine the prediction duration of the prediction track of each obstacle required when planning the track of the unmanned equipment.
Specifically, in this specification, planning the trajectory of the unmanned device means planning its travel trajectory over a future period of time; likewise, predicting the trajectory of an obstacle means predicting its possible travel trajectory over a future period of time. Both the planned trajectory of the unmanned device and the predicted trajectory of an obstacle are therefore trajectories not yet driven at future moments.
Since the predicted trajectory covers a period of time in the future, the prediction duration is simply the length of that period: if the obstacle's trajectory is predicted over the next N seconds, the prediction duration is N seconds.
Generally, the trajectory-planning function of an unmanned device may be divided into several modules, for example an environment sensing module, an obstacle trajectory prediction module, a planning module, and a control module. The environment sensing module determines the information of each obstacle in the surrounding environment and other information such as environment information, providing information of various types to the obstacle trajectory prediction module; the obstacle trajectory prediction module determines the predicted trajectory of each obstacle; the planning module determines the planned trajectory of the unmanned device; and the control module controls the unmanned device. The prediction duration of the predicted trajectories required for planning may therefore generally be determined by a downstream module such as the planning module or the control module and sent to the obstacle trajectory prediction module. That is, in this specification the prediction duration may be determined by the module that subsequently processes the predicted obstacle trajectories, and the unmanned device may obtain this prediction duration and predict the trajectory of each obstacle based on it.
For example, if the planning module of the unmanned device determines that a predicted trajectory of each obstacle within five seconds of the future is required, the predicted time duration is five seconds, and the unmanned device may obtain the predicted time duration of the five seconds and use the predicted time duration to predict the trajectory of each obstacle.
In addition, the downstream module may further determine a frequency of invoking the predicted trajectory of each obstacle as the predicted frequency, that is, how often the downstream module invokes the predicted trajectory of the obstacle from the obstacle trajectory prediction module, wherein the predicted frequency is inversely related to the predicted duration, that is, the higher the predicted frequency is, the shorter the predicted duration is.
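The inverse relation between prediction frequency and prediction duration can be written as a one-line helper. The proportionality constant is an assumption for illustration; the text only states that the two quantities are inversely related.

```python
def prediction_duration(call_frequency_hz, horizon_per_hz=5.0):
    """Prediction duration shrinks as the downstream module calls for
    predictions more often (inverse relation; the constant is assumed)."""
    return horizon_per_hz / call_frequency_hz
```

So a module polling at 5 Hz receives a much shorter horizon than one polling once per second.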
Of course, in addition to obtaining the predicted time length from the subsequent processing module (e.g., the planning module), the predicted time length may also be determined in other manners in this specification, for example, determining a calculation resource of the unmanned device, determining the predicted time length according to the calculation resource, and the like.
S102: and selecting at least one information type from a plurality of preset information types according to the driving scene and/or the predicted duration.
After determining the driving scene in which it is currently located and the prediction duration of each obstacle's predicted trajectory, the unmanned device may select at least one information type from a plurality of preset information types.
The information type may include a track information type, a weather information type, a map information type, a traffic light information type, a pedestrian information type, and the like. Of course, other information types, such as obstacle types, may be included in addition to the above-listed information types, and the number of information types is not limited in this specification. Information of each information type can be used to predict the trajectory of the obstacle.
First, at least one information type is selected according to the driving scene.
The unmanned device may acquire a predetermined correspondence between each driving scene and each information type, and select at least one information type from the information types according to the correspondence and the driving scene where the unmanned device is currently located.
Specifically, the unmanned device may determine in advance a correspondence between each driving scene and each information type. In general, considering actual circumstances, information of different information types contributes differently to the prediction of an obstacle trajectory in different driving scenes. For example, in an expressway driving scene, pedestrians, traffic lights, and the like rarely occur, whereas in an ordinary road driving scene (for example, a road intersection region), information of the pedestrian information type, the traffic light information type, and the like occurs with a very high probability. Since information of these information types occurs with different probabilities in different driving scenes, the correspondence between each driving scene and each information type may be determined based on actual circumstances. For example, the information types corresponding to the ordinary road driving scene may be set to the pedestrian information type, the traffic light information type, the track information type, and the like.
Alternatively, when the unmanned device determines the driving scene, scene information of the environment where the unmanned device is located may be determined through semantic segmentation and the like. If the scene information includes a road intersection, the driving scene may correspond to the traffic light information type; if the scene information includes pedestrian information, the driving scene may correspond to the pedestrian information type; if the scene information includes an overpass and the like, the driving scene may correspond to the map information type; and if the scene information includes weather change information, the driving scene may correspond to the weather information type, and so on.
In summary, in this specification, a correspondence relationship between each driving scenario and each information type may be set in advance, and at least one information type may be selected from among the information types based on the current driving scenario and the correspondence relationship.
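As a concrete illustration, the correspondence described above can be sketched as a simple lookup table. The scene names and information type names below are hypothetical examples, not values given in the specification:

```python
# Sketch of a scene-to-information-type correspondence table.
# All scene and type names are illustrative, not from the specification.
SCENE_TO_INFO_TYPES = {
    "expressway": ["track"],
    "ordinary_road": ["pedestrian", "traffic_light", "track"],
    "intersection": ["pedestrian", "traffic_light", "track", "map"],
}

def select_info_types(driving_scene: str) -> list:
    """Select at least one information type for the current driving scene.

    Falls back to the track information type when the scene is not in the
    table, so that at least one information type is always selected.
    """
    return SCENE_TO_INFO_TYPES.get(driving_scene, ["track"])
```

For instance, `select_info_types("expressway")` would return only the track information type, reflecting that pedestrian and traffic light information rarely occurs in that scene.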
Second, at least one information type is selected according to the predicted duration.
After determining the predicted time duration, the unmanned device may select at least one information type among the information types according to a relationship between the predicted time duration and the number of the selected information types.
Specifically, the predicted duration is positively correlated with the number of selected information types: the longer the predicted duration, the greater the number of selected information types, and the shorter the predicted duration, the smaller the number. Likewise, the unmanned device may also select at least one information type based on the predicted frequency; the higher the predicted frequency, the fewer information types are selected, that is, the predicted frequency is inversely related to the number of selected information types. This is because the longer the predicted duration (and the lower the predicted frequency) of the predicted trajectories required for planning the trajectory of the unmanned device, the more accurate the required predicted trajectories must be, and the more time is reserved for predicting the trajectories of the obstacles. Therefore, information of more information types can be acquired for predicting the trajectory of an obstacle, which not only ensures the accuracy of the obstacle trajectory prediction but also makes full use of the computing resources and the reserved time.
In general, the predicted duration takes values in a value interval: when the predicted duration takes the minimum value, one information type may be selected, and when it takes the maximum value, all information types may be selected. Similarly, the predicted frequency also takes values in a value interval: when the predicted frequency takes the minimum value, all information types may be selected, and when it takes the maximum value, one information type may be selected. When the predicted duration takes a certain value, after the number of information types corresponding to that value is determined, the manner of selecting that number of information types is not limited in this specification. For example, if the predicted duration is five seconds and the number of information types corresponding to five seconds is determined to be three, then three information types are selected.
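One way to realize the positive correlation over a value interval is a clamped linear interpolation between one type at the minimum duration and all types at the maximum. The interval bounds, the total number of types, and the linear form below are illustrative assumptions:

```python
def num_types_for_duration(duration, d_min=1.0, d_max=8.0, total_types=5):
    """Map a prediction duration (seconds) to a number of information types.

    The count is positively correlated with the duration: the minimum
    duration selects one type, the maximum selects all types.  The linear
    interpolation and the bounds are illustrative assumptions.
    """
    duration = max(d_min, min(d_max, duration))  # clamp into the value interval
    frac = (duration - d_min) / (d_max - d_min)
    return 1 + round(frac * (total_types - 1))
```

Under these assumed bounds, a five-second predicted duration yields three information types, matching the worked example above; a frequency-based variant would simply invert the interpolation.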
Based on the above, this specification may also select at least one information type from the information types jointly according to the driving scene, the predicted duration, the predicted frequency, and the like.
Specifically, after the current driving scene, the predicted duration, and the predicted frequency are determined, the number of information types corresponding to the predicted duration (or to the predicted frequency) may be selected from the information types corresponding to the current driving scene. Following the above example, the information types corresponding to the current driving scene are the pedestrian information type, the traffic light information type, the track information type, and the like, and the number of information types corresponding to the predicted duration is three; therefore, three information types may be randomly selected from the pedestrian information type, the traffic light information type, the track information type, and the like, or all the information types corresponding to the current driving scene may be sorted and three information types selected according to the sorting result. Alternatively, when the number of information types corresponding to the current driving scene is less than the number corresponding to the predicted duration, all the information types corresponding to the current driving scene may be selected, and several further information types may be randomly selected from the other information types not corresponding to the current driving scene, so that the number of selected information types matches the number corresponding to the predicted duration or the predicted frequency.
In this specification, the unmanned device may also determine, based on an upstream module such as an environment perception module, whether information of each information type can be provided, and select at least one information type accordingly, that is, select at least one information type from among the information types for which the upstream module can provide information. Of course, if the unmanned device has selected several information types according to other information, such as the driving scene or the predicted duration, and determines that the upstream module cannot provide information of one or more of the selected information types, the unmanned device may or may not select another information type instead.
In addition to selecting at least one information type according to information such as the driving scene, the predicted duration, and the predicted frequency, the information type may also be selected according to other conditions. That is, in this specification, situations other than determining the driving scene and the predicted duration may trigger the action of selecting an information type. For example, the type of each obstacle may be determined, where the obstacle types may include a pedestrian obstacle, a large vehicle obstacle, a small vehicle obstacle, and the like, and the information type is selected according to the obstacle type. Therefore, in this specification, selecting at least one information type from among the information types is the necessary step; before the information types are selected, the driving scene, the predicted duration, the type of each obstacle, and the like may be determined and the information types selected on that basis, or other information may be determined and the information types selected based on that other information.
It should be noted that, because at least one information type is selected in this specification, the information types may be further classified into a mandatory category and an optional category. The mandatory category may include the track information type, and the optional category may include the other information types besides the track information type, according to actual circumstances. Of course, according to other rules, any other information type may serve as a mandatory information type, and the track information type may serve as an optional information type. In addition, this specification does not limit the number of information types included in the mandatory category or in the optional category.
S104: and acquiring, for each selected information type, information of the information type of each obstacle, and determining, according to the acquired information of the information type of each obstacle, the feature of each obstacle corresponding to the information type.
After the information types are selected, for each selected information type, the information of the information type of each obstacle may first be acquired, and the features of each obstacle corresponding to the information type may then be determined according to the acquired information.
When the selected information type is the track information type, the unmanned device may acquire the historical track of each obstacle within a past specified duration. For any moment within the past specified duration, the spatial interaction feature of the obstacles at that moment is determined according to the position information of each obstacle at that moment, where the spatial interaction feature represents the interaction information of the obstacles at that moment. Then, according to the spatial interaction features at each moment within the past specified duration, the spatio-temporal interaction feature of the obstacles at the current moment is determined, where the spatio-temporal interaction feature represents the interaction information of the obstacles over the past specified duration.
Specifically, for each obstacle, the unmanned device may obtain the historical track of the obstacle within the past N seconds and, for each moment within the past N seconds, extract the position feature of the obstacle from its position information at that moment. A pooling calculation is then performed over the position features of all obstacles at that moment to obtain the spatial interaction feature at that moment, and so on, yielding the spatial interaction feature at each moment within the past N seconds. The unmanned device may input the spatial interaction features of each moment within the past N seconds into a machine learning model trained in advance to obtain the spatio-temporal interaction feature of the obstacles over the past N seconds up to the current moment.
Here, the machine learning model may be a neural network model, in particular a Recurrent Neural Network (RNN) model such as a Long Short-Term Memory (LSTM) model or a Gated Recurrent Unit (GRU) model.
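The pooling-then-recurrence pipeline above can be sketched as follows. The quadratic position feature map, the max pooling across obstacles, and the single Elman-style recurrence (a minimal stand-in for the LSTM/GRU named in the text) are all illustrative assumptions, as are the random, untrained weights:

```python
import numpy as np

def spatial_interaction_features(positions):
    """positions: array of shape (T, K, 2) -- T timesteps, K obstacles, (x, y).

    For each timestep, a per-obstacle position feature is max-pooled across
    all obstacles, yielding one spatial interaction feature per timestep.
    The feature map and the pooling choice are illustrative assumptions.
    """
    feats = np.concatenate([positions, positions ** 2], axis=-1)  # (T, K, 4)
    return feats.max(axis=1)  # (T, 4): pooled over the K obstacles

def spatiotemporal_feature(spatial_feats, hidden_dim=8, seed=0):
    """Aggregate the per-timestep spatial features with a minimal recurrence.

    A trained system would use an LSTM/GRU here; random fixed weights are
    used only to keep the sketch self-contained.
    """
    rng = np.random.default_rng(seed)
    W_in = rng.normal(scale=0.1, size=(spatial_feats.shape[1], hidden_dim))
    W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    h = np.zeros(hidden_dim)
    for x in spatial_feats:            # iterate over the past timesteps
        h = np.tanh(x @ W_in + h @ W_h)
    return h                           # spatio-temporal interaction feature
```

The output is a single fixed-size vector per scene summarizing all obstacles over the past window, which matches the role the spatio-temporal interaction feature plays downstream.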
When the selected information type is the pedestrian information type, the unmanned device may determine the pedestrian-type obstacles in its surrounding environment and acquire their action information. For each pedestrian-type obstacle, the action feature of that obstacle is determined according to its action information, and the global action feature of the pedestrian-type obstacles is then determined according to the action features of all pedestrian-type obstacles, where the global action feature represents the interaction information of the pedestrian-type obstacles.
Specifically, the unmanned device may determine the type of each obstacle in the surrounding environment through semantic segmentation, object detection, and the like, thereby identifying the pedestrian-type obstacles (i.e., pedestrians). For each pedestrian-type obstacle, the unmanned device may acquire the pedestrian's action information; in this specification it may sense this information through the perception module, or acquire it in other existing manners. The action feature of each pedestrian is then extracted from that pedestrian's action information, and the action features of all pedestrians are fused, for example by a pooling calculation, to obtain the global action feature.
When the selected information type is the map information type, the unmanned device may, for each obstacle, acquire map information within a specified range around the obstacle according to the obstacle's position information. The map information may include lane line information, road driving direction, and the like. The map information is input into a Convolutional Neural Network (CNN) model and convolved by the convolutional layers of the CNN to obtain the map features. Because all the obstacles are in the surrounding environment of the unmanned device, the map information within the specified range around each obstacle may be the same, or the map information within the specified range around each obstacle may be spliced together into complete map information; the map information around each obstacle may then be input into the CNN model to obtain the map features.
When the selected information type is the traffic light information type, the unmanned device may, for each obstacle, determine the traffic lights in the surrounding environment of the obstacle according to the obstacle's position. The traffic light information may include the position of the traffic light, the current state of the traffic light, and the holding time of that state. When the traffic light features are extracted, traffic light parameters may be set, and the features of the traffic lights in the surrounding environment of the obstacle are determined according to the acquired traffic light information and the traffic light parameters, serving as the features of the obstacle corresponding to the traffic light information type. As in the description of the map information type, where one or more obstacles correspond to the same traffic light, the features of that traffic light may be directly taken as the features of each such obstacle corresponding to the traffic light information type.
When the selected information type is the weather information type, the unmanned device may obtain the weather information of the environment in which each obstacle is located; as in the description of the map information type, the information of each obstacle corresponding to the weather information type is the same weather information. The weather information may include weather types such as rain, fog, snow, and clear weather, and may further include degrees of the weather, such as light snow, heavy snow, and blizzard. The unmanned device may directly acquire the weather information, set weather parameters when extracting the weather features, and extract the features of the weather information according to the acquired weather information and the weather parameters, as the features of each obstacle corresponding to the weather information type.
Of course, in addition to the above-described manners of determining the features of the information of each information type, in this specification a feature extraction model may be provided for each information type. For the feature extraction model corresponding to an information type, the information of each obstacle corresponding to that information type is input into the feature extraction model, which outputs the features of each obstacle corresponding to the information of that information type. The feature extraction model is a machine learning model, and may be a neural network model such as an LSTM or CNN, or another machine learning model such as a Support Vector Machine (SVM) model.
As described above, a plurality of information types may be included in this specification, and the track information type, the pedestrian information type, and the like provided above are only preferred embodiments; for other information types, the information of each obstacle corresponding to the information type may likewise be input into the feature extraction model to obtain the features of each obstacle corresponding to the information of that information type.

S106: and determining, for each obstacle, the predicted track of the obstacle within the predicted duration according to the current position of the obstacle and the determined features of each obstacle corresponding to each selected information type.
After determining the features of each obstacle corresponding to each selected information type, for each obstacle, the unmanned device may determine a predicted trajectory of each obstacle based on the information of the obstacle and the features of each obstacle corresponding to each selected information type.
First, the unmanned device may process the determined features of each obstacle corresponding to each selected information type to obtain the guidance features.
Specifically, the unmanned device may perform operations such as pooling and concatenation on the features of each obstacle corresponding to each selected information type to obtain the guidance features. Because the dimension of the feature obtained by the concatenation operation differs from that of the individual features, if the concatenation operation is performed, the concatenated feature may be projected so that its dimension is the same as the dimension of the features before concatenation.
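The concatenate-then-project step can be sketched as follows. The random linear projection stands in for a learned projection layer, and all dimensions are illustrative assumptions:

```python
import numpy as np

def guidance_feature(type_feats, out_dim=None, seed=0):
    """Fuse the per-information-type features into one guidance feature.

    Concatenation grows the dimension with the number of selected types,
    so the concatenated vector is projected back to a fixed dimension.
    In a trained system this projection would be a learned layer; here a
    seeded random matrix keeps the sketch self-contained.
    """
    concat = np.concatenate(type_feats)      # dimension grows with the type count
    if out_dim is None:
        out_dim = len(type_feats[0])         # match a single feature's dimension
    rng = np.random.default_rng(seed)
    proj = rng.normal(scale=0.1, size=(len(concat), out_dim))
    return concat @ proj                     # projected back to the original dim
```

However many information types are selected, the guidance feature keeps a fixed dimension, which is what lets the downstream prediction model accept a variable selection of types.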
Then, for each obstacle, the unmanned device can acquire the current position information of the obstacle, and determine the predicted track of the obstacle within the predicted duration according to the current position of the obstacle and the guidance features.
Specifically, the individual characteristics of the obstacle are determined according to the current position of the obstacle and the guide characteristics, and the individual characteristics, the guide characteristics and the current position of the obstacle are input into a pre-trained prediction model to obtain the predicted trajectory of the obstacle determined by the prediction model.
For example, the current position characteristic of the obstacle may be determined based on the current position of the obstacle, and the guidance characteristic and the current position characteristic of the obstacle may be processed, such as summed, pooled, etc., to obtain an individual characteristic of the obstacle that characterizes the operation of the obstacle. In this specification, the individual characteristics, the guiding characteristics and the current position of the obstacle may be input into a prediction model to obtain a predicted trajectory of the obstacle within a predicted duration, wherein the prediction model may be a model such as LSTM, GRU, etc.
In addition, in the present specification, the current position of the obstacle and the characteristics of each of the determined obstacles corresponding to each of the selected information types may be input to a machine learning model trained in advance, and the predicted trajectory of the obstacle output by the machine learning model may be obtained. Alternatively, in the present specification, the acquired current position of the obstacle and information of each obstacle corresponding to each selected information type may be input to a machine learning model trained in advance, and a predicted trajectory of the obstacle output by the machine learning model may be obtained. Likewise, the machine learning model may be a model of LSTM, GRU, or the like.
The above description has been given taking an example in which the unmanned device predicts the trajectory of the obstacle, and in this specification, the server may also predict the trajectory of the obstacle. The server can determine a driving scene where the unmanned equipment is located at present, and determine the prediction duration of the prediction track of each obstacle required for planning the track of the unmanned equipment. Then, the server can select at least one information type from the information types according to the corresponding relation between the pre-stored driving scenes and the information types and/or the correlation relation between the pre-stored predicted time length value and the number of the selected information types. For each selected information type, the server can acquire the information of the information type of each obstacle, determine the characteristics of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle, and then process the characteristics of each obstacle corresponding to each information type to obtain the guidance characteristics. And inputting the information such as the current position, the guidance characteristics and the like of each obstacle into a prediction model to obtain the predicted track of the obstacle within the prediction duration output by the prediction model. For the detailed description of each step, reference may be made to the content of the trajectory of the obstacle predicted by the unmanned device, which is not described in detail here.
Based on the above, this specification further provides a method for training a model, as shown in fig. 2; fig. 2 is a flowchart of the method for training a model provided in an embodiment of this specification, which specifically includes the following steps:
S200: and acquiring, for each obstacle, the track of the obstacle within a historical specified duration and the information of the obstacle corresponding to each preset information type within the specified duration.
S202: determining a designated time, and taking the acquired track of the obstacle after the designated time as a sample track.
S204: and selecting, for each training process, at least one kind of information from the track of each obstacle before the designated time and the information of each obstacle corresponding to each information type within the specified duration, and inputting the selected information into the prediction model to be trained to obtain the predicted track to be optimized output by the prediction model to be trained.
S206: and training the prediction model to be trained according to the sample track and the predicted track to be optimized obtained in each training process.
Specifically, in this specification, the unmanned device may acquire the track of each obstacle within a past specified duration, from time t-n to time t-n+m, and the information of each obstacle corresponding to each preset information type within that duration. For example, at the current time t, the unmanned device may acquire the track of each obstacle and the information of each obstacle corresponding to each information type from time t-n to time t-n+m. In addition, as described above, the information types may include the track information type, the weather information type, the map information type, the traffic light information type, the pedestrian information type, and the like; of course, other information types may also be included, which are not described again here. The unmanned device may also acquire tracks in different past time periods for different obstacles, and in the process of training the prediction model, each time information is input into the model, the historical track of a single obstacle and the information of that obstacle corresponding to the several selected information types may be input into the prediction model to be trained, so that the prediction model to be trained outputs the predicted track of that obstacle.
After the historical track of the obstacle and the information of the obstacle corresponding to each information type are acquired, a designated time is determined; the acquired information before the designated time is used as the input information of the prediction model to be trained, and the acquired track after the designated time is used as the sample track. Following the above example, within the period from t-n to t-n+m, time t-k is selected as the designated time; the historical track of the obstacle and the information of the obstacle corresponding to each information type from time t-n to time t-k may be used as the input information, and the historical track of the obstacle from time t-k+1 to time t-n+m may be used as the sample track. When the input information is determined, at least one kind of information may be selected from the historical track of the obstacle from time t-n to time t-k and the information of the obstacle corresponding to each information type, and the selected information is input into the prediction model to be trained.
In this specification, for each training process of the prediction model to be trained, the unmanned device may select at least one kind of information as the input information, where the selectable information includes the track of each obstacle before the designated time and the information of each obstacle corresponding to each information type before the designated time. The selections of input information in different training processes do not affect one another. The manner of selecting the input information in each training process may include random selection, selection according to a rule, and the like; the rule may include that the predicted duration is positively correlated with the number of selected information types, that each information type corresponds to a driving scene, and so on, for which reference may be made to the above contents. Alternatively, the information selected in the current training process may be information of information types that were not selected in the previous training process. For example, if in the previous training process the selected input information was the historical track of each obstacle and the information of the obstacle corresponding to the weather information type, then in the current training process the selected input information may be the historical track of each obstacle and the information of the obstacle corresponding to the traffic light information type, and so on.
After the input information is selected, the selected input information can be input into the prediction model to be trained, and the prediction track to be optimized output by the prediction model to be trained is obtained. And the prediction track to be optimized is predicted by the prediction model to be trained based on the selected input information.
In order to enable the prediction model to be trained to output a more accurate predicted track to be optimized, a loss may be determined according to the sample track and the predicted track to be optimized, and the prediction model to be trained is trained by minimizing this loss. The difference between the sample track and the predicted track to be optimized may be determined as the loss, for example by determining the cross entropy of the sample track and the predicted track to be optimized. The parameters of the prediction model to be trained may then be optimized by gradient descent or the like, and when a preset number of training iterations is reached or the loss value is smaller than a preset loss threshold, the unmanned device completes the training of the prediction model.
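One training iteration of this scheme can be sketched as follows. The linear predictor and squared-error loss are deliberate simplifications of the LSTM/GRU model and the loss named in the text; masking unselected information types with zeros (so the input dimension stays fixed) and all dictionary keys are illustrative assumptions:

```python
import random
import numpy as np

def train_step(model_params, sample, lr=0.05):
    """One illustrative training iteration for the prediction model.

    `sample["inputs"]` maps each preset information type to that obstacle's
    feature vector before the designated time; `sample["target"]` is the
    sample track after the designated time.  A random subset of types is
    drawn independently per iteration, the model predicts a track, and a
    gradient-descent step on the mean squared error updates the weights.
    """
    all_types = sorted(sample["inputs"].keys())
    chosen = set(random.sample(all_types, random.randint(1, len(all_types))))
    # Unselected information types are zero-masked so the input size is fixed.
    x = np.concatenate([sample["inputs"][t] if t in chosen
                        else np.zeros_like(sample["inputs"][t])
                        for t in all_types])
    pred = model_params["W"] @ x                 # linear stand-in for the RNN predictor
    err = pred - sample["target"]                # compare against the sample track
    loss = float(np.mean(err ** 2))
    model_params["W"] -= lr * (2.0 / err.size) * np.outer(err, x)  # gradient step
    return loss
```

Repeating `train_step` until a preset iteration count or loss threshold is reached mirrors the stopping rule described above; because the subset of information types changes per iteration, the model learns to cope with whichever types are selected at prediction time.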
In this specification, the server may also train the prediction model; reference may be made to the above description of the unmanned device training the prediction model, which is not repeated here.
The method for predicting the obstacle track provided in this specification is particularly applicable to the field of delivery using unmanned devices, for example, delivery scenes such as express delivery and takeout. Specifically, in such scenes, delivery may be performed using an unmanned vehicle fleet configured with a plurality of unmanned devices.
Based on the method for predicting the obstacle trajectory shown in fig. 1, the embodiment of the present specification further provides a schematic structural diagram of an apparatus for predicting the obstacle trajectory, as shown in fig. 3.
Fig. 3 is a schematic structural diagram of an apparatus for predicting an obstacle trajectory according to an embodiment of the present disclosure, where the apparatus includes:
a duration determining module 301, configured to determine the driving scene where the unmanned device is currently located, and determine the predicted duration of the predicted track of each obstacle required for planning the track of the unmanned device;
a selecting module 302, configured to select at least one information type from a plurality of preset information types according to the driving scenario and/or the predicted duration;
a feature determining module 303, configured to obtain, for each selected information type, information of the information type of each obstacle, and determine, according to the obtained information of the information type of each obstacle, a feature of each obstacle corresponding to the information type;
and the prediction module 304, configured to determine, for each obstacle, the predicted trajectory of the obstacle within the predicted duration according to the current position of the obstacle and the determined features of each obstacle corresponding to each selected information type.
The present specification provides a method for predicting an obstacle trajectory. The method considers factors such as the demand for the predicted trajectory of each obstacle in the surrounding environment when the unmanned device plans its own trajectory, and the driving scene in which the unmanned device is operating; it selects information types accordingly, acquires information of the selected types, and predicts the trajectory of each obstacle according to the acquired information. This guarantees the real-time requirement on the predicted obstacle trajectories while the unmanned device is driving, and, on that basis, makes full use of computing resources to obtain accurate predicted obstacle trajectories.
Optionally, the selecting module 302 is specifically configured to obtain a predetermined correspondence between each driving scene and each information type; and selecting at least one information type from the information types according to the corresponding relation and the driving scene.
Optionally, the predicted duration is positively correlated with the number of selected information types.
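A minimal sketch of the selection step in the two paragraphs above: the scene-to-type table and the `seconds_per_type` granularity are invented placeholders, since the patent leaves the concrete correspondence and the form of the positive correlation open:

```python
# Hypothetical correspondence between driving scenes and information
# types; the patent only requires that such a table be predetermined.
SCENE_TO_TYPES = {
    "intersection": ["track", "pedestrian", "traffic_light"],
    "highway":      ["track", "map"],
    "parking_lot":  ["track", "pedestrian"],
}

def select_info_types(scene, predicted_duration, seconds_per_type=2.0):
    """Select at least one information type according to the driving
    scene and/or the predicted duration: the longer the prediction
    horizon, the more information types are selected (positive
    correlation), capped by what the scene offers."""
    candidates = SCENE_TO_TYPES.get(scene, ["track"])
    n = max(1, min(len(candidates), int(predicted_duration / seconds_per_type)))
    return candidates[:n]
```

With these placeholder values, a short horizon at an intersection selects only the track information type, while a long horizon selects all three types associated with that scene.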
Optionally, the selected information type includes a track information type;
the feature determining module 303 is specifically configured to acquire the historical trajectory of each obstacle within a past specified duration; to determine, for each moment within the past specified duration, the spatial interaction feature of each obstacle at that moment according to the position information of each obstacle at that moment, where the spatial interaction feature represents the interaction information of each obstacle at that moment; and to determine the spatio-temporal interaction feature of each obstacle at the current moment according to the spatial interaction features of each obstacle at each moment within the past specified duration, where the spatio-temporal interaction feature represents the interaction information of each obstacle over the past specified duration.
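The spatial and spatio-temporal interaction features could, for instance, be computed as sketched below; averaging relative displacements per moment and mean-pooling over time are hypothetical stand-ins for whatever learned encoders the patent's model actually uses:

```python
import numpy as np

def spatial_interaction(positions):
    """Spatial interaction feature at one moment: for each obstacle,
    the mean relative displacement to every other obstacle (a simple
    encoding of the interaction information at that moment).

    positions: (N, 2) positions of the N obstacles at one time step."""
    p = np.asarray(positions, dtype=float)
    n = len(p)
    diffs = p[None, :, :] - p[:, None, :]      # (N, N, 2): diffs[i, j] = p[j] - p[i]
    return diffs.sum(axis=1) / max(n - 1, 1)   # average offset to the other obstacles

def spatiotemporal_interaction(position_history):
    """Spatio-temporal interaction feature at the current moment: pool
    the per-moment spatial interaction features over the past specified
    duration (a mean here; an RNN would be a natural alternative)."""
    feats = [spatial_interaction(p) for p in position_history]
    return np.mean(feats, axis=0)
```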
Optionally, the selected information type comprises a pedestrian information type;
the feature determining module 303 is specifically configured to determine the pedestrian-type obstacles in the environment surrounding the unmanned device; to acquire the action information of each pedestrian-type obstacle; to determine, for each pedestrian-type obstacle, the action feature of that obstacle according to its action information; and to determine the global action feature of the pedestrian-type obstacles according to the action features of the individual pedestrian-type obstacles, where the global action feature is used to represent the interaction information among the pedestrian-type obstacles.
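One way to realize the per-pedestrian action feature and the order-independent global action feature might look like this; treating the raw action information as the feature and max-pooling across pedestrians are illustrative assumptions only:

```python
import numpy as np

def action_feature(action_info):
    """Per-pedestrian action feature: here simply the action
    information itself as a vector (e.g. speed and heading); a real
    system would compute a learned embedding instead."""
    return np.asarray(action_info, dtype=float)

def global_action_feature(all_action_info):
    """Global action feature of the pedestrian-type obstacles: pool the
    individual action features. Max-pooling makes the result independent
    of the number and ordering of the pedestrians."""
    feats = np.stack([action_feature(a) for a in all_action_info])
    return feats.max(axis=0)
```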
Optionally, the prediction module 304 is specifically configured to process the determined features of each obstacle corresponding to each selected information type to obtain a guidance feature; and to determine the predicted trajectory of the obstacle within the predicted duration according to the current position of the obstacle and the guidance feature.
Optionally, the prediction module 304 is specifically configured to determine the individual feature of the obstacle according to the current position of the obstacle and the guidance feature; and to input the individual feature, the guidance feature, and the current position of the obstacle into a pre-trained prediction model to obtain the predicted trajectory of the obstacle determined by the prediction model.
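A toy sketch of the two-stage prediction described above. The concatenation-based guidance feature and the constant-velocity "model" (which reads a velocity estimate out of the first two guidance dimensions) are hypothetical placeholders for the pre-trained prediction model, which the patent does not specify:

```python
import numpy as np

def guidance_feature(per_type_features):
    """Fuse the obstacle's features for every selected information type
    into one guidance feature (plain concatenation here; the patent
    leaves the fusion network unspecified)."""
    return np.concatenate([np.asarray(f, dtype=float).ravel()
                           for f in per_type_features])

def predict_trajectory(current_pos, guidance, steps, dt=0.1):
    """Constant-velocity stand-in for the pre-trained prediction model:
    roll the current position forward over the prediction horizon using
    a velocity read from the guidance feature (an assumed convention)."""
    pos = np.asarray(current_pos, dtype=float)
    vel = guidance[:2]   # hypothetical: first two dimensions act as a velocity
    return np.array([pos + vel * dt * (k + 1) for k in range(steps)])
```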
Based on the method for model training shown in fig. 2, an embodiment of the present specification further provides a schematic structural diagram of an apparatus for model training, as shown in fig. 4.
Fig. 4 is a schematic structural diagram of an apparatus for model training provided in an embodiment of the present disclosure, where the apparatus includes:
an obtaining module 401, configured to obtain, for each obstacle, a trajectory of the obstacle within a historical specified duration, and information of each preset information type of the obstacle within the specified duration;
a time determining module 402, configured to determine a specified time, and use an acquired trajectory of the obstacle after the specified time as a sample trajectory;
an input module 403, configured to, in each training process, select at least one information type among the preset information types, and input the trajectory of the obstacle before the specified time together with the information of the obstacle corresponding to each selected information type within the specified duration into a prediction model to be trained, to obtain a predicted trajectory to be optimized output by the prediction model to be trained;
and the training module 404 is configured to train the prediction model to be trained according to the sample trajectory and the prediction trajectory to be optimized obtained in each training process.
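The per-round selection performed by the input module 403 could be sketched as follows; the preset type list and the uniform random choice are assumptions, since the patent does not fix how the subset of information types is chosen in each training process:

```python
import random

# Hypothetical preset information types (the patent's examples include
# track information and pedestrian information).
PRESET_INFO_TYPES = ["track", "pedestrian", "map", "traffic_light"]

def sample_training_inputs(history, info_by_type, rng=random):
    """For one training round, select at least one information type at
    random, so the trained model learns to cope with whichever subset
    is selected at prediction time, and bundle the selected information
    with the obstacle's trajectory before the specified time."""
    k = rng.randint(1, len(PRESET_INFO_TYPES))       # at least one type
    chosen = rng.sample(PRESET_INFO_TYPES, k)
    return history, {t: info_by_type[t] for t in chosen}
```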
The present specification also provides a computer-readable storage medium storing a computer program, where the computer program can be used to execute the method for predicting an obstacle trajectory and training a model provided above.
Based on the method for predicting an obstacle trajectory and training a model provided above, the embodiment of the present specification further provides a schematic structural diagram of the electronic device shown in fig. 5. As shown in fig. 5, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it, so as to implement the method for predicting an obstacle trajectory and training a model described above.
Of course, besides a software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the above processing flow is not limited to logic units, and may also be hardware or logic devices.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology develops, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized by hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user programming the device. Designers "integrate" a digital system onto a single PLD by programming it themselves, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this kind of programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can readily be obtained merely by briefly programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller as pure computer-readable program code, the same functions can be implemented entirely by logically programming the method steps, so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component. Or even the means for implementing various functions may be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method of predicting an obstacle trajectory, the method comprising:
determining a driving scene where the unmanned equipment is located currently, and determining the prediction duration of the prediction track of each obstacle required when planning the track of the unmanned equipment;
selecting at least one information type from a plurality of preset information types according to the driving scene and/or the predicted duration;
for each selected information type, acquiring the information of the information type of each obstacle, and determining the feature of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle;
and for each obstacle, determining a predicted track of the obstacle within the predicted time length according to the current position of the obstacle and the determined characteristics of each obstacle corresponding to each selected information type.
2. The method according to claim 1, wherein selecting at least one information type among a plurality of preset information types according to the driving scenario specifically comprises:
acquiring a corresponding relation between each driving scene and each information type which are determined in advance;
and selecting at least one information type from the information types according to the corresponding relation and the driving scene.
3. The method of claim 1, wherein the predicted duration is positively correlated with a number of selected information types.
4. The method of claim 1, wherein the selected information type comprises a track information type;
acquiring the information of the information type of each obstacle specifically includes:
acquiring historical tracks of all obstacles in a past specified duration;
determining the feature of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle, specifically comprising:
determining, for each moment within the past specified duration, the spatial interaction feature of each obstacle at that moment according to the position information of each obstacle at that moment, wherein the spatial interaction feature represents the interaction information of each obstacle at that moment;
and determining the spatio-temporal interaction feature of each obstacle at the current moment according to the spatial interaction features of each obstacle at each moment within the past specified duration, wherein the spatio-temporal interaction feature represents the interaction information of each obstacle over the past specified duration.
5. The method of claim 1, wherein the selected information type comprises a pedestrian information type;
acquiring the information of the information type of each obstacle specifically includes:
determining the pedestrian-type obstacles in the environment surrounding the unmanned device;
acquiring action information of obstacles of various pedestrian types;
determining the feature of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle, specifically comprising:
for each obstacle of the pedestrian type, determining the action characteristics of the obstacle of the pedestrian type according to the action information of the obstacle of the pedestrian type;
and determining the global action characteristics of the obstacles of the pedestrian types according to the action characteristics of the obstacles of the pedestrian types, wherein the global action characteristics are used for representing the interactive information of the obstacles of the pedestrian types.
6. The method of claim 1, wherein determining the predicted trajectory of the obstacle within the predicted duration based on the current location of the obstacle and the determined characteristics of each obstacle corresponding to the selected information types comprises:
processing the determined features of each obstacle corresponding to each selected information type to obtain a guidance feature;
and determining the predicted trajectory of the obstacle within the predicted duration according to the current position of the obstacle and the guidance feature.
7. The method of claim 6, wherein determining the predicted trajectory of the obstacle within the predicted duration based on the current position of the obstacle and the guidance feature specifically comprises:
determining the individual feature of the obstacle according to the current position of the obstacle and the guidance feature;
and inputting the individual feature, the guidance feature, and the current position of the obstacle into a pre-trained prediction model to obtain the predicted trajectory of the obstacle determined by the prediction model.
8. An apparatus for predicting an obstacle trajectory, the apparatus comprising:
the system comprises a duration determining module, a track calculating module and a track planning module, wherein the duration determining module is used for determining a driving scene where the unmanned equipment is located currently and determining the prediction duration of the prediction track of each obstacle required when the track of the unmanned equipment is planned;
the selection module is used for selecting at least one information type from a plurality of preset information types according to the driving scene and/or the predicted duration;
the characteristic determining module is used for acquiring the information of the information type of each obstacle aiming at each selected information type, and determining the characteristic of each obstacle corresponding to the information type according to the acquired information of the information type of each obstacle;
and the prediction module is used for determining the predicted track of each obstacle within the prediction duration according to the current position of the obstacle and the determined characteristics of each obstacle corresponding to each selected information type.
9. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-7 when executing the program.
CN202011087583.8A 2020-10-13 2020-10-13 Method and device for predicting obstacle trajectory and training model Active CN111912423B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011087583.8A CN111912423B (en) 2020-10-13 2020-10-13 Method and device for predicting obstacle trajectory and training model
CN202011454164.3A CN112629550B (en) 2020-10-13 2020-10-13 Method and device for predicting obstacle track and model training

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202011454164.3A Division CN112629550B (en) 2020-10-13 2020-10-13 Method and device for predicting obstacle track and model training

Publications (2)

Publication Number Publication Date
CN111912423A CN111912423A (en) 2020-11-10
CN111912423B true CN111912423B (en) 2021-02-02

Family

ID=73265217

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011454164.3A Active CN112629550B (en) 2020-10-13 2020-10-13 Method and device for predicting obstacle track and model training
CN202011087583.8A Active CN111912423B (en) 2020-10-13 2020-10-13 Method and device for predicting obstacle trajectory and training model


Country Status (1)

Country Link
CN (2) CN112629550B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270306B (en) * 2020-11-17 2022-09-30 中国人民解放军军事科学院国防科技创新研究院 Unmanned vehicle track prediction and navigation method based on topological road network
CN113325855B (en) * 2021-08-02 2021-11-30 北京三快在线科技有限公司 Model training method for predicting obstacle trajectory based on migration scene
CN113753077A (en) * 2021-08-17 2021-12-07 北京百度网讯科技有限公司 Method and device for predicting movement locus of obstacle and automatic driving vehicle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107426703A (en) * 2017-08-24 2017-12-01 北京邮电大学 It is a kind of in outdoor crowded mobility Forecasting Methodology of the place based on fuzzy clustering
CN108564118A (en) * 2018-03-30 2018-09-21 陕西师范大学 Crowd scene pedestrian track prediction technique based on social affinity shot and long term memory network model
JP2019124982A (en) * 2018-01-11 2019-07-25 愛知機械テクノシステム株式会社 Unmanned carrier
CN111497864A (en) * 2019-01-31 2020-08-07 斯特拉德视觉公司 Method and device for transmitting current driving intention signal to person by using V2X application program
CN111731283A (en) * 2020-05-26 2020-10-02 北京百度网讯科技有限公司 Vehicle collision risk identification method and device and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239556B (en) * 2014-09-25 2017-07-28 西安理工大学 Adaptive trajectory predictions method based on Density Clustering
US10019006B2 (en) * 2015-04-08 2018-07-10 University Of Maryland, College Park Surface vehicle trajectory planning systems, devices, and methods
US10551842B2 (en) * 2017-06-19 2020-02-04 Hitachi, Ltd. Real-time vehicle state trajectory prediction for vehicle energy management and autonomous drive
CN110989636B (en) * 2020-02-26 2020-08-07 北京三快在线科技有限公司 Method and device for predicting track of obstacle
CN111079721B (en) * 2020-03-23 2020-07-03 北京三快在线科技有限公司 Method and device for predicting track of obstacle
CN111126362B (en) * 2020-03-26 2020-08-07 北京三快在线科技有限公司 Method and device for predicting obstacle track
CN111114543B (en) * 2020-03-26 2020-07-03 北京三快在线科技有限公司 Trajectory prediction method and device
CN111238523B (en) * 2020-04-23 2020-08-07 北京三快在线科技有限公司 Method and device for predicting motion trail


Also Published As

Publication number Publication date
CN112629550A (en) 2021-04-09
CN112629550B (en) 2024-03-01
CN111912423A (en) 2020-11-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant