CN111316286A - Trajectory prediction method and device, storage medium, driving system and vehicle - Google Patents

Trajectory prediction method and device, storage medium, driving system and vehicle

Info

Publication number
CN111316286A
Authority
CN
China
Prior art keywords
global
track
data
semantic
predicted
Prior art date
Legal status
Granted
Application number
CN201980005403.6A
Other languages
Chinese (zh)
Other versions
CN111316286B (en)
Inventor
崔健
陈晓智
Current Assignee
Shenzhen Zhuoyu Technology Co ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN111316286A
Application granted
Publication of CN111316286B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided are a trajectory prediction method and device, a storage medium, a driving system, and a vehicle. The method obtains global semantic data and global trajectory data of the area where an object to be predicted is located (S202), fuses the global semantic data and the global trajectory data to obtain global fusion data (S204), extracts features from the global fusion data to obtain global features (S206), and processes the global features with a trained trajectory prediction model to obtain the target trajectory of the object to be predicted (S208). The method predicts the trajectory of a moving object from global data, achieves higher prediction accuracy, and reduces the probability of accidents to a certain extent.

Description

Trajectory prediction method and device, storage medium, driving system and vehicle
Technical Field
The present invention relates to the technical field of intelligent transportation, and in particular to a trajectory prediction method and device, a storage medium, a driving system, and a vehicle.
Background
With the development of intelligent transportation, algorithms that predict the motion trajectories of moving objects are of great significance to path planning. By predicting a moving object's trajectory, path planning can be performed with knowledge of the object's likely future motion, which helps prevent accidents such as collisions.
Current trajectory prediction algorithms generally start from the motion data of a single moving object: they determine a motion model applicable to the object according to its category, process the object's motion data with that model, and then integrate regional semantic information in a post-processing step to predict the motion trajectory of the object to be predicted.
Because existing trajectory prediction algorithms rely only on the motion data of the moving object itself, they cannot predict trajectories from a global perspective. This easily causes the predicted trajectories of different moving objects to intersect, which in turn can lead to collisions and other accidents when path planning or scheduling is based on those predicted trajectories, posing a serious safety hazard.
Disclosure of Invention
The embodiments of the present invention provide a trajectory prediction method and device, a storage medium, a driving system, and a vehicle, which predict the trajectory of a moving object from global data, achieve higher prediction accuracy, and reduce the probability of accidents to a certain extent.
In a first aspect, an embodiment of the present invention provides a trajectory prediction method, including:
acquiring global semantic data and global track data of an area where an object to be predicted is located;
fusing the global semantic data and the global track data to obtain global fusion data;
extracting features in the global fusion data to obtain global features;
and processing the global characteristics by using a trained track prediction model to obtain the target track of the object to be predicted.
In a second aspect, an embodiment of the present invention provides a trajectory prediction apparatus, including:
the acquisition module is used for acquiring global semantic data and global track data of an area where an object to be predicted is located;
the fusion module is used for fusing the global semantic data and the global track data to obtain global fusion data;
the feature extraction module is used for extracting features in the global fusion data to obtain global features;
and the prediction module is used for processing the global characteristics by utilizing a trained track prediction model to obtain the target track of the object to be predicted.
In a third aspect, an embodiment of the present invention provides a trajectory prediction apparatus, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored,
the computer program is executed by a processor to implement the method as described in the first aspect.
In a fifth aspect, an embodiment of the present invention provides a driving system, including:
trajectory prediction means for performing the method according to the first aspect;
and the motion controller is used for controlling the controlled object to move according to the target track.
In one possible design, the controlled object and the object to be predicted are different objects.
In a sixth aspect, an embodiment of the present invention provides a vehicle, including:
a trajectory prediction device as defined in the second or third aspect, configured to perform the method as defined in the first aspect.
In a seventh aspect, an embodiment of the present invention provides a vehicle, including:
the driving system according to the fifth aspect.
In an eighth aspect, an embodiment of the present invention provides a control device for an unmanned aerial vehicle, including:
the driving system according to the fifth aspect.
In the technical solutions provided by the embodiments of the present invention, the global features of an area are obtained by acquiring and processing the global semantic data and global trajectory data of the area where the object to be predicted is located, and the global features are then processed by a trained trajectory prediction model to obtain the target trajectory of the object to be predicted. In other words, the technical solutions start from global semantics and global trajectories: when predicting the trajectory of the object to be predicted, all moving objects in the area are considered, and the global semantic data of the area is combined, so that the trajectory of any moving object in the area can be predicted. Compared with a prediction method that considers only a single object to be predicted, this solution has higher prediction accuracy, can reduce the probability of accidents when subsequent path planning or scheduling is performed on its basis, and offers higher safety.
Drawings
Fig. 1 is a schematic top view of a trajectory prediction scenario according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a trajectory prediction method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating another trajectory prediction method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another trajectory prediction method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a recurrent unit of a recurrent neural network model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a model architecture of a recurrent neural network model according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a model architecture of a long short-term memory network model according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating another trajectory prediction method according to an embodiment of the present invention;
FIG. 9 is a functional block diagram of a trajectory prediction device according to an embodiment of the present invention;
fig. 10 is a schematic physical structure diagram of a trajectory prediction apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a driving system according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a vehicle according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of another vehicle according to an embodiment of the present invention.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms to which the present invention relates will be explained first:
A moving object: refers to a living being or an object capable of moving along a trajectory. The moving object according to the embodiments of the present invention may include, but is not limited to, at least one of: a vehicle, an animal, a human, a robot, and an unmanned aerial vehicle. The vehicle may be an unmanned vehicle, such as an Unmanned Ground Vehicle (UGV), or a private vehicle or public transport vehicle in an automatic driving mode.
The object to be predicted: one or more moving objects whose target trajectories are to be predicted. Here, "a plurality of" means two or more; the same meaning applies throughout and is not repeated.
Semantic object: an object corresponding to a semantic concept in the area. Referring to the scenario shown in fig. 1, the semantic objects in the scenario include: vehicles, lane lines, and lanes. Fig. 1 is only an example; in an actual trajectory prediction scenario, the semantic categories of semantic objects include many more categories, such as trees, obstacles, railings, signs, people, and animals.
Long Short-Term Memory (LSTM) network model: the LSTM is a variant of the Recurrent Neural Network (RNN) model and has stronger long-range temporal modeling capability than the RNN model.
The technical solutions provided by the embodiments of the present invention are specifically applicable to the following scenario: trajectory prediction for moving objects.
Furthermore, the technical solutions provided by the embodiments of the present invention can also be applied specifically to path planning scenarios, in which case the path planning of one or more moving objects can be performed according to the predicted trajectories.
In addition, the technical scheme provided by the embodiment of the invention can also be specifically applied to a vehicle scheduling scene. Scheduling of schedulable vehicles is achieved, for example, by trajectory prediction of other non-dispatchable vehicles or objects.
As described in the background, the conventional trajectory prediction method targets only a single moving object: after determining the category of the moving object, the motion data of the object itself is processed by a motion model corresponding to that category to predict the object's motion trajectory. On the one hand, this prediction approach is limited by object category: an accurate result can be obtained from the category-specific motion model only if the object's category is judged accurately. On the other hand, the approach relies only on the motion data of the moving object itself and performs no comprehensive, global analysis combining the current motion environment and the motion of other moving objects. Because other moving or non-moving objects in the area are not considered, the predicted trajectories of two moving objects of the same category may intersect; if path planning or object scheduling is performed on that basis, accidents such as collisions are likely, posing a great safety risk.
Based on this, the technical solutions provided by the embodiments of the present invention aim to solve the above technical problems in the prior art with the following idea: comprehensively consider the global data of the area where the object to be predicted is located, including global semantic data and global trajectory data; obtain global features from the fused global data; and use the global features as the input of a trajectory prediction model to obtain the target trajectory of the object to be predicted.
Based on this design, the trajectory prediction method provided by the embodiments of the present invention may be executed by a processor built into a moving object or by a terminal device held by the moving object, or may be executed by a cloud server or background server.
For example, in one possible scenario, a first processor of an autonomous vehicle plans the vehicle's driving route, while a second processor of the autonomous vehicle performs the trajectory prediction method provided by this solution and inputs the predicted trajectory to the first processor, so that the first processor can perform subsequent path planning based on the predicted trajectory. The first processor and the second processor may be the same processor or different processors, for example one or two processors in an Advanced Driver Assistance System (ADAS); each may be part of the vehicle's main controller, or may be a background main server or cloud server that controls the driving of the unmanned vehicle.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Example one
The embodiment of the invention provides a track prediction method. Referring to fig. 2 to fig. 4, fig. 2 is a schematic flow chart illustrating a trajectory prediction method according to an embodiment of the present invention, fig. 3 is a schematic flow chart illustrating an implementation of the trajectory prediction method according to an embodiment of the present invention in a specific application scenario, and fig. 4 is a specific implementation of the flow chart illustrated in fig. 3.
As shown in fig. 2, the method comprises the steps of:
s202, acquiring global semantic data and global track data of the area where the object to be predicted is located.
As mentioned above, the object to be predicted is a moving object, which may include but is not limited to at least one of the following: vehicles, animals, humans, robots and unmanned aerial vehicles. In addition, the number of objects to be predicted in the embodiment of the present invention is not particularly limited, and may be one or more.
The global semantic data describes the semantic categories of the objects in the area where the object to be predicted is located; each moving or non-moving object is treated as a semantic object with its own semantic category. The global trajectory data describes the historical motion coordinates of each moving object in the area. It will be appreciated that, in a specific implementation, the global trajectory data may include historical motion coordinates for at least one frame.
Fig. 3 is a schematic diagram of the implementation flow of this solution, taking the scenario shown in fig. 1 as an example. As shown in fig. 3, the driving scene contains three semantic categories: vehicle, lane, and lane line. The global semantic data to be acquired in this scene is therefore the semantic data of each semantic object, and the global trajectory data of the scene is the trajectory data of each moving object: vehicle 1 (the object to be predicted), vehicle 2, and vehicle 3.
It should be noted that fig. 3 is only an illustration; in a specific implementation scenario, the global trajectory data and global semantic data are not limited to being obtained separately and may be acquired as a single body of data.
In one implementation scenario, as shown in fig. 4, the encoding of each moving object's trajectory data may be implemented by an LSTM model, as described in detail below.
And S204, fusing the global semantic data and the global track data to obtain global fusion data.
Specifically, as shown in fig. 3, the fusion module fuses the global semantic data corresponding to each frame with the global trajectory data to obtain the global fusion data of each frame.
And S206, extracting the features in the global fusion data to obtain global features.
As shown in fig. 3, the feature extraction module is configured to extract global features from the global fusion data. In one implementation scenario, as shown in fig. 4, this step may be implemented by a Convolutional Neural Network (CNN) model, which is described in detail later.
And S208, processing the global features by using the trained track prediction model to obtain the target track of the object to be predicted.
The present invention uses a trained trajectory prediction model to predict the motion trajectory of a moving object; the input of the trajectory prediction model is the global features, and its output is the motion trajectory of the moving object.
Specifically, the trajectory prediction model provided by the embodiments of the present invention may include, but is not limited to, at least one of: the LSTM model and the multi-layer perceptron (MLP), where the multi-layer perceptron includes the RNN model and the Gated Recurrent Unit (GRU) model. For example, in the implementation scenario shown in fig. 4, trajectory prediction is implemented by the LSTM model; in this case Y = LSTM(feature), where Y denotes the target trajectory and feature denotes the global feature.
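As an illustrative, non-limiting sketch of the prediction step Y = LSTM(feature), the following PyTorch fragment decodes a global feature vector into a sequence of future waypoints. The feature dimension, the prediction horizon, and the scheme of feeding the same global feature at every decoding step are assumptions made for illustration and are not specified by this disclosure.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Decodes a global feature vector into T future (x, y) waypoints."""
    def __init__(self, feature_dim=256, hidden_dim=128, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)   # one (x, y) pair per frame

    def forward(self, feature):                # feature: (B, feature_dim)
        # Repeat the global feature as the input of every decoding step.
        steps = feature.unsqueeze(1).repeat(1, self.horizon, 1)
        hidden, _ = self.lstm(steps)           # (B, horizon, hidden_dim)
        return self.head(hidden)               # (B, horizon, 2): trajectory Y

feature = torch.randn(4, 256)                  # a batch of global features
trajectory = TrajectoryPredictor()(feature)    # shape (4, 30, 2)
```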
Referring to FIGS. 5-7: FIG. 5 shows the design logic of a recurrent unit in the RNN model, FIG. 6 shows the model architecture of the RNN model, and FIG. 7 shows the model architecture of the LSTM model.
Compared with an ordinary neural network, as shown in fig. 5, the main distinction of the RNN model is that the output or intermediate state of the previous frame is used as an input of the current frame, fusing historical information with the temporal relationship. Unrolling the recurrent unit of fig. 5 in time yields the RNN model shown in fig. 6. As shown in fig. 6, the RNN model can perform temporal modeling; the trajectory prediction step in this solution can therefore be implemented by the RNN model.
As shown in fig. 7, each repeating block in the LSTM model includes 4 interacting layers that interact in a special way, so that the output or intermediate state of the previous frame is used as input of the current frame. Compared with the RNN model shown in fig. 6, the LSTM model therefore has superior long-range temporal modeling capability. In addition, since trajectory prediction is strongly temporally correlated with the historical data of the moving object, using the LSTM model for trajectory prediction yields results closer to the actual trajectory.
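For reference, the four interacting layers of a standard LSTM repeating block correspond to the gate equations below. This is the common textbook formulation; the present disclosure does not commit to a particular LSTM variant:

```latex
\begin{aligned}
f_t &= \sigma\!\left(W_f [h_{t-1}, x_t] + b_f\right) && \text{(forget gate)} \\
i_t &= \sigma\!\left(W_i [h_{t-1}, x_t] + b_i\right) && \text{(input gate)} \\
\tilde{C}_t &= \tanh\!\left(W_C [h_{t-1}, x_t] + b_C\right) && \text{(candidate state)} \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(cell state)} \\
o_t &= \sigma\!\left(W_o [h_{t-1}, x_t] + b_o\right) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(C_t) && \text{(hidden state)}
\end{aligned}
```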
Through the above design, the technical solutions provided by the embodiments of the present invention can predict the trajectory of any object to be predicted from global data. Compared with trajectory prediction that starts only from the motion data of the object to be predicted itself, this achieves higher prediction accuracy; subsequent path planning or scheduling performed on this basis has a lower probability of accidents and higher safety.
The following describes a specific implementation of the method shown in fig. 2.
S202 includes two-way global data acquisition: global semantic data and global trajectory data. These two global data acquisition manners can refer to the flow shown in fig. 8.
On one hand, as shown in fig. 8, the manner of obtaining global semantic data may include the following steps:
s202-12, acquiring a global area image of the area where the object to be predicted is located.
The global area image may be an image acquired in real time; a real-time image has higher timeliness and yields more accurate global semantic data. Specifically, real-time acquisition may include, but is not limited to, acquiring images through an image acquisition device. The image acquisition device may be part of the trajectory prediction apparatus (the execution subject of the trajectory prediction method), or may exchange data with the trajectory prediction apparatus in real time. For example, if the trajectory prediction apparatus is processor A in the main controller of a vehicle, the image acquisition device may be a camera in the vehicle's black box, which inputs the acquired images directly into processor A; or the image acquisition device may be a camera installed in the area, such as a camera on or beside the road, in which case processor A may request and receive the global area image in real time from the camera in the area or from that camera's background server.
Besides real-time acquisition, the global semantic data can also be obtained by retrieving previously acquired data. Specifically, in one implementation scenario, the global area image of the area may be obtained from a high-precision map; in another implementation scenario, a global area image already acquired by another processor or stored in a memory may be obtained. However, these implementations can only obtain the environmental information of the area, i.e., non-real-time data of non-moving objects such as roads, signs, and lane lines, and cannot capture the real-time motion of moving objects in the scene. Therefore, when the solution is implemented in this way, it is only suitable for predicting the trajectory of a single object to be predicted; comprehensive prediction combining the trajectories of other moving objects cannot be achieved, and the prediction accuracy is relatively low.
It should be noted that, in the embodiments of the present invention, the image to be subsequently processed is a top-view image; therefore, if the acquired image is not a top view, it must be projected into a top view that meets the requirements of subsequent processing.
In one possible design, the top-view image according to the embodiments of the present invention may be embodied as a Digital Orthophoto Map (DOM) image.
In addition, for convenience of processing, the shape or size of the "area where the object to be predicted is located" may be set as needed; this is not specifically limited in the embodiments of the present invention. Specifically, a rectangular region with a certain length and width in the top view may be taken as the area where the object to be predicted is located; for example, an image of a rectangular region of length W and width H as shown in fig. 1 (or fig. 3) may be obtained. Alternatively, the entire road on which the object to be predicted is located may be taken as the area.
S202-14, performing semantic recognition on each pixel in the global area image respectively to obtain the semantic category of each pixel.
In one possible design, per-pixel semantic recognition may be achieved through deep learning. In that case, before this step is executed, a pixel semantic recognition model must be trained with preset pixel sample data until it meets the application requirements (which can be enforced by defining a loss function). Then, when this step is executed, the global area image only needs to be input into the pixel semantic recognition model, whose output is the semantic category of each pixel.
In another possible design, the pixel value of each pixel may be used as the basis: the pixel value is compared with the pixel interval corresponding to each semantic category, and for any pixel, the semantic category whose interval contains the pixel value is taken as that pixel's semantic category. The correspondence between semantic categories and pixel intervals can be preset in a user-defined manner.
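A minimal sketch of this pixel-interval design follows; the category names and value ranges are hypothetical examples, since the disclosure leaves the correspondence between semantic categories and pixel intervals user-defined.

```python
# Hypothetical lookup table: the categories and intervals are illustrative only.
SEMANTIC_INTERVALS = [
    (0, 63, "lane"),
    (64, 127, "lane_line"),
    (128, 191, "vehicle"),
    (192, 255, "other"),
]

def pixel_semantic_category(pixel_value: int) -> str:
    """Return the semantic category whose pixel interval contains pixel_value."""
    for low, high, category in SEMANTIC_INTERVALS:
        if low <= pixel_value <= high:
            return category
    raise ValueError(f"pixel value {pixel_value} falls outside every interval")

assert pixel_semantic_category(150) == "vehicle"
```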
S202-16, performing semantic annotation on the global region image according to the semantic category of each pixel to obtain the global semantic information.
Since the semantic category of each pixel has been determined, this step may perform semantic segmentation on the global area image according to those categories to obtain a plurality of semantic objects, and then semantically label each semantic object to obtain the global semantic information.
The semantic labels are only used to distinguish the semantic categories of objects and may be applied in any distinguishable manner. For example, semantic objects may be identified by different colors, or, as shown in fig. 1 (or fig. 3), by different shading. After labeling, semantic objects with the same label belong to the same semantic category.
In different implementation scenarios, the labeling of other moving objects of the same category as the object to be predicted may be the same as or different from the labeling of the object to be predicted. As shown in fig. 1, when the object to be predicted is vehicle 1, the area also contains moving objects of the same category: vehicle 2 and vehicle 3. If step S208 is implemented by the first trajectory prediction model (which predicts only the target trajectory of the object to be predicted; its implementation is described later), then, as shown in fig. 1, vehicle 1 must be distinguished from vehicles 2 and 3: vehicle 1 carries one kind of label and vehicles 2 and 3 another. Alternatively, if step S208 is implemented by the second trajectory prediction model (which predicts the motion trajectories of all moving objects in the area; its implementation is described later), moving objects of the same category need not be labeled separately, and vehicles 1, 2, and 3 may share the same label (this case is not shown in fig. 1).
When performing semantic labeling, the image may further be divided into grid cells in a user-defined manner; each cell may contain one or more pixels, and the division can be realized through a preset resolution. For example, the global area image shown in fig. 1 may be divided into square cells 20 cm on a side, so that subsequent semantic labeling only needs to label the cells. When a cell contains multiple pixels, this grid division reduces the amount of labeling and improves processing efficiency.
In addition to the foregoing implementations, steps S202-14 and S202-16 in fig. 8 can also be implemented jointly by a neural network model. That is, a semantic recognition model is trained before S202-14 is executed; the global area image obtained in S202-12 is then input into the semantic recognition model, whose output is the global area image labeled with semantic categories, thereby yielding the global semantic data.
The types of the semantic recognition model and the pixel semantic recognition model are not specifically limited; a CNN model or another neural network model may be used, and their sample data must be labeled and designed according to each model's input and output. Details are omitted here.
On the other hand, as shown in fig. 8, the manner of acquiring global track data may include the following steps:
S202-22, acquiring a track point set of each moving object in the area where the object to be predicted is located, the track point set consisting of coordinate points of the moving object collected in time order.
This step acquires the track point set of each moving object in the current area, where each set consists of a number of coordinate points of one moving object. For convenience of processing, the coordinate points of all moving objects may be converted into the same coordinate system. For example, the coordinate points in each track point set may be converted into a rectangular coordinate system formed by two perpendicular sides of the rectangular area shown in fig. 1, with each coordinate point expressed in the form (X, Y); the track point set of each moving object can then be expressed in the form {(X_i, Y_i)}, where i denotes the chronological order of the coordinate points.
Specifically, in a concrete implementation, the aforementioned track point sets may be constructed by collecting the coordinate points of each moving object within a time interval whose endpoint is the current time. The length of the time interval may be preset as required; for example, the track point set of each moving object within the 3 s before the current time may be obtained.
It should be noted that, in this step, the track point sets may be actively monitored by the execution subject, or obtained by requesting data from other processors or acquisition devices. For example, if processor A in the main controller of vehicle 1 is the execution subject, the coordinate points of vehicle 1 may be collected by a locator such as a GPS, which sends the coordinate data to processor A; processor A then performs coordinate conversion to obtain the track point set of vehicle 1. The track point sets of other vehicles may be obtained by request from other processors: if communication with the other vehicles exists, the sets may be obtained from each vehicle directly; alternatively, they may be obtained from road monitors in the area; in addition, they may be computed by the vehicle itself collecting images of the other vehicles and calculating their distances to itself. The implementations are many and are not enumerated here.
Furthermore, the data source (or direct acquisition source) of the set of trajectory points may be different from the data source (or direct acquisition source) of the global area image in the aforementioned S202-12.
S202-24, the track point sets of the moving objects are coded to obtain the track characteristics of the moving objects.
This step can also be implemented by a neural network model: the track point set of each moving object obtained in S202-22 (e.g., the set of points within 3 s) is input into an encoding model (a trained neural network model, for example the LSTM model shown in fig. 4), and the output of the encoding model is the track feature (encoder) of each moving object.
Specifically, for any moving object, the track feature can be expressed as: encoder = LSTM({(X_i, Y_i)}). The length of the track feature (encoder) may be denoted C, where the value of C is generally a preset empirical value.
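The encoding step encoder = LSTM({(X_i, Y_i)}) might be realized as in the PyTorch sketch below, where the encoder length C, the 30-frame window, and the single-layer LSTM are illustrative assumptions:

```python
import torch
import torch.nn as nn

C = 64  # assumed encoder length; the disclosure leaves C as a preset empirical value

class TrackEncoder(nn.Module):
    """Encodes one object's time-ordered track points {(X_i, Y_i)}
    into a fixed-length track feature of length C."""
    def __init__(self, hidden_dim=C):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_dim, batch_first=True)

    def forward(self, points):       # points: (B, T, 2) time-ordered (X, Y)
        _, (h_n, _) = self.lstm(points)
        return h_n[-1]               # (B, C): final hidden state as the encoder

tracks = torch.randn(3, 30, 2)       # 3 moving objects, 30 frames of (X, Y)
encoders = TrackEncoder()(tracks)    # shape (3, 64)
```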
And S202-26, constructing a track tensor as the global track data according to the track characteristics of each moving object.
Based on the track features of each moving object obtained in steps S202-24, a track tensor of size C × H × W may be constructed, and the track feature (encoder) of each moving object stored in the tensor at the corresponding position. Specifically, for any moving object, its encoder is stored at the position of the object's center. In one possible design, the tensor may be divided into grid cells in the manner shown in fig. 1, so that the encoder of each moving object is stored in the grid cell that contains the object's center.
Through the implementation shown in fig. 8, the global semantic data and the global track data can be acquired. In the foregoing implementation, the global semantic data may be represented as a W × H image and the global track data as a tensor of size C × H × W, so the two can be merged into a fusion tensor of size (C+1) × H × W when the fusion step described in S204 is performed. This fusion tensor can be expressed as: Tensor((C+1) × H × W).
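The construction of the C × H × W track tensor and its fusion with the semantic image might look like the following sketch; the sizes C, H, W and the 20 cm grid cell are illustrative assumptions:

```python
import torch

C, H, W = 64, 100, 160        # assumed tensor sizes
CELL_M = 0.2                  # assumed 20 cm grid cells, as in the example above

def build_fusion_tensor(encoders, centers, semantic_map):
    """encoders: (N, C) per-object track features; centers: N (x, y) object
    centres in metres; semantic_map: (H, W) semantically labelled image.
    Returns the (C+1) x H x W fusion tensor described in the text."""
    track_tensor = torch.zeros(C, H, W)
    for enc, (x, y) in zip(encoders, centers):
        col, row = int(x / CELL_M), int(y / CELL_M)  # cell holding the centre
        if 0 <= row < H and 0 <= col < W:
            track_tensor[:, row, col] = enc
    # Stack the one-channel semantic map onto the C track channels.
    return torch.cat([track_tensor, semantic_map.unsqueeze(0)], dim=0)

fused = build_fusion_tensor(torch.randn(3, C),
                            [(5.0, 2.0), (8.4, 3.1), (1.2, 9.9)],
                            torch.randint(0, 4, (H, W)).float())
print(fused.shape)            # torch.Size([65, 100, 160]) == (C+1, H, W)
```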
Based on the global fusion data obtained in the previous steps, the global features covering the object to be predicted can be obtained simply by performing feature extraction on the global fusion data. In a specific implementation, this can also be realized by a neural network model: the fusion data is processed by a trained feature extraction model to obtain the global features. The feature extraction model according to the embodiments of the present invention at least includes a Convolutional Neural Network (CNN) model, as shown in fig. 4. As with the neural network models described above, the CNN model must be trained with feature extraction samples before this step is performed; the model training process is not described in detail.
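A minimal CNN over the (C+1) × H × W fusion tensor might look as follows; the layer widths, depth, and output dimension are illustrative assumptions rather than values fixed by the disclosure:

```python
import torch
import torch.nn as nn

class GlobalFeatureExtractor(nn.Module):
    """Reduces the (C+1) x H x W fusion tensor to a global feature vector."""
    def __init__(self, in_channels=65, feature_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # collapse the spatial dimensions
        )
        self.fc = nn.Linear(256, feature_dim)

    def forward(self, fused):                   # fused: (B, C+1, H, W)
        x = self.backbone(fused).flatten(1)     # (B, 256)
        return self.fc(x)                       # (B, feature_dim) global feature

feature = GlobalFeatureExtractor()(torch.randn(2, 65, 100, 160))  # (2, 256)
```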
Similarly, before S208 is executed, the training of the trajectory prediction model must also be completed. In a specific implementation scenario, the training of the trajectory prediction model (and of the neural network models involved in the foregoing implementations) is generally completed before this solution is executed, so that trajectory prediction can be performed efficiently in real time.
In one specific realization of trajectory prediction model training, a first trajectory prediction model may be trained that predicts a single moving object (the object to be predicted) from the global features. For the object to be predicted, this single-object prediction mode has higher processing efficiency and benefits path planning and scheduling in real-time scenarios.
Alternatively, a second trajectory prediction model may be trained that predicts all moving objects in the area from the global features. When the second trajectory prediction model processes the global features, the global features are input into the model, the motion trajectories of all moving objects in the area output by the model are obtained, and the motion trajectory of the object to be predicted among all the moving objects is taken as the target trajectory. This global prediction mode outputs the motion trajectories of all moving objects in the area at once, which facilitates global scheduling, helps reduce the probability of accidents during scheduling or path planning, and improves safety.
In addition, a third trajectory prediction model for predicting a plurality of (not all) moving objects included in the region according to the global features may be trained, and the model training mode and the implementation mode of S208 are the same as above and are not described again.
In summary, with different designs of the trained trajectory prediction model, the technical solutions provided by the embodiments of the present invention can predict the trajectory of a single moving object or of multiple moving objects. The trajectory prediction model does not depend on object category: once trained, it can predict the trajectory of a moving object of any category, offers high flexibility, and is applicable to trajectory prediction in scenes containing many kinds of objects.
In addition, in some special implementation scenarios, trajectory prediction models corresponding to each category of moving object may be trained separately, as in existing implementations. That is, the technical solutions provided by the embodiments of the present invention can also achieve per-category prediction for moving objects. When a separate trajectory prediction model is trained for each category of moving object, the sample data is data related to objects of that category.
As described above, after the target trajectory of the object to be predicted is obtained through the foregoing implementation manners, the target trajectory may be used for further processing.
In one possible design, a motion plan may be performed for the object to be predicted according to the target trajectory. That is, further path planning is implemented according to the predicted trajectory of the object to be predicted.
In another possible design, motion planning may be performed for other moving objects according to the target trajectory. That is, in the process of planning a path for one or more other moving objects, a route can be planned according to the predicted trajectory of the object to be predicted, so as to avoid collision or other safety accidents with the object to be predicted.
Scheduling of the moving object can then be realized according to the planned motion path.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and the embodiments of the present application may perform other operations or variations of the various operations. Further, the various steps may be performed in a different order than presented in the above-described embodiments, and not all of the operations in the above-described embodiments necessarily need to be performed.
Example two
Based on the trajectory prediction method provided in the first embodiment, the embodiment of the present invention further provides an embodiment of an apparatus for implementing each step and method in the above method embodiment.
Referring to fig. 9, a trajectory prediction apparatus 600 according to an embodiment of the present invention includes:
an obtaining module 61, configured to obtain global semantic data and global trajectory data of an area where an object to be predicted is located;
a fusion module 62, configured to fuse the global semantic data and the global trajectory data to obtain global fusion data;
a feature extraction module 63, configured to extract features in the global fusion data to obtain global features;
and the prediction module 64 is configured to process the global feature by using the trained trajectory prediction model to obtain the target trajectory of the object to be predicted.
In one possible design, the obtaining module 61 is specifically configured to:
acquiring a global area image of an area where the object to be predicted is located;
performing semantic recognition on each pixel in the global area image to obtain the semantic category of each pixel;
and according to the semantic category of each pixel, performing semantic annotation on the global area image to obtain the global semantic information.
The obtaining module 61 is further specifically configured to:
according to the semantic category of each pixel, performing semantic segmentation on the global area image to obtain a plurality of semantic objects;
and performing semantic annotation on each semantic object to obtain the global semantic information.
The global area image related to the embodiment of the invention is a digital orthophoto DOM image.
In another possible design, the obtaining module 61 is specifically configured to:
acquiring a track point set of each moving object in the area of the object to be predicted, wherein the track point set is formed by collecting coordinate points of the moving objects according to a time sequence;
coding the track point set of each moving object to obtain the track characteristics of each moving object;
and constructing a track tensor according to the track characteristics of each moving object to serve as the global track data.
In one possible design, the feature extraction module 63 is specifically configured to:
and processing the fusion information by using a trained feature extraction model to obtain the global features.
The feature extraction model according to the embodiment of the present invention at least includes: convolutional neural network CNN model.
The trajectory prediction model according to the embodiment of the present invention includes at least one of: a long and short term memory network LSTM model and a multilayer perceptron MLP;
wherein the multilayer perceptron comprises: a Recurrent Neural Network (RNN) model and a Gated Recurrent Unit (GRU) model.
In one possible design, the trajectory prediction model is a first trajectory prediction model, and the first trajectory prediction model is used for predicting a target trajectory of the object to be predicted.
In another possible design, the trajectory prediction model is a second trajectory prediction model, and the second trajectory prediction model is used for predicting the motion trajectories of all the moving objects in the area; in this case, the prediction module 64 is specifically configured to:
inputting the global features into the second track prediction model, obtaining the motion tracks of all the moving objects in the area output by the second track prediction model, and taking the motion tracks of the objects to be predicted in all the moving objects as the target track.
In an embodiment of the present invention, the object to be predicted includes at least one of: vehicles, animals, humans, robots and unmanned aerial vehicles.
In addition, in one possible design, the trajectory prediction apparatus 600 may further include:
a planning module (not shown in fig. 9) for planning the motion of the object to be predicted according to the target trajectory.
The trajectory prediction apparatus 600 of the embodiment shown in fig. 9 may be used to implement the technical solutions of the above method embodiments; for the implementation principles and technical effects, further reference may be made to the relevant descriptions in the method embodiments. Optionally, the trajectory prediction apparatus 600 may be a terminal device, a background server, or the like.
It should be understood that the division of the trajectory prediction apparatus 600 in fig. 9 into modules is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity or physically separated. The modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the prediction module 64 may be a separately installed processing element, or may be integrated into a chip of the trajectory prediction apparatus 600; it may also be stored in the memory of the apparatus 600 in the form of program code, with its functions invoked and executed by a processing element of the apparatus 600. The other modules are implemented similarly. In addition, the modules may be wholly or partially integrated, or each may be implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, the steps of the above method or the above modules may be completed by integrated logic circuits of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented by a processing element scheduling a program, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of invoking programs. As another example, these modules may be integrated together and implemented as a system-on-a-chip (SoC).
Also, an embodiment of the present invention provides a trajectory prediction apparatus, referring to fig. 10, the trajectory prediction apparatus 600 includes:
a memory 610;
a processor 620; and
a computer program;
wherein the computer program is stored in the memory 610 and configured to be executed by the processor 620 to implement the methods as described in the above embodiments.
The number of the processors 620 in the trajectory prediction apparatus 600 may be one or more, and the processors 620 may also be referred to as processing units, which may implement a certain control function. The processor 620 may be a general purpose processor, a special purpose processor, or the like. In an alternative design, the processor 620 may also store instructions that can be executed by the processor 620 to cause the trajectory prediction apparatus 600 to perform the trajectory prediction method described in the above method embodiment.
In yet another possible design, the trajectory prediction means 600 may comprise a circuit that may implement the functions of transmitting or receiving or communicating in the foregoing method embodiments.
Optionally, the number of the memories 610 in the trajectory prediction device 600 may be one or more, and the memories 610 have instructions or intermediate data stored thereon, and the instructions may be executed on the processor 620, so that the trajectory prediction device 600 performs the method described in the above method embodiments. Optionally, other related data may also be stored in the memory 610. Optionally, instructions and/or data may also be stored in processor 620. The processor 620 and the memory 610 may be provided separately or may be integrated together.
In addition, as shown in fig. 10, a transceiver 630 is further disposed in the trajectory prediction apparatus 600, where the transceiver 630 may be referred to as a transceiver unit, a transceiver circuit, or a transceiver, and is used for data transmission or communication with a test device or other terminal devices, and will not be described herein again.
As shown in fig. 10, the memory 610, the processor 620, and the transceiver 630 are connected by a bus and communicate.
If the trajectory prediction device 600 is used to implement the method corresponding to fig. 2, for example, the transceiver 630 may obtain global semantic data and global trajectory data. And the processor 620 is used to perform corresponding determination or control operations, and optionally, corresponding instructions may also be stored in the memory 610. The specific processing manner of each component can be referred to the related description of the previous embodiment.
Furthermore, an embodiment of the present invention provides a readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method according to the first embodiment.
In addition, an embodiment of the present invention provides a driving system, please refer to fig. 11, where the driving system 800 includes:
a trajectory prediction device 600 for performing the method according to any one of the embodiments;
and the motion controller 810 is configured to control the controlled object to move according to the target trajectory acquired by the trajectory prediction device.
In one possible design, the controlled object and the object to be predicted are the same object. For example, in a scenario where a vehicle predicts its own trajectory and plans its route, the motion controller may plan the vehicle's driving route according to its own target trajectory predicted by the trajectory prediction apparatus 600; furthermore, automatic movement of the controlled object, i.e., automatic driving, can be realized.
In addition, the controlled object and the object to be predicted may be different objects. Taking the above scenario as an example, the vehicle may predict the trajectory of another vehicle that is relatively close to itself on the road surface, so that when performing the motion control of itself, the vehicle may avoid collision with another vehicle or a moving obstacle (vehicle, person, animal, etc.) as much as possible, so as to reduce the occurrence probability of a safety accident, and improve the safety.
In another specific implementation scenario, the motion controller 810 may also output the target trajectory acquired by the trajectory prediction apparatus 600, so that a user can use the target trajectory as a reference when driving or when controlling the movement of the controlled object. Especially in multi-object scenarios, predicting the target trajectories of the other objects to be predicted improves control safety.
Specifically, the controlled object may include, but is not limited to, at least one of: vehicles, animals, humans, robots and unmanned aerial vehicles. In the embodiment of the present invention, the number of the controlled objects is not particularly limited, and may be one or more. For example, the motion controller may be a vehicle running controller, or may be a flight controller of an unmanned aerial vehicle, and the like, and will not be described in detail.
In addition, the embodiment of the invention provides a vehicle.
Referring to fig. 12, the vehicle 900 includes:
the trajectory prediction device 600 is configured to perform the method according to any one of the implementation manners of the first embodiment.
Alternatively, as shown in fig. 13, the vehicle 900 includes:
the driving system 800 shown in fig. 11.
In addition, an embodiment of the present invention further provides a control device for an unmanned aerial vehicle.
In one possible design, the control device of the unmanned aerial vehicle includes:
the trajectory prediction device 600 is configured to perform the method according to any one of the implementation manners of the first embodiment.
In another design, the control device for the unmanned aerial vehicle includes:
the driving system 800.
Specifically, the unmanned aerial vehicle and its control device may be designed independently or integrated (with the control device disposed inside the unmanned aerial vehicle); this is not particularly limited in the embodiments of the present invention.
It can be seen that the vehicle and the control device of the unmanned aerial vehicle described above are controlled objects capable of carrying the trajectory prediction device; other such controlled objects include robots, robot toys, and the like, which are not described in detail.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by hardware under the instruction of a program, where the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; the aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Since each module in this embodiment can execute the method shown in the first embodiment, reference may be made to the related description of the first embodiment for a part of this embodiment that is not described in detail.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (17)

1. A trajectory prediction method, comprising:
acquiring global semantic data and global trajectory data of an area where an object to be predicted is located;
fusing the global semantic data and the global trajectory data to obtain global fusion data;
extracting features from the global fusion data to obtain global features;
and processing the global features by using a trained trajectory prediction model to obtain a target trajectory of the object to be predicted.
2. The method according to claim 1, wherein the obtaining global semantic data of an area where the object to be predicted is located comprises:
acquiring a global area image of an area where the object to be predicted is located;
performing semantic recognition on each pixel in the global area image to obtain the semantic category of each pixel;
and performing semantic annotation on the global area image according to the semantic category of each pixel to obtain the global semantic data.
3. The method according to claim 2, wherein the performing semantic annotation on the global area image according to the semantic category of each pixel to obtain the global semantic data comprises:
performing semantic segmentation on the global area image according to the semantic category of each pixel to obtain a plurality of semantic objects;
and performing semantic annotation on each semantic object to obtain the global semantic data.
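As a hedged sketch of the per-pixel labeling of claims 2 and 3 (a real system would use a trained segmentation network; the threshold rule, class table, and function names below are illustrative assumptions):

```python
import numpy as np

CLASSES = {0: "road", 1: "lane_marking", 2: "vegetation"}  # assumed classes

def semantic_labels(image):
    """Assign a semantic category to each pixel (stand-in for a real model)."""
    labels = np.zeros(image.shape, dtype=np.int64)     # default: road
    labels[image > 0.8] = 1                            # bright -> lane marking
    labels[(image > 0.3) & (image <= 0.8)] = 2         # mid tones -> vegetation
    return labels

def semantic_objects(labels):
    """Segment the labeled image into per-class masks (cf. claim 3)."""
    return {CLASSES[c]: labels == c for c in CLASSES}

image = np.random.rand(8, 8)   # stand-in for a global area image (e.g. a DOM)
objects = semantic_objects(semantic_labels(image))
print({name: int(mask.sum()) for name, mask in objects.items()})
```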
4. The method according to claim 2 or 3, wherein the global area image is a digital orthophoto map (DOM).
5. The method according to claim 1, wherein the obtaining global trajectory data of the area where the object to be predicted is located comprises:
acquiring a trajectory point set of each moving object in the area where the object to be predicted is located, wherein the trajectory point set is formed by coordinate points of the moving object collected in time order;
encoding the trajectory point set of each moving object to obtain a trajectory feature of each moving object;
and constructing a trajectory tensor from the trajectory features of the moving objects as the global trajectory data.
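A possible reading of claim 5, sketched below with an assumed fixed-length encoding (the patent does not prescribe a specific encoding; resampling each track to T time steps is an illustrative choice):

```python
import numpy as np

def encode_track(points, T=8):
    """Resample an (N, 2) time-ordered coordinate sequence to T steps."""
    points = np.asarray(points, dtype=np.float64)
    idx = np.linspace(0, len(points) - 1, T)
    x = np.interp(idx, np.arange(len(points)), points[:, 0])
    y = np.interp(idx, np.arange(len(points)), points[:, 1])
    return np.stack([x, y], axis=1)               # (T, 2) trajectory feature

def build_trajectory_tensor(point_sets, T=8):
    """Stack per-object trajectory features into an (objects, T, 2) tensor."""
    return np.stack([encode_track(p, T) for p in point_sets], axis=0)

point_sets = [[(0, 0), (1, 0), (2, 1)],           # object A, 3 observations
              [(5, 5), (4, 4), (3, 3), (2, 2)]]   # object B, 4 observations
print(build_trajectory_tensor(point_sets).shape)  # -> (2, 8, 2)
```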
6. The method according to any one of claims 1-3 and 5, wherein the extracting features from the global fusion data to obtain global features comprises:
processing the global fusion data by using a trained feature extraction model to obtain the global features.
7. The method of claim 6, wherein the feature extraction model comprises at least a convolutional neural network (CNN) model.
8. The method of claim 1, wherein the trajectory prediction model comprises at least one of the following: a long short-term memory (LSTM) network model and a multilayer perceptron (MLP);
wherein the multilayer perceptron comprises: a recurrent neural network (RNN) model and a gated recurrent unit (GRU) model.
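To make claims 6-8 concrete, here is a hedged PyTorch sketch pairing a CNN feature extraction model with an LSTM trajectory prediction model; the layer sizes, channel counts, and output head are assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):          # cf. claim 7: CNN model
    def __init__(self, in_channels=7, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))

    def forward(self, fused):               # global fusion data (B, C, H, W)
        return self.net(fused)              # global features (B, feat_dim)

class TrajectoryPredictor(nn.Module):       # cf. claim 8: LSTM model
    def __init__(self, feat_dim=64, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.lstm = nn.LSTM(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, 2)        # (x, y) per future time step

    def forward(self, features):
        # Repeat the global feature over the assumed prediction horizon.
        seq = features.unsqueeze(1).repeat(1, self.horizon, 1)
        out, _ = self.lstm(seq)
        return self.head(out)               # target trajectory (B, horizon, 2)

fused = torch.zeros(1, 7, 64, 64)           # e.g. 3 semantic + 4 track channels
trajectory = TrajectoryPredictor()(FeatureExtractor()(fused))
print(trajectory.shape)                     # -> torch.Size([1, 10, 2])
```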
9. The method according to claim 1 or 8, wherein the trajectory prediction model is a first trajectory prediction model for predicting the target trajectory of the object to be predicted.
10. The method according to claim 1 or 8, wherein the trajectory prediction model is a second trajectory prediction model for predicting motion trajectories of all moving objects in the area;
wherein the processing the global features by using the trained trajectory prediction model to obtain the target trajectory of the object to be predicted comprises:
inputting the global features into the second trajectory prediction model, obtaining the motion trajectories of all the moving objects in the area output by the second trajectory prediction model, and taking the motion trajectory of the object to be predicted among all the moving objects as the target trajectory.
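Under claim 10, the model predicts every moving object's trajectory and the target is then selected; a hedged sketch follows (the stand-in model and the index-based selection are illustrative assumptions):

```python
import numpy as np

def second_model(global_features, num_objects=3, horizon=5):
    """Stand-in for a trained second trajectory prediction model that
    outputs one motion trajectory per moving object in the area."""
    rng = np.random.default_rng(seed=0)
    return rng.normal(size=(num_objects, horizon, 2))  # (objects, T, xy)

all_trajectories = second_model(global_features=None)  # toy invocation
target_index = 1                  # index of the object to be predicted
target_trajectory = all_trajectories[target_index]     # selected target
print(target_trajectory.shape)    # -> (5, 2)
```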
11. The method of claim 1, wherein the object to be predicted comprises at least one of: vehicles, animals, humans, robots and unmanned aerial vehicles.
12. The method of claim 1, further comprising:
and performing motion planning for the object to be predicted according to the target trajectory.
13. A trajectory prediction device, comprising:
an acquisition module, configured to acquire global semantic data and global trajectory data of an area where an object to be predicted is located;
a fusion module, configured to fuse the global semantic data and the global trajectory data to obtain global fusion data;
a feature extraction module, configured to extract features from the global fusion data to obtain global features;
and a prediction module, configured to process the global features by using a trained trajectory prediction model to obtain a target trajectory of the object to be predicted.
14. A trajectory prediction device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1 to 12.
15. A computer-readable storage medium, having stored thereon a computer program,
the computer program is executed by a processor to implement the method of any one of claims 1 to 12.
16. A driving system, comprising:
a trajectory prediction device, configured to perform the method of any one of claims 1 to 12;
and a motion controller, configured to control a controlled object to move according to the target trajectory acquired by the trajectory prediction device.
17. A vehicle, characterized by comprising:
the driving system of claim 16.
CN201980005403.6A 2019-03-27 2019-03-27 Track prediction method and device, storage medium, driving system and vehicle Active CN111316286B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/079780 WO2020191642A1 (en) 2019-03-27 2019-03-27 Trajectory prediction method and apparatus, storage medium, driving system and vehicle

Publications (2)

Publication Number Publication Date
CN111316286A true CN111316286A (en) 2020-06-19
CN111316286B CN111316286B (en) 2024-09-10

Family

ID=71161143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980005403.6A Active CN111316286B (en) 2019-03-27 2019-03-27 Track prediction method and device, storage medium, driving system and vehicle

Country Status (2)

Country Link
CN (1) CN111316286B (en)
WO (1) WO2020191642A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112712013B (en) * 2020-12-29 2024-01-05 杭州海康威视数字技术股份有限公司 Method and device for constructing moving track
CN112699942B (en) * 2020-12-30 2024-08-02 东软睿驰汽车技术(沈阳)有限公司 Method, device, equipment and storage medium for identifying operation vehicle
CN112885079B (en) * 2021-01-11 2022-11-29 成都语动未来科技有限公司 Vehicle track prediction method based on global attention and state sharing
CN113011323B (en) 2021-03-18 2022-11-29 阿波罗智联(北京)科技有限公司 Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN113139696B (en) * 2021-05-11 2022-09-20 深圳大学 Trajectory prediction model construction method and trajectory prediction method and device
CN113276858A (en) * 2021-05-13 2021-08-20 际络科技(上海)有限公司 Fuel-saving driving control method and device, computing equipment and storage medium
CN113759400B (en) * 2021-08-04 2024-02-27 江苏怀业信息技术股份有限公司 Method and device for smoothing satellite positioning track
CN113743767B (en) * 2021-08-27 2022-11-04 广东工业大学 Vehicle dispatching method, system, computer and medium based on time and safety
CN113934808B (en) * 2021-10-22 2024-05-28 广东汇天航空航天科技有限公司 Map data acquisition method and device and aircraft
CN114512052B (en) * 2021-12-31 2023-06-02 武汉中海庭数据技术有限公司 Method and device for generating diverging and converging intersections by fusing remote sensing images and track data
CN114998744B (en) * 2022-07-18 2022-10-25 中国农业大学 Agricultural machinery track field dividing method and device based on motion and vision dual-feature fusion
CN118258406B (en) * 2024-05-29 2024-08-13 浙江大学湖州研究院 Automatic guided vehicle navigation method and device based on visual language model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107144285A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Posture information determines method, device and movable equipment
CN108734713A (en) * 2018-05-18 2018-11-02 大连理工大学 A kind of traffic image semantic segmentation method based on multi-characteristic
CN108803617A (en) * 2018-07-10 2018-11-13 深圳大学 Trajectory predictions method and device
US20180374359A1 (en) * 2017-06-22 2018-12-27 Bakhi.com Times Technology (Beijing) Co., Ltd. Evaluation framework for predicted trajectories in autonomous driving vehicle traffic prediction
US20190049957A1 (en) * 2018-03-30 2019-02-14 Intel Corporation Emotional adaptive driving policies for automated driving vehicles
US20190049970A1 (en) * 2017-08-08 2019-02-14 Uber Technologies, Inc. Object Motion Prediction and Autonomous Vehicle Control
US20190049987A1 (en) * 2017-08-08 2019-02-14 Uber Technologies, Inc. Object Motion Prediction and Autonomous Vehicle Control
KR101951595B1 (en) * 2018-05-18 2019-02-22 한양대학교 산학협력단 Vehicle trajectory prediction system and method based on modular recurrent neural network architecture

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180112985A1 (en) * 2016-10-26 2018-04-26 The Charles Stark Draper Laboratory, Inc. Vision-Inertial Navigation with Variable Contrast Tracking Residual
CN107422730A (en) * 2017-06-09 2017-12-01 武汉市众向科技有限公司 The AGV transportation systems of view-based access control model guiding and its driving control method
CN107168342B (en) * 2017-07-12 2020-04-07 哈尔滨工大智慧工厂有限公司 Pedestrian trajectory prediction method for robot path planning
CN108022012A (en) * 2017-12-01 2018-05-11 兰州大学 Vehicle location Forecasting Methodology based on deep learning
CN108981726A (en) * 2018-06-09 2018-12-11 安徽宇锋智能科技有限公司 Unmanned vehicle semanteme Map building and building application method based on perceptual positioning monitoring

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067556A (en) * 2020-08-05 2022-02-18 北京万集科技股份有限公司 Environment sensing method, device, server and readable storage medium
CN114067556B (en) * 2020-08-05 2023-03-14 北京万集科技股份有限公司 Environment sensing method, device, server and readable storage medium
CN112653997A (en) * 2020-12-29 2021-04-13 西安九索数据技术股份有限公司 Position track calculation method based on base station sequence
CN112785466A (en) * 2020-12-31 2021-05-11 科大讯飞股份有限公司 AI enabling method and device of hardware, storage medium and equipment
CN112801059A (en) * 2021-04-07 2021-05-14 广东众聚人工智能科技有限公司 Graph convolution network system and 3D object detection method based on graph convolution network system
CN114312831A (en) * 2021-12-16 2022-04-12 浙江零跑科技股份有限公司 Vehicle track prediction method based on space attention mechanism
CN114312831B (en) * 2021-12-16 2023-10-03 浙江零跑科技股份有限公司 Vehicle track prediction method based on spatial attention mechanism
CN114194213A (en) * 2021-12-29 2022-03-18 北京三快在线科技有限公司 Target object trajectory prediction method and device, storage medium and electronic equipment
CN115790606A (en) * 2023-01-09 2023-03-14 深圳鹏行智能研究有限公司 Trajectory prediction method, trajectory prediction device, robot, and storage medium

Also Published As

Publication number Publication date
WO2020191642A1 (en) 2020-10-01
CN111316286B (en) 2024-09-10

Similar Documents

Publication Publication Date Title
CN111316286B (en) Track prediction method and device, storage medium, driving system and vehicle
US11531346B2 (en) Goal-directed occupancy prediction for autonomous driving
CN111860155B (en) Lane line detection method and related equipment
CN112212874B (en) Vehicle track prediction method and device, electronic equipment and computer readable medium
JP2023511765A (en) Trajectory prediction method, device, equipment, storage medium and program
Laugier et al. Probabilistic analysis of dynamic scenes and collision risks assessment to improve driving safety
Srikanth et al. Infer: Intermediate representations for future prediction
GB2562049A (en) Improved pedestrian prediction by using enhanced map data in automated vehicles
CN114581870B (en) Track planning method, apparatus, device and computer readable storage medium
CN114821507A (en) Multi-sensor fusion vehicle-road cooperative sensing method for automatic driving
US11556126B2 (en) Online agent predictions using semantic maps
CN114792148A (en) Method and device for predicting motion trail
Xia et al. A human-like traffic scene understanding system: A survey
CN114882457A (en) Model training method, lane line detection method and equipment
Kanchana et al. Computer vision for autonomous driving
JP2024019629A (en) Prediction device, prediction method, program and vehicle control system
Solmaz et al. Learn from IoT: pedestrian detection and intention prediction for autonomous driving
DE102023114042A1 (en) Image-based pedestrian speed estimation
Tewari et al. AI-based autonomous driving assistance system
CN110446106B (en) Method for identifying front camera file, electronic equipment and storage medium
Zürn et al. Autograph: Predicting lane graphs from traffic observations
Rajasekaran et al. Artificial Intelligence in Autonomous Vehicles—A Survey of Trends and Challenges
Wan et al. Fusing onboard modalities with V2V information for autonomous driving
CN117765226B (en) Track prediction method, track prediction device and storage medium
CN115909286B (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240515

Address after: Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518057, 1634

Applicant after: Shenzhen Zhuoyu Technology Co.,Ltd.

Country or region after: China

Address before: 518057 Shenzhen Nanshan High-tech Zone, Shenzhen, Guangdong Province, 6/F, Shenzhen Industry, Education and Research Building, Hong Kong University of Science and Technology, No. 9 Yuexingdao, South District, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SZ DJI TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant