CN112541449A - Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle - Google Patents
Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle
- Publication number
- CN112541449A (application CN202011505987.4A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- track
- interaction
- prediction
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a pedestrian trajectory prediction method based on the aerial view of an unmanned aerial vehicle (UAV), comprising the following steps. Step 1: trajectory preprocessing, in which pedestrian positions are obtained with a target detection algorithm and a target tracking algorithm rapidly yields each pedestrian's position sequence over a period of time. Step 2: trajectory encoding, in which a long short-term memory (LSTM) network encodes the trajectory sequence over a period of time to obtain trajectory motion features. Step 3: graph convolutional interaction modeling, in which each pedestrian coordinate serves as a vertex of a graph convolutional network, and the network models the interaction relations among pedestrians to obtain trajectory interaction features. Step 4: mutual information maximization. Step 5: an LSTM network decodes the trajectory motion features and trajectory interaction features to produce a prediction sequence of a given duration, completing the trajectory prediction. Compared with the prior art, the method models the interaction pattern among pedestrians and predicts trajectories with good robustness.
Description
Technical Field
The invention relates to the field of intelligent robots and unmanned platforms, in particular to a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle.
Background
In dense pedestrian scenes such as urban streets, moving agents such as autonomous vehicles and robots must plan their own paths according to the positions of surrounding pedestrians; predicting target positions allows them to keep a safe distance and eliminate risk factors, so the accuracy of future pedestrian position prediction is critical to the agent's decision-making system. Pedestrian trajectory prediction is a complex task: each pedestrian naturally has different motion habits, and in a crowd there is human-human interaction, so an individual's motion pattern is implicitly influenced by nearby pedestrians. People also adjust their own routes according to common social conventions, and a moving agent therefore needs to anticipate the actions and social behavior of others. Constructing a pedestrian interaction model with high interpretability and generalization capability is the key to the trajectory prediction problem.
Dense pedestrian scenes viewed from road level suffer from heavy occlusion, and an ordinary monocular camera has very limited ability to judge distance, whereas a UAV can flexibly obtain the horizontal positions of pedestrians. Using the UAV aerial view therefore allows pedestrian positions to be obtained efficiently for trajectory prediction.
Among existing computer vision methods, graph neural networks apply deep learning to non-Euclidean structures, representing objects as vertices and their relations as edges. They show good robustness and interpretability, and a graph topology is an effective way to model the interaction pattern among pedestrians.
Disclosure of Invention
Considering the advantages and limitations of graph convolutional networks for building interaction models, the invention provides a pedestrian trajectory prediction method based on the UAV aerial view and realizes a new graph convolutional neural network trajectory prediction model, so as to model the interaction pattern among pedestrians and perform trajectory prediction.
The invention discloses a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle, which comprises the following steps:
step 1: carry out pedestrian's orbit preliminary treatment in the pedestrian video of unmanned aerial vehicle aerial photography, including fixing a position the pedestrian fast, the central point who gets the target frame promptly is pedestrian's position, establishes all orbit coordinates X of surveing the pedestrian and is X ═ X1,X2,…,Xn;
Step 2: and (3) carrying out pedestrian track coding: representing the relative position change of a single pedestrian track between the previous frame and the next frameComprises the following steps:encoding to fixed length motion vectors using long short term memory networksThen using long-short term memory network to encode to obtain the trace motion characteristics
And step 3: constructing graph convolution network interaction: using graph structure Gt=(Vt,Et) Establishing an interactive model among pedestrians at the time t, and taking the pedestrians as a set V of vertexes in a graph structuretThe interaction relation among the pedestrians is a set E of edgestThe vertex V in each time pointtConnection relation E oftExpressed as adjacency matrix AtWill adjoin the matrix AtEdge of (1)Weights assigned according to different distancesExpressed as:
characteristics of the path movementInput features as vertices in graph convolution networksOverlapping two layers of graph convolution networks, and obtaining the output characteristic of the ith track through a two-layer GCN structureOutput characteristics of pair-to-figure convolution networkCarrying out long-short term memory network coding to obtain the track interaction characteristics
Step 4: maximize the mutual information between the local features and the global features of the trajectory interaction features. The specific process is as follows: first, a negative sample of the graph convolutional network input is constructed and passed through the graph convolutional network to obtain a negative output, while a global feature is extracted at the same time; a discriminator $D$ is then trained to tell the negative output apart from the positive-sample output $Z$, using a discriminator loss function $L_{inf}$;
through the training process, the extraction result of the graph convolution network is optimized;
Step 5: perform trajectory prediction: an LSTM network decodes the trajectory motion features and the trajectory interaction features, outputting one frame of two-dimensional pedestrian trajectory prediction at a time; if the total output length has not yet reached the prediction sequence length, the new output frame is appended to the input sequence and the first frame of the input is discarded; otherwise the prediction sequence is output, thereby obtaining a prediction sequence of the desired duration and completing the trajectory prediction.
Compared with the prior art, the invention models the interaction pattern among pedestrians and performs trajectory prediction, and the prediction results show good robustness.
Drawings
FIG. 1 is an overall flow chart of a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle according to the present invention;
FIG. 2 is a schematic diagram of a model framework structure of an embodiment of a pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle;
FIG. 3 is a schematic diagram of trajectory prediction in a real scene, in which the solid lines are observed historical trajectories, the dark dotted lines are actual future trajectories, and the light dotted lines are predicted future trajectories. In panel (a), two pedestrians on the right walk from right to left and one pedestrian on the left walks from left to right; in panel (b), three pedestrians walk from right to left. The actual future trajectories largely coincide with the light dotted predicted trajectories, showing that the prediction of the present method is effective.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings.
The overall idea of the invention is to predict pedestrian trajectories from the top-down pedestrian video captured by UAV aerial photography.
As shown in fig. 1, the method mainly comprises the following steps:
step 1: carrying out pedestrian track preprocessing in the pedestrian video aerial photographed by the unmanned aerial vehicle: the pedestrian video that unmanned aerial vehicle was taken photo by plane contains the pedestrian of a plurality of overlooking visual angles, uses existing target detection and target tracking method to fix a position the pedestrian fast, promptly: taking the central position of the target frame as the position of the pedestrian, and setting the track coordinates X of all observed pedestrians as X1,X2,…,XnExtracting two-dimensional position sequence of pedestrian, setting input sequence and prediction sequence length;
Step 2: and (3) carrying out pedestrian track coding: the relative position change of a single pedestrian trajectory between the previous frame and the next frame is expressed as:encoding to fixed length motion vectors using long short term memory networksThen coding the motion vector by using a long-short term memory network to obtain the track motion characteristics
Step 3: construct the graph convolutional network interaction model: a graph structure $G_t = (V_t, E_t)$ models the interactions among pedestrians at time $t$, with the pedestrians as the vertex set $V_t$ and the interaction relations among pedestrians as the edge set $E_t$; the connection relation of the vertices $V_t$ at each time step is expressed as an adjacency matrix $A_t$, whose edge weights are assigned according to the distances between pedestrians.
The trajectory motion features serve as the input features of the vertices in the graph convolutional network. Two graph convolution layers are stacked, and the output feature of the $i$-th trajectory is obtained through this two-layer GCN structure. The GCN output features are then encoded by an LSTM network to obtain the trajectory interaction features.
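A hedged sketch of the distance-weighted adjacency matrix and the two-layer GCN follows. The inverse-distance weighting and the symmetric normalization are common choices assumed here for illustration; the patent text above does not reproduce its exact weight formula.

```python
import torch
import torch.nn as nn

def distance_adjacency(positions, eps=1e-6):
    """positions: (N, 2) pedestrian coordinates at time t.
    Edge weights assigned according to inter-pedestrian distance (inverse distance assumed)."""
    dist = torch.cdist(positions, positions)                       # (N, N) pairwise distances
    a = 1.0 / (dist + eps)                                         # closer pedestrians -> larger weight
    a.fill_diagonal_(1.0)                                          # self-loops
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)   # symmetrically normalized A_t

class TwoLayerGCN(nn.Module):
    """Two stacked graph convolution layers over the pedestrian graph."""
    def __init__(self, in_dim=64, hidden_dim=64, out_dim=64):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, a, h):                 # a: (N, N) adjacency, h: (N, in_dim) vertex features
        h = torch.relu(self.w1(a @ h))       # first graph convolution
        return self.w2(a @ h)                # second graph convolution -> per-trajectory output
```

The vertex features `h` here would be the trajectory motion features from step 2, and the per-vertex outputs would then be passed to an LSTM to form the trajectory interaction features.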
Step 4: in order for the graph convolutional network to build a good pedestrian trajectory interaction model, a mutual information maximization method is used to maximize the mutual information between the local features and the global features of the trajectory interaction features. The specific process is as follows: first, a negative sample of the graph convolutional network input is constructed and passed through the graph convolutional network to obtain a negative output, while a global feature is extracted at the same time; a discriminator $D$ is then trained to tell the negative output apart from the positive-sample output $Z$, using a discriminator loss function $L_{inf}$.
through the training process, the extraction result of the graph convolution network is optimized;
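The following Deep-InfoMax-style sketch illustrates one way such a mutual information objective could be trained. The shuffled-feature negative sample, the mean readout for the global feature, and the bilinear discriminator are assumptions borrowed from common practice; the patent's exact loss $L_{inf}$ is not reproduced in this text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualInfoDiscriminator(nn.Module):
    """Scores (local feature, global feature) pairs; positive pairs should score high."""
    def __init__(self, dim=64):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, local_feats, global_feat):            # (N, dim), (dim,)
        g = global_feat.expand_as(local_feats)
        return self.bilinear(local_feats, g).squeeze(-1)     # (N,) scores

def info_loss(gcn, adj, feats, disc):
    z_pos = gcn(adj, feats)                                  # positive GCN output
    z_neg = gcn(adj, feats[torch.randperm(feats.size(0))])   # negative sample: shuffled vertex input
    summary = torch.sigmoid(z_pos.mean(dim=0))               # global feature (graph readout)
    scores = torch.cat([disc(z_pos, summary), disc(z_neg, summary)])
    labels = torch.cat([torch.ones(z_pos.size(0)), torch.zeros(z_neg.size(0))])
    # Binary cross-entropy between positive/negative pairs acts as the discriminator loss.
    return F.binary_cross_entropy_with_logits(scores, labels)
```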
Step 5: perform trajectory prediction: an LSTM network decodes the trajectory motion features and the trajectory interaction features, outputting one frame of two-dimensional pedestrian trajectory prediction at a time; if the total output length has not yet reached the prediction sequence length, the new output frame is appended to the input sequence and the first frame of the input is discarded; otherwise the prediction sequence is output, thereby obtaining a prediction sequence of the desired duration and completing the trajectory prediction.
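A hedged sketch of this autoregressive decoding loop is given below; the way the motion and interaction features are fused into the initial decoder state, and all layer sizes, are assumptions for the example.

```python
import torch
import torch.nn as nn

class TrajectoryDecoder(nn.Module):
    """Decodes fused motion + interaction features into future positions, one frame at a time."""
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.cell = nn.LSTMCell(2, hidden_dim)
        self.out = nn.Linear(hidden_dim, 2)

    def forward(self, motion_feat, interact_feat, last_pos, pred_len=12):
        # Fuse trajectory motion and interaction features to initialize the decoder state.
        h = torch.tanh(self.init_h(torch.cat([motion_feat, interact_feat], dim=-1)))
        c = torch.zeros_like(h)
        pos, outputs = last_pos, []
        for _ in range(pred_len):                 # stop once the prediction length is reached
            h, c = self.cell(pos, (h, c))
            pos = pos + self.out(h)               # predict one frame of 2D displacement
            outputs.append(pos)                   # the newest frame feeds the next step
        return torch.stack(outputs, dim=1)        # (N, pred_len, 2) predicted sequence
```

At inference time the loop simply runs until the configured prediction length is reached, feeding each newly predicted frame back as the next input, which mirrors the frame-by-frame procedure described in step 5.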
Claims (1)
1. A pedestrian trajectory prediction method based on an unmanned aerial vehicle aerial photography view angle is characterized by specifically comprising the following steps:
step 1: carry out pedestrian's orbit preliminary treatment in the pedestrian video of unmanned aerial vehicle aerial photography, including fixing a position the pedestrian fast, the central point who gets the target frame promptly is pedestrian's position, establishes all orbit coordinates X of surveing the pedestrian and is X ═ X1,X2,…,Xn;
Step 2: and (3) carrying out pedestrian track coding: the relative position change of a single pedestrian trajectory between the previous frame and the next frame is expressed as:encoding to fixed length motion vectors using long short term memory networksThen using long-short term memory network to encode to obtain the trace motion characteristics
And step 3: constructing graph convolution network interaction: using graph structure Gt=(Vt,Et) Establishing an interactive model among pedestrians at the time t, and taking the pedestrians as a set V of vertexes in a graph structuretThe interaction relation among the pedestrians is a set E of edgestEach one of themVertex V in time pointtConnection relation E oftExpressed as adjacency matrix AtWill adjoin the matrix AtEdge of (1)Weights assigned according to different distances Expressed as:
characteristics of the path movementInput features V as vertices in graph convolution networksi tSuperposing two layers of graph convolution networks, and obtaining the output characteristic of the ith track through two layers of GCN structuresOutput characteristics of pair-to-figure convolution networkCarrying out long-short term memory network coding to obtain the track interaction characteristics
And 4, step 4: the method for realizing the maximum mutual information between the local features and the global features of the track interaction features comprises the following specific processes: firstly, making negative sample of convolution network input Obtaining output by a graph convolution networkSimultaneous extraction of global featuresThe judger D is then trained so that it can output negative examplesMisjudging and matching the output Z of the positive sample, thereby training the loss function L of the discriminatorinf,LinfExpressed as:
through the training process, the extraction result of the graph convolution network is optimized;
and 5: and (3) carrying out track prediction: using long and short term memory network to characterize trajectory motionAnd trajectory interaction featuresDecoding is performed, a frame of two-dimensional pedestrian trajectory prediction is output, and it is determined whether the total output length reaches the prediction sequence length? If not, adding a new output frame into the input sequence, discarding the input of the first frame, if so, outputting the prediction sequence, thereby obtaining the prediction sequence with a certain time length and completing the track prediction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011505987.4A CN112541449A (en) | 2020-12-18 | 2020-12-18 | Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011505987.4A CN112541449A (en) | 2020-12-18 | 2020-12-18 | Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112541449A (en) | 2021-03-23 |
Family
ID=75019153
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011505987.4A (Pending) | Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle | 2020-12-18 | 2020-12-18 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112541449A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108564118A (en) * | 2018-03-30 | 2018-09-21 | 陕西师范大学 | Crowd scene pedestrian track prediction technique based on social affinity shot and long term memory network model |
CN110660082A (en) * | 2019-09-25 | 2020-01-07 | 西南交通大学 | Target tracking method based on graph convolution and trajectory convolution network learning |
CN111161322A (en) * | 2019-12-31 | 2020-05-15 | 大连理工大学 | LSTM neural network pedestrian trajectory prediction method based on human-vehicle interaction |
CN111339867A (en) * | 2020-02-18 | 2020-06-26 | 广东工业大学 | Pedestrian trajectory prediction method based on generation of countermeasure network |
CN111401233A (en) * | 2020-03-13 | 2020-07-10 | 商汤集团有限公司 | Trajectory prediction method, apparatus, electronic device, and medium |
CN111428763A (en) * | 2020-03-17 | 2020-07-17 | 陕西师范大学 | Pedestrian trajectory prediction method based on scene constraint GAN |
CN111339449A (en) * | 2020-03-24 | 2020-06-26 | 青岛大学 | User motion trajectory prediction method, device, equipment and storage medium |
CN111612206A (en) * | 2020-03-30 | 2020-09-01 | 清华大学 | Street pedestrian flow prediction method and system based on space-time graph convolutional neural network |
CN111488815A (en) * | 2020-04-07 | 2020-08-04 | 中山大学 | Basketball game goal event prediction method based on graph convolution network and long-time and short-time memory network |
CN111626198A (en) * | 2020-05-27 | 2020-09-04 | 多伦科技股份有限公司 | Pedestrian motion detection method based on Body Pix in automatic driving scene |
CN111931905A (en) * | 2020-07-13 | 2020-11-13 | 江苏大学 | Graph convolution neural network model and vehicle track prediction method using same |
Non-Patent Citations (7)
Title |
---|
DAOGUANG LIU等: "A Method For Short-Term Traffic Flow Forecasting Based On GCN-LSTM", 《2020 INTERNATIONAL CONFERENCE ON COMPUTER VISION, IMAGE AND DEEP LEARNING (CVIDL)》 * |
FRANCO SCARSELLI等: "The Graph Neural Network Model", 《IEEE TRANSACTIONS ON NEURAL NETWORKS》 * |
HAO XUE等: "A Location-Velocity-Temporal Attention LSTM Model for Pedestrian Trajectory Prediction", 《IEEEACCESS》 * |
IAN J. GOODFELLOW等: "Generative Adversarial Nets", 《ARXIV:1406.2661V1 [STAT.ML]》 * |
LEGOLAS~: "Understanding the Loss Function of Generative Adversarial Networks" (in Chinese), 《CSDN》 *
YINGFAN HUANG等: "STGAT: Modeling Spatial-Temporal Interactions for Human Trajectory Prediction", 《2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION》 * |
ZHISHUAI LI等: "A Hybrid Deep Learning Approach with GCN and LSTM for Traffic Flow Prediction", 《2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC)》 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113269054A (en) * | 2021-04-30 | 2021-08-17 | 重庆邮电大学 | Aerial video analysis method based on space-time 2D convolutional neural network |
CN113269054B (en) * | 2021-04-30 | 2022-06-10 | 重庆邮电大学 | Aerial video analysis method based on space-time 2D convolutional neural network |
CN113435356A (en) * | 2021-06-30 | 2021-09-24 | 吉林大学 | Track prediction method for overcoming observation noise and perception uncertainty |
CN113362367A (en) * | 2021-07-26 | 2021-09-07 | 北京邮电大学 | Crowd trajectory prediction method based on multi-precision interaction |
CN113362367B (en) * | 2021-07-26 | 2021-12-14 | 北京邮电大学 | Crowd trajectory prediction method based on multi-precision interaction |
CN114827750A (en) * | 2022-05-31 | 2022-07-29 | 脸萌有限公司 | Method, device and equipment for predicting visual angle and storage medium |
CN114827750B (en) * | 2022-05-31 | 2023-12-22 | 脸萌有限公司 | Viewing angle prediction method, device, equipment and storage medium |
CN114861554A (en) * | 2022-06-02 | 2022-08-05 | 广东工业大学 | Unmanned ship target track prediction method based on collective filtering |
CN116612493A (en) * | 2023-04-28 | 2023-08-18 | 深圳先进技术研究院 | Pedestrian geographic track extraction method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110956651B (en) | Terrain semantic perception method based on fusion of vision and vibrotactile sense | |
CN112541449A (en) | Pedestrian trajectory prediction method based on unmanned aerial vehicle aerial photography view angle | |
US11017550B2 (en) | End-to-end tracking of objects | |
US11860629B2 (en) | Sparse convolutional neural networks | |
Bhattacharyya et al. | Long-term on-board prediction of people in traffic scenes under uncertainty | |
Srikanth et al. | Infer: Intermediate representations for future prediction | |
Yudin et al. | Object detection with deep neural networks for reinforcement learning in the task of autonomous vehicles path planning at the intersection | |
US11731663B2 (en) | Systems and methods for actor motion forecasting within a surrounding environment of an autonomous vehicle | |
Sales et al. | Adaptive finite state machine based visual autonomous navigation system | |
CN110986945B (en) | Local navigation method and system based on semantic altitude map | |
JP2020123346A (en) | Method and device for performing seamless parameter switching by using location based algorithm selection to achieve optimized autonomous driving in each of regions | |
Yang et al. | PTPGC: Pedestrian trajectory prediction by graph attention network with ConvLSTM | |
CN117826795A (en) | Autonomous inspection method and system of underground pipe gallery inspection robot | |
CN115861383A (en) | Pedestrian trajectory prediction device and method based on multi-information fusion in crowded space | |
Roth et al. | Viplanner: Visual semantic imperative learning for local navigation | |
CN115272712A (en) | Pedestrian trajectory prediction method fusing moving target analysis | |
Karpyshev et al. | Mucaslam: Cnn-based frame quality assessment for mobile robot with omnidirectional visual slam | |
Xu et al. | Trajectory prediction for autonomous driving with topometric map | |
Zhang et al. | Research on the Application of Computer Vision Based on Deep Learning in Autonomous Driving Technology | |
Dudarenko et al. | Robot navigation system in stochastic environment based on reinforcement learning on lidar data | |
Khalil et al. | Integration of motion prediction with end-to-end latent RL for self-driving vehicles | |
CN114723782A (en) | Traffic scene moving object perception method based on different-pattern image learning | |
Wang et al. | Enhancing mapless trajectory prediction through knowledge distillation | |
CN115018883A (en) | Transmission line unmanned aerial vehicle infrared autonomous inspection method based on optical flow and Kalman filtering | |
Zürn et al. | Autograph: Predicting lane graphs from traffic observations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210323 |