CN110516613A - Pedestrian trajectory prediction method under a first-person view - Google Patents

Pedestrian trajectory prediction method under a first-person view

Info

Publication number
CN110516613A
CN110516613A (application CN201910807214.2A)
Authority
CN
China
Prior art keywords
pedestrian
camera
layer
network
conv1d
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910807214.2A
Other languages
Chinese (zh)
Other versions
CN110516613B (en)
Inventor
刘洪波
李伯林
江同棒
张博
汪大峰
戴光耀
李科
林正奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN201910807214.2A
Publication of CN110516613A
Application granted
Publication of CN110516613B
Active legal status
Anticipated expiration legal status

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian trajectory prediction method under a first-person view that predicts pedestrian trajectories using an encoder-decoder structure combined with a recurrent convolutional network. Feature vectors of pedestrian trajectory information are obtained by encoding the original images; the feature vectors are then decoded to predict future pedestrian trajectories. On public data sets and a self-collected data set, the method accurately predicts the trajectories of multiple pedestrians over the next 10 frames; the L2 distance error between the final predicted trajectory and the final actual trajectory improves to 40 pixels, about 30 pixels better than existing methods. The invention proposes a spatio-temporal convolutional recurrent network for pedestrian trajectory prediction: one-dimensional convolutions perform the encoding and decoding, and a spatio-temporal convolutional network performs the prediction. Among current related techniques it is comparatively simple to implement, and its data acquisition and processing are clear, concise, and practical.

Description

Pedestrian trajectory prediction method under a first-person view
Technical field
The present invention relates to pedestrian trajectory prediction methods, and in particular to a pedestrian trajectory prediction method under a first-person view.
Background technique
With automatic driving and robotics booming, obtaining information about the vehicle's surroundings through an on-board camera, predicting pedestrian trajectories in video, controlling driving behavior, and planning more reasonable paths in order to avoid obstacles and pedestrians is a highly important task.
Under a non-first-person view, such as a surveillance camera, pedestrian trajectory prediction need not consider the influence of the camera's own motion: for example, a pedestrian detection box growing larger in a surveillance video indicates that the pedestrian is moving toward the camera. The first-person view differs from fixed-shot video such as surveillance footage, because the motion of the robot or photographer directly affects how pedestrian information is acquired from the video and predicted. The first-person view is a moving viewpoint; the photographer is also in motion, and this motion affects judgments about pedestrians' future behavior. Under a first-person view, if a pedestrian appears to grow larger, one cannot determine whether the pedestrian is moving toward the camera or the camera is approaching the pedestrian, so pedestrian trajectory prediction becomes inaccurate.
Summary of the invention
To solve the above problems in the prior art, the present invention proposes a pedestrian trajectory prediction method under a first-person view that improves the accuracy of pedestrian trajectory prediction.
The idea of the invention is as follows: a model based on an encoder-decoder structure predicts pedestrians' future trajectories by introducing pedestrian position information, the camera's own motion history, and a recurrent convolutional network, and further improves the accuracy of future pedestrian trajectory prediction in video by adding information about the camera's own future motion.
To achieve the above goals, the technical scheme of the invention is as follows. A pedestrian trajectory prediction method under a first-person view comprises the following steps:
A. The network encoder encodes trajectory features.
A1. A motion camera is worn on the head or held in the hand, and video recorded under the first-person view is obtained in real time;
A2. The video is split into images at a frame rate of k frames per second, where k ranges from 5 to 20;
A3. The split images from step A2 are processed in a processor to obtain pedestrian position feature vectors via the following steps:
A31. Pedestrians in the images are marked with an annotation tool, producing pedestrian detection boxes;
A32. The pedestrian detection boxes marked in step A31 are corrected with a time-window sampling algorithm. Because the coordinate origin of image space is the upper-left corner of the image, the horizontal coordinate x increases from left to right and the vertical coordinate y increases from top to bottom, the upper-left position (x_i^min, y_i^min)^T and lower-right position (x_i^max, y_i^max)^T of the pedestrian detection box are taken as pedestrian trajectory data. The trajectory sequences of all pedestrians contained in n consecutive frames form one group of training samples, where n ranges from 10 to 20. The training sample of each pedestrian is denoted L_in:
L_in = (l_{t_current - T_his}, ..., l_{t_current}),
where l_i = (x_i^min, y_i^min, x_i^max, y_i^max) ∈ R^4, i ranges over t_current - T_his to t_current, t_current is the current time, and T_his is the historical frame range, with a value of 5 to 20.
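The patent gives no code for step A32; the following is a minimal Python sketch of building the training samples L_in as sliding windows over one pedestrian's detection boxes. The `boxes` data and the helper name `make_windows` are illustrative assumptions; the patent only fixes the window length n to the range 10 to 20.

```python
# Build training samples L_in: every run of n consecutive bounding
# boxes (x_min, y_min, x_max, y_max) for one pedestrian is one sample.
def make_windows(boxes, n):
    """Return all length-n windows of consecutive 4-tuples."""
    if n > len(boxes):
        return []
    return [boxes[i:i + n] for i in range(len(boxes) - n + 1)]

# Example: 12 frames of a detection box drifting right by 2 px per frame.
boxes = [(10 + 2 * t, 50, 30 + 2 * t, 120) for t in range(12)]
samples = make_windows(boxes, n=10)
```

With 12 frames and n = 10 this yields three overlapping samples; in practice one group of samples would be built per pedestrian visible in the n consecutive frames.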
A33. A pedestrian position feature extraction convolutional network is constructed that processes pedestrian positions and detection box sizes to obtain the pedestrian position feature vector L_in^F:
L_in^F = (lf_1, ..., lf_m),
where lf_i denotes the i-th feature value of the pedestrian position features.
The pedestrian position feature extraction convolutional network has 4 layers. The input data L_in enters the first Conv1d one-dimensional convolutional layer; the output of the first Conv1d layer enters the second Conv1d layer; the output of the second enters the third; the output of the third enters the fourth; and the output of the fourth Conv1d layer yields the feature vector L_in^F. The output of every layer is batch-normalized (BN) and activated with the ReLU activation function.
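The patent specifies only the layer count and the BN + ReLU treatment of each layer's output; kernel sizes, channel counts, and weights in the NumPy sketch below are illustrative assumptions, not the patented configuration.

```python
import numpy as np

def conv1d(x, w):
    """'Valid' 1-D convolution (cross-correlation form).
    x: (C_in, L), w: (C_out, C_in, K) -> (C_out, L - K + 1)."""
    c_out, c_in, k = w.shape
    L = x.shape[1] - k + 1
    out = np.zeros((c_out, L))
    for o in range(c_out):
        for i in range(L):
            out[o, i] = np.sum(w[o] * x[:, i:i + k])
    return out

def bn_relu(x, eps=1e-5):
    """Per-channel normalisation then ReLU (BN without learned scale/shift)."""
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return np.maximum((x - mu) / np.sqrt(var + eps), 0.0)

def encode(l_in, weights):
    """4 stacked Conv1d layers, BN + ReLU after every layer, per the text."""
    x = l_in
    for w in weights:
        x = bn_relu(conv1d(x, w))
    return x.ravel()  # flatten to the feature vector L_in^F

rng = np.random.default_rng(0)
l_in = rng.normal(size=(4, 10))          # 4 box coordinates x 10 history frames
ws = [rng.normal(size=(8, 4, 3)) * 0.1,  # channel/kernel sizes are assumptions
      rng.normal(size=(8, 8, 3)) * 0.1,
      rng.normal(size=(8, 8, 3)) * 0.1,
      rng.normal(size=(8, 8, 3)) * 0.1]
feat = encode(l_in, ws)                  # L_in^F, here 8 channels x 2 positions
```

Each valid convolution with kernel 3 shortens the sequence by 2 (10, 8, 6, 4, 2), so the flattened feature vector here has 16 entries.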
A4. The camera's ego-motion history feature vector is obtained;
A41. The camera's ego-motion of the current frame relative to the previous frame is obtained with a Structure-from-Motion algorithm. The camera's ego-motion information comprises the Euler angles of the camera's rotation r_t ∈ R^3 and the velocity information v_t ∈ R^3. The Euler angles comprise yaw ψ, roll Φ, and pitch θ; the velocity information comprises the projections v_x, v_y, v_z of the camera's instantaneous velocity onto the three coordinate axes. The camera's ego-motion history feature vector is denoted E_H:
E_H = (e_{t_current - T_his}, ..., e_{t_current}),
where e_t = (r_t^T, v_t^T)^T ∈ R^6 and t ranges over t_current - T_his to t_current.
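A small sketch of assembling the ego-motion history E_H from per-frame vectors e_t. The numeric values are made up for illustration; in the patent they come from a Structure-from-Motion algorithm.

```python
def ego_vector(yaw, roll, pitch, vx, vy, vz):
    """e_t = (r_t, v_t): three Euler angles plus the three velocity
    projections, stacked into a vector in R^6."""
    return (yaw, roll, pitch, vx, vy, vz)

# Illustrative E_H over T_his = 10 history frames: slowly increasing yaw,
# constant pitch and forward velocity.
E_H = [ego_vector(t / 100, 0.0, 0.02, 1.2, 0.0, 0.1) for t in range(10)]
```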
A42. A 4-layer camera ego-motion history feature extraction convolutional network is constructed to extract features of the camera's ego-motion history E_H, obtaining the camera's ego-motion history feature vector E_H^F:
E_H^F = (ef_1, ..., ef_n)
where ef_i denotes the i-th feature value of the camera's ego-motion history features.
This camera ego-motion history feature extraction convolutional network has the same structure as the pedestrian position feature extraction convolutional network.
A5. The two vectors L_in^F and E_H^F are concatenated end to end to obtain the feature vector LE^F:
LE^F = (lf_1, ..., lf_m, ef_1, ..., ef_n)
A6. The camera's future ego-motion feature vector is obtained;
A61. With the same method as in step A41, the motion camera's future T_future frames of ego-motion information are obtained via the Structure-from-Motion algorithm, with T_future in the range 5 to 20; this indicates where the motion camera intends to go in the future and is denoted E_Fur.
A62. A camera future ego-motion feature extraction convolutional network (CNN) is constructed to extract features of the camera's future ego-motion information E_Fur, obtaining the camera's future ego-motion feature vector E_Fur^F:
E_Fur^F = (eff_1, ..., eff_n)
where eff_i denotes the i-th feature value of the camera's future ego-motion features.
This camera future ego-motion feature extraction convolutional network has the same structure as the camera ego-motion history feature extraction convolutional network of step A42.
B. The network decoder decodes and predicts pedestrians' future trajectories.
In the network decoder, the pedestrian position features and ego-motion features output by the network encoder are decoded, and pedestrians' future trajectories are obtained through a deconvolutional network. To improve prediction accuracy, the future motion information of the motion camera itself, which represents the future trend, is added. The specific steps are as follows:
B1. A standard recurrent neural network (RNN) with n units is constructed;
B2. The feature vectors LE^F and E_Fur^F are fed as the two inputs of the recurrent neural network RNN to obtain the network's output prediction sequence L_out;
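Steps B1 and B2 can be sketched as a minimal Elman RNN run for two steps, one per input vector. The patent only says "standard RNN with n units"; the readout matrix, weight shapes, and dimensions below are assumptions.

```python
import numpy as np

def rnn_two_inputs(x1, x2, W_x, W_h, W_o):
    """h_t = tanh(W_x x_t + W_h h_{t-1}) over the two inputs;
    the prediction sequence L_out is read out from the final state."""
    h = np.zeros(W_h.shape[0])
    for x in (x1, x2):
        h = np.tanh(W_x @ x + W_h @ h)
    return W_o @ h

rng = np.random.default_rng(1)
d, units, out_dim = 16, 32, 40   # out_dim: e.g. 10 future frames x 4 coords
LEF  = rng.normal(size=d)        # concatenated position + ego-history features
EFur = rng.normal(size=d)        # future ego-motion features
W_x = rng.normal(size=(units, d)) * 0.1
W_h = rng.normal(size=(units, units)) * 0.1
W_o = rng.normal(size=(out_dim, units)) * 0.1
L_out = rnn_two_inputs(LEF, EFur, W_x, W_h, W_o)
```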
B3. The trajectory-sequence decoding deconvolutional network is constructed;
The trajectory-sequence decoding deconvolutional network has 4 layers. The prediction sequence L_out first enters the first Conv1d one-dimensional convolutional layer; the output of the first Conv1d layer enters the second Conv1d layer; the output of the second enters the third. The outputs of the first three layers are each batch-normalized (BN) and activated with the ReLU activation function. Finally, the output of the third Conv1d layer enters the fourth Conv1d one-dimensional convolutional layer.
B4. The prediction sequence L_out is fed into the deconvolutional network built in step B3 to obtain the pedestrians' future detection box sizes and trajectory information L_pre:
L_pre = (l_{t_current+1}, ..., l_{t_current+T_future}),
where l_i = (x_i^min, y_i^min, x_i^max, y_i^max) ∈ R^4, i ranges over t_current + 1 to t_current + T_future, t_current is the current time, and T_future is the number of predicted future frames, in the range 5 to 20.
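Steps B3 and B4 can be sketched with stride-1 transposed 1-D convolutions that grow the sequence from the RNN output length to T_future = 10 future boxes, with BN + ReLU on the first three layers only, as the text describes. The reshape of L_out, channel counts, and kernel sizes are all assumptions for illustration.

```python
import numpy as np

def deconv1d(x, w):
    """1-D transposed convolution, stride 1.
    x: (C_in, L), w: (C_in, C_out, K) -> (C_out, L + K - 1)."""
    c_in, c_out, k = w.shape
    out = np.zeros((c_out, x.shape[1] + k - 1))
    for i in range(x.shape[1]):
        for ci in range(c_in):
            out[:, i:i + k] += x[ci, i] * w[ci]
    return out

def bn_relu(x, eps=1e-5):
    mu, var = x.mean(axis=1, keepdims=True), x.var(axis=1, keepdims=True)
    return np.maximum((x - mu) / np.sqrt(var + eps), 0.0)

def decode(l_out, weights):
    """4 transposed-conv layers; BN + ReLU after the first three only."""
    x = l_out.reshape(8, 6)        # assumed reshape of the RNN output
    for w in weights[:-1]:
        x = bn_relu(deconv1d(x, w))
    x = deconv1d(x, weights[-1])   # last layer: no BN/ReLU, per step B3
    return x.T                     # (T_future, 4): one box per future frame

rng = np.random.default_rng(2)
l_out = rng.normal(size=48)
ws = [rng.normal(size=(8, 8, 2)) * 0.1,
      rng.normal(size=(8, 8, 2)) * 0.1,
      rng.normal(size=(8, 8, 2)) * 0.1,
      rng.normal(size=(8, 4, 2)) * 0.1]
L_pre = decode(l_out, ws)
```

Each kernel-2 transposed convolution lengthens the sequence by 1 (6, 7, 8, 9, 10), so the output is 10 future frames of 4 box coordinates each, matching L_pre.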
Compared with the prior art, the invention has the following advantages:
1. The invention predicts pedestrian trajectories under the first-person view using an encoder-decoder structure combined with a recurrent convolutional network. The original images are encoded in step A into feature vectors of pedestrian trajectory information, and the feature vectors are decoded in step B to predict the trajectories of future pedestrians. On public data sets and a self-collected data set, the invention accurately predicts the trajectories of multiple pedestrians over the next 10 frames; the L2 distance error between the final predicted trajectory and the final actual trajectory improves to 40 pixels, about 30 pixels better than existing methods.
2. The invention proposes a spatio-temporal convolutional recurrent network for predicting pedestrian trajectories: one-dimensional convolutions perform the encoding and decoding, and a spatio-temporal convolutional network performs the prediction. Among current related techniques it is comparatively simple to implement, and its data acquisition and processing are clear, concise, and practical.
Detailed description of the invention
The invention includes six figures, in which:
Fig. 1 is an image after labeling with the annotation tool.
Fig. 2 marks the history, future positions, and detection box information of a pedestrian at time t0.
Fig. 3 shows the history, future positions, and detection box information of the pedestrian at time t0 + 10.
Fig. 4 is the flow chart of the invention.
Fig. 5 is the structure of the convolutional network used in the invention.
Fig. 6 is the structure of the deconvolutional network used in the invention.
Specific embodiment
The invention is further described with reference to the accompanying drawings. The first-person-view images are processed according to the flow shown in Fig. 4. First, camera images are captured with a motion camera during motion, and n frames of pictures are taken as the original first-person-view pedestrian-prediction images. The original images are labeled according to step A31 of the invention, as shown in Fig. 1. Here the labeling results must be corrected according to the precision of the annotation tool.
Trajectory prediction results are obtained according to steps A and B of the invention. To show the prediction effect intuitively, the predicted trajectory, real trajectory, and historical trajectory are marked on the images. Suppose Fig. 2 is the image at time t0: triangles on Fig. 2 mark the pedestrian's predicted trajectory for the 10 seconds after t0; four-pointed stars mark the pedestrian's real trajectory for the 10 seconds after t0; diamonds mark the real historical trajectory for the 10 seconds before t0, as shown in Fig. 2. Fig. 3 is the image at time t0 + 10. Comparing Fig. 2 and Fig. 3 shows that the pedestrian trajectory predicted at time t0 (triangle marks in Fig. 2) is consistent with the trend of the real future pedestrian trajectory (four-pointed star marks), and the deviation between the two sets of trajectory coordinates is very small. The center of the box in Fig. 3 is the true center position of the pedestrian at time t0 + 10; it was predicted among the trajectory points at time t0 in Fig. 2, namely the leftmost triangle point of the triangle-marked trajectory. Analysis of the prediction results shows that the method of the invention can accurately predict pedestrians' future trajectories.
The invention is not limited to this embodiment; any equivalent idea or change within the technical scope of the present disclosure falls within the protection scope of the invention.

Claims (1)

1. A pedestrian trajectory prediction method under a first-person view, characterized in that it comprises the following steps:
A. The network encoder encodes trajectory features.
A1. A motion camera is worn on the head or held in the hand, and video recorded under the first-person view is obtained in real time;
A2. The video is split into images at a frame rate of k frames per second, where k ranges from 5 to 20;
A3. The split images from step A2 are processed in a processor to obtain pedestrian position feature vectors via the following steps:
A31. Pedestrians in the images are marked with an annotation tool, producing pedestrian detection boxes;
A32. The pedestrian detection boxes marked in step A31 are corrected with a time-window sampling algorithm; because the coordinate origin of image space is the upper-left corner of the image, the horizontal coordinate x increases from left to right and the vertical coordinate y increases from top to bottom, the upper-left position (x_i^min, y_i^min)^T and lower-right position (x_i^max, y_i^max)^T of the pedestrian detection box are taken as pedestrian trajectory data; the trajectory sequences of all pedestrians contained in n consecutive frames form one group of training samples, with n in the range 10 to 20, and the training sample of each pedestrian is denoted L_in:
L_in = (l_{t_current - T_his}, ..., l_{t_current}),
where l_i = (x_i^min, y_i^min, x_i^max, y_i^max) ∈ R^4, i ranges over t_current - T_his to t_current, t_current is the current time, and T_his is the historical frame range, with a value of 5 to 20;
A33. A pedestrian position feature extraction convolutional network is constructed that processes pedestrian positions and detection box sizes to obtain the pedestrian position feature vector L_in^F:
L_in^F = (lf_1, ..., lf_m),
where lf_i denotes the i-th feature value of the pedestrian position features;
the pedestrian position feature extraction convolutional network has 4 layers: the input data L_in enters the first Conv1d one-dimensional convolutional layer, the output of the first Conv1d layer enters the second Conv1d layer, the output of the second enters the third, the output of the third enters the fourth, and the output of the fourth Conv1d layer yields the feature vector L_in^F; the output of every layer is batch-normalized (BN) and activated with the ReLU activation function;
A4. The camera's ego-motion history feature vector is obtained;
A41. The camera's ego-motion of the current frame relative to the previous frame is obtained with a Structure-from-Motion algorithm; the camera's ego-motion information comprises the Euler angles of the camera's rotation r_t ∈ R^3 and the velocity information v_t ∈ R^3; the Euler angles comprise yaw ψ, roll φ, and pitch θ, and the velocity information comprises the projections v_x, v_y, v_z of the camera's instantaneous velocity onto the three coordinate axes; the camera's ego-motion history feature vector is denoted E_H:
E_H = (e_{t_current - T_his}, ..., e_{t_current}),
where e_t = (r_t^T, v_t^T)^T ∈ R^6 and t ranges over t_current - T_his to t_current;
A42. A 4-layer camera ego-motion history feature extraction convolutional network is constructed to extract features of the camera's ego-motion history E_H, obtaining the camera's ego-motion history feature vector E_H^F:
E_H^F = (ef_1, ..., ef_n)
where ef_i denotes the i-th feature value of the camera's ego-motion history features;
this camera ego-motion history feature extraction convolutional network has the same structure as the pedestrian position feature extraction convolutional network;
A5. The two vectors L_in^F and E_H^F are concatenated end to end to obtain the feature vector LE^F:
LE^F = (lf_1, ..., lf_m, ef_1, ..., ef_n)
A6. The camera's future ego-motion feature vector is obtained;
A61. With the same method as in step A41, the motion camera's future T_future frames of ego-motion information are obtained via the Structure-from-Motion algorithm, indicating where the motion camera intends to go in the future, denoted E_Fur;
A62. A camera future ego-motion feature extraction convolutional network (CNN) is constructed to extract features of the camera's future ego-motion information E_Fur, obtaining the camera's future ego-motion feature vector E_Fur^F:
E_Fur^F = (eff_1, ..., eff_n)
where eff_i denotes the i-th feature value of the camera's future ego-motion features;
this camera future ego-motion feature extraction convolutional network has the same structure as the camera ego-motion history feature extraction convolutional network of step A42;
B. The network decoder decodes and predicts pedestrians' future trajectories.
In the network decoder, the pedestrian position features and ego-motion features output by the network encoder are decoded, and pedestrians' future trajectories are obtained through a deconvolutional network; to improve prediction accuracy, the future motion information of the motion camera itself, which represents the future trend, is added; the specific steps are as follows:
B1. A standard recurrent neural network (RNN) with n units is constructed;
B2. The feature vectors LE^F and E_Fur^F are fed as the two inputs of the recurrent neural network RNN to obtain the network's output prediction sequence L_out;
B3. The trajectory-sequence decoding deconvolutional network is constructed;
the trajectory-sequence decoding deconvolutional network has 4 layers: the prediction sequence L_out first enters the first Conv1d one-dimensional convolutional layer, the output of the first Conv1d layer enters the second Conv1d layer, and the output of the second enters the third; the outputs of the first three layers are each batch-normalized (BN) and activated with the ReLU activation function; finally, the output of the third Conv1d layer enters the fourth Conv1d one-dimensional convolutional layer;
B4. The prediction sequence L_out is fed into the deconvolutional network built in step B3 to obtain the pedestrians' future detection box sizes and trajectory information L_pre:
L_pre = (l_{t_current+1}, ..., l_{t_current+T_future}),
where l_i = (x_i^min, y_i^min, x_i^max, y_i^max) ∈ R^4, i ranges over t_current + 1 to t_current + T_future, t_current is the current time, and T_future is the number of predicted future frames, in the range 5 to 20.
CN201910807214.2A 2019-08-29 2019-08-29 Method for predicting pedestrian track at first view angle Active CN110516613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910807214.2A CN110516613B (en) 2019-08-29 2019-08-29 Method for predicting pedestrian track at first view angle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910807214.2A CN110516613B (en) 2019-08-29 2019-08-29 Method for predicting pedestrian track at first view angle

Publications (2)

Publication Number Publication Date
CN110516613A (en) 2019-11-29
CN110516613B (en) 2023-04-18

Family

ID=68629021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910807214.2A Active CN110516613B (en) 2019-08-29 2019-08-29 Method for predicting pedestrian track at first view angle

Country Status (1)

Country Link
CN (1) CN110516613B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114116944A (en) * 2021-11-30 2022-03-01 重庆七腾科技有限公司 Trajectory prediction method and device based on time attention convolution network
CN114581487A (en) * 2021-08-02 2022-06-03 北京易航远智科技有限公司 Pedestrian trajectory prediction method and device, electronic equipment and computer program product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160379074A1 (en) * 2015-06-25 2016-12-29 Appropolis Inc. System and a method for tracking mobile objects using cameras and tag devices
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN109063581A (en) * 2017-10-20 2018-12-21 奥瞳系统科技有限公司 Enhanced Face datection and face tracking method and system for limited resources embedded vision system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160379074A1 (en) * 2015-06-25 2016-12-29 Appropolis Inc. System and a method for tracking mobile objects using cameras and tag devices
CN109063581A (en) * 2017-10-20 2018-12-21 奥瞳系统科技有限公司 Enhanced Face datection and face tracking method and system for limited resources embedded vision system
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JO SUNYOUNG et al.: "Doppler Channel Series Prediction Using Recurrent Neural Networks"
张德正 et al.: "Video frame prediction based on a deep convolutional long short-term memory neural network"
韩昭蓉 et al.: "Trajectory anomaly detection algorithm based on a Bi-LSTM model"
高玄 et al.: "A survey of crowd behavior recognition methods based on image processing"

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581487A (en) * 2021-08-02 2022-06-03 北京易航远智科技有限公司 Pedestrian trajectory prediction method and device, electronic equipment and computer program product
CN114581487B (en) * 2021-08-02 2022-11-25 北京易航远智科技有限公司 Pedestrian trajectory prediction method, device, electronic equipment and computer program product
CN114116944A (en) * 2021-11-30 2022-03-01 重庆七腾科技有限公司 Trajectory prediction method and device based on time attention convolution network
CN114116944B (en) * 2021-11-30 2024-06-11 重庆七腾科技有限公司 Track prediction method and device based on time attention convolution network

Also Published As

Publication number Publication date
CN110516613B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110480634B (en) Arm guide motion control method for mechanical arm motion control
CN111339867B (en) Pedestrian trajectory prediction method based on generation of countermeasure network
CN109800689B (en) Target tracking method based on space-time feature fusion learning
US20230316742A1 (en) Image processing method, apparatus and device, and computer-readable storage medium
CN109635793A Pedestrian trajectory prediction method for unmanned driving based on convolutional neural networks
Breitenmoser et al. A monocular vision-based system for 6D relative robot localization
CN108051002A (en) Transport vehicle space-location method and system based on inertia measurement auxiliary vision
CN106200657B (en) A kind of unmanned aerial vehicle (UAV) control method
CN109598242A A novel liveness detection method
CN110516613A (en) A kind of pedestrian track prediction technique under first visual angle
CN110543917B (en) Indoor map matching method by utilizing pedestrian inertial navigation track and video information
CN106780631A (en) A kind of robot closed loop detection method based on deep learning
CN112734808A (en) Trajectory prediction method for vulnerable road users in vehicle driving environment
CN112651374B (en) Future trajectory prediction method based on social information and automatic driving system
CN104700088A (en) Gesture track recognition method based on monocular vision motion shooting
CN105159452A (en) Control method and system based on estimation of human face posture
CN109508686A Human behavior recognition method based on hierarchical feature subspace learning
CN113920254B (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
CN112785564B (en) Pedestrian detection tracking system and method based on mechanical arm
CN113674310B (en) Four-rotor unmanned aerial vehicle target tracking method based on active visual perception
CN102779268B Hand swing motion direction judging method based on directional motion histogram and competition mechanism
Ma et al. Using RGB image as visual input for mapless robot navigation
CN116703968B (en) Visual tracking method, device, system, equipment and medium for target object
CN117831116A (en) Running event detection method based on large model distillation and electronic equipment
CN111126170A (en) Video dynamic object detection method based on target detection and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant