WO2017185503A1 - Target tracking method and device - Google Patents
Target tracking method and device
- Publication number
- WO2017185503A1 (PCT/CN2016/086303)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tracking
- target
- aerial vehicle
- unmanned aerial
- module
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Definitions
- the present invention relates to the field of visual tracking, and in particular, to a target tracking method and apparatus.
- An unmanned aerial vehicle is also commonly referred to as a drone.
- in such cases, current video tracking technology can no longer follow the target; the target is lost, which has a major impact on the flight strategy of the drone.
- the technical problem to be solved by the present invention is how to control the drone to effectively track the target.
- a target tracking method including:
- tracking the tracked target by using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle; and, in a case where the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust its tracking shooting state.
- the method further includes:
- in a case where the tracked target appears again in the shooting field of view, the tracking-learning-detection algorithm continues to track the tracked target.
- the method further includes:
- in a case where the tracked target disappears from the shooting field of view, a state adjustment control command is sent to the unmanned aerial vehicle to adjust its tracking shooting state, including:
- the tracking-learning-detection algorithm is processed by a mobile graphics processing unit.
- tracking the tracked target by using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle includes:
- a detection module detects, in the current frame image and according to the trained target model, a plurality of image regions that match the features of the tracked target;
- a tracking module tracks, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images and, according to that motion state, determines the position of the tracked target in the current frame image from among the image regions determined by the detection module, so as to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
- a learning module uses a PN learning algorithm to determine the latest training samples from the results of the detection module and the tracking module, and updates the target model with those samples.
- a target tracking device including:
- a tracking learning detecting unit configured to track the tracked target by using a tracking learning detection algorithm to determine whether the tracked target is in a shooting field of the unmanned aerial vehicle;
- an adjustment control unit configured to send a state adjustment control command to the unmanned aerial vehicle to adjust a tracking shooting state of the unmanned aerial vehicle in a case where the tracked target disappears from the shooting field of view.
- the tracking learning detecting unit is further configured to continue tracking the tracked target by using the tracking learning detection algorithm if the tracked target appears again in the shooting field of view.
- the method further includes:
- the adjustment control unit is further configured to determine that the current tracking has failed when the time interval during which the tracked target has disappeared from the shooting field of view exceeds a set time interval.
- the adjustment control unit includes:
- a rotation control module configured to send a rotation control command to the pan/tilt control module of the unmanned aerial vehicle to adjust a rotation angle of the pan/tilt of the unmanned aerial vehicle;
- a flight control module configured to send a flight control command to the flight control module of the unmanned aerial vehicle to adjust a flight motion of the unmanned aerial vehicle.
- the tracking-learning-detection algorithm is executed by a mobile graphics processing unit.
- the tracking learning detection unit includes:
- a detection module configured to detect, in the current frame image and according to the trained target model, a plurality of image regions that match the features of the tracked target;
- a tracking module connected to the detection module and configured to track, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images and, according to that motion state, determine the position of the tracked target in the current frame image from among the image regions determined by the detection module, so as to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
- a learning module connected to the detection module and the tracking module, respectively, and configured to determine the latest training samples from their results using a PN learning algorithm and to update the target model with those samples.
- by carrying the tracking-learning-detection algorithm on the drone's control platform, real-time and effective tracking of the shooting target can be achieved. If the target temporarily disappears, the drone continues shooting; when the target appears again, tracking resumes.
- FIG. 1 is a schematic diagram showing a target tracking method according to an embodiment of the present invention
- FIG. 2 is a block diagram showing a TLD algorithm in a target tracking method according to an embodiment of the present invention
- FIG. 3 is a diagram showing an operation mechanism of a TLD algorithm in a target tracking method according to an embodiment of the present invention
- FIG. 4 is a diagram showing an example of the working principle of a learning module in a target tracking method according to an embodiment of the present invention
- FIG. 5 is a schematic diagram of a target tracking device according to an embodiment of the invention.
- FIG. 6 is a block diagram showing the structure of a target tracking device according to an embodiment of the present invention.
- FIG. 1 shows a schematic diagram of a target tracking method in accordance with an embodiment of the present invention.
- the target tracking method may mainly include:
- Step 101: Tracking the tracked target by using a Tracking-Learning-Detection (TLD) algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle (hereinafter referred to as a drone);
- TLD Tracking-Learning-Detection
- Step 102 When the tracked target disappears from the shooting field of view, send a state adjustment control command to the unmanned aerial vehicle to adjust a tracking shooting state of the unmanned aerial vehicle.
- the target tracking method further includes:
- Step 103: If the tracked target appears again in the shooting field of view, the tracking-learning-detection algorithm is used to continue tracking it.
- the target tracking method further includes:
- Step 104: In a case where the time interval during which the tracked target has disappeared from the shooting field of view exceeds a set time interval, determine that the current tracking has failed.
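The timeout rule of step 104 can be sketched as a small per-frame state machine. This is an illustrative sketch only; the class name, method names, and the 2.0 s default are assumptions, not from the patent.

```python
# Sketch of the rule above: tracking is declared failed only after the
# target has been out of view longer than a set time interval.

class DisappearanceTimer:
    def __init__(self, max_gap_s: float = 2.0):
        self.max_gap_s = max_gap_s      # set time interval (assumed value)
        self.gone_since = None          # timestamp when the target vanished

    def update(self, target_visible: bool, now_s: float) -> str:
        """Return 'tracking', 'searching', or 'failed' for this frame."""
        if target_visible:
            self.gone_since = None      # target reappeared: keep tracking
            return "tracking"
        if self.gone_since is None:
            self.gone_since = now_s     # first frame without the target
        if now_s - self.gone_since > self.max_gap_s:
            return "failed"             # gap exceeded the set interval
        return "searching"              # keep shooting near the last position
```

While `update` returns `"searching"`, the drone would keep shooting near the disappearance position (steps 102/103); `"failed"` corresponds to the tracking-failure judgment of step 104.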
- step 101 includes:
- a detection module detects, in the current frame image and according to the trained target model, a plurality of image regions that match the features of the tracked target;
- a tracking module tracks, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images and, according to that motion state, determines the position of the tracked target in the current frame image from among the image regions determined by the detection module, so as to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
- a learning module uses a PN learning algorithm to determine the latest training samples from the results of the detection module and the tracking module, and updates the target model with those samples.
- step 102 includes:
- the TLD algorithm of this embodiment can be run on a mobile graphics processing unit (GPU); exploiting the mobile GPU platform achieves real-time performance and improves speed and efficiency.
- the TLD algorithm will continue to retrieve the target based on the previously determined samples and the features generated by the learning.
- the algorithm will continue to track and re-train the current pose of the target, which in turn will improve the tracking accuracy.
- the user-selected target (the tracked target) is tracked in the video while continuous learning captures the latest appearance features of the target, so that real-time tracking reaches its best state. That is, at the beginning only a single frame of the stationary target may be available, but as the target keeps moving, the system continuously detects it and learns its appearance at different angles, distances, and depths of field, identifying it in real time; after a period of learning, the target can no longer escape.
- the TLD can adopt the overlapping block tracking strategy, and the single block tracking can use the Lucas-Kanade optical flow method.
- the TLD needs to specify the tracked target before tracking, for example, it can be marked by a rectangular box.
- the motion of the final overall target takes the median of all local block movements. This local tracking strategy can solve the problem of partial occlusion.
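Assuming the per-block displacements have already been estimated (for example with the Lucas-Kanade optical flow method mentioned above), the median-aggregation step can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def aggregate_block_motion(displacements):
    """Component-wise median of local block displacements (dx, dy).
    The median tolerates outlier blocks, e.g. blocks on an occluded part."""
    d = np.asarray(displacements, dtype=float)
    return tuple(np.median(d, axis=0))

def move_box(box, displacements):
    """Shift the target bounding box (x, y, w, h) by the median block motion."""
    dx, dy = aggregate_block_motion(displacements)
    x, y, w, h = box
    return (x + dx, y + dy, w, h)
```

Because the median is taken component-wise, a minority of blocks with wildly wrong motion (partial occlusion, local tracking failure) does not corrupt the overall target motion, which is exactly the robustness argument the text makes.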
- the TLD algorithm generally consists of three parts: a tracking module 21, a detection module 22, and a learning module 23.
- the detection module 22 and the tracking module 21 perform complementary processing in parallel.
- the tracking module 21 assumes that the motion of the object between adjacent video frame images is limited and that the tracked target is visible, thereby estimating the motion of the target. If the tracked target disappears in the field of view of the drone's camera, this target tracking will fail.
- the detection module 22 assumes that each video frame image is independent of each other, and according to the previously detected and learned target model, if the tracking fails, a full map search is performed on each frame image to locate an area where the target may appear.
- the detection module in the TLD may also make errors, mainly false negatives (classifying the target image area as a negative sample) and false positives.
- the learning module evaluates these two kinds of detection-module errors according to the result of the tracking module, generates training samples from the evaluation to update the target model of the detection module, and updates the "key feature points" of the tracking module, so that similar errors are avoided in the future.
- from this, it can be judged whether the tracked target is within the drone's shooting field of view: for example, the number of positive- or negative-sample errors is counted, and if it exceeds a set threshold, the features of the tracked target have changed greatly relative to the previous frame, and the target can be judged to be out of the field of view.
- the learning module uses the PN learning algorithm to evaluate the image regions obtained by the tracking module 21 and by the detection module 22, as illustrated by the following example.
- PN learning is a semi-supervised machine learning algorithm. It provides two "experts" to correct the two kinds of errors made by the detection module when classifying samples: the P expert corrects missed detections (false negatives: positive samples wrongly classified as negative), and the N expert corrects false alarms (false positives: negative samples wrongly classified as positive).
- samples are generated by scanning the image line by line with scanning grids of different sizes; each position forms a bounding box, and the image area determined by a bounding box is called an image patch. A patch that enters the machine-learning sample set becomes a sample. The samples produced by scanning are unlabeled and must be classified by a classifier to determine their labels.
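A minimal sketch of the scanning-grid sample generation described above. For brevity it scans a single box size (a real TLD detector scans at multiple scales); the function name and parameters are assumptions.

```python
def scanning_grid(img_w, img_h, box_w, box_h, stride):
    """Slide a fixed-size bounding box over the image line by line.
    Each position yields one bounding box, i.e. one unlabeled patch."""
    boxes = []
    for y in range(0, img_h - box_h + 1, stride):       # scan row by row
        for x in range(0, img_w - box_w + 1, stride):   # then column by column
            boxes.append((x, y, box_w, box_h))
    return boxes
```

Every box returned here is an unlabeled sample; it only acquires a label (positive or negative) once the classifier, corrected by the P and N experts, has processed it.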
- once the tracking module (or tracker) has determined the position of the object in frame t+1 (that is, the bounding box containing the target), 10 bounding boxes closest to that box are filtered out from the boxes produced by the detection module (or detector); "closest" means a small feature difference, with the intersection area of the two bounding boxes divided by their union area greater than 0.7. Each of these bounding boxes then undergoes tiny affine transformations (10% translation, 10% scaling, 10° rotation), producing 20 image patches per box, which yields 200 positive samples.
- the learning module updates the parameters of the classifier with the latest training set (ie, updates the target model).
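The selection-and-warp procedure above (up to 10 overlapping boxes, 20 small affine warps each, up to 200 positive samples) can be sketched as follows. This version only emits warp parameter sets rather than warped pixels, and all names are illustrative assumptions.

```python
import random

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def positive_sample_params(candidates, target_box, rng, n_boxes=10, n_warps=20):
    """Pick the n_boxes candidates overlapping the tracked box most
    (IoU > 0.7), then emit n_warps small random affine-warp parameter
    sets per box: up to 10% shift, 10% scale change, 10 degree rotation."""
    close = sorted((b for b in candidates if iou(b, target_box) > 0.7),
                   key=lambda b: iou(b, target_box), reverse=True)[:n_boxes]
    samples = []
    for box in close:
        for _ in range(n_warps):
            samples.append({
                "box": box,
                "shift": (rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)),
                "scale": rng.uniform(0.9, 1.1),
                "angle_deg": rng.uniform(-10.0, 10.0),
            })
    return samples  # up to 10 boxes x 20 warps = 200 positive samples
```

Applying each parameter set to its box (with any image-warping routine) would produce the warped image patches that the learning module feeds back into the classifier as fresh positive training samples.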
- the role of the P expert is to find the temporal structure of the data. It uses the result of the tracking module (or tracker) to predict the position of the object in frame t+1. If this position (bounding box) is classified as negative by the detection module, the P expert changes it to positive. In other words, the P expert ensures that the positions where the object appears in consecutive frames form a continuous trajectory;
- the role of the N expert is to find the spatial structure of the data. It compares all positive samples generated by the detection module and the P expert, selects the most reliable position, guarantees that the object appears in only one location, and uses this location as the tracking result of the TLD algorithm. This location is also used to reinitialize the tracking module.
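One P/N correction step as described above can be sketched like this. It is a hedged sketch: `classify` is an assumed stand-in for the current classifier, returning a binary label plus a model-similarity score (in TLD the similarity to the object model can be high even when the binary classifier misses).

```python
def pn_correct(detections, tracker_box, classify):
    """One PN-learning correction step (sketch).
    detections: candidate boxes from the detector.
    tracker_box: position predicted by the tracker for this frame.
    classify: box -> (label, similarity score to the object model).
    Returns (output_box, relabeled), where relabeled maps boxes to
    corrected labels for retraining the classifier."""
    relabeled = {}
    # P expert: the tracker's trajectory must stay positive. If the
    # classifier calls the tracked position negative, flip it to positive.
    label, _ = classify(tracker_box)
    if label == "neg":
        relabeled[tracker_box] = "pos"
    # N expert: the object exists in at most one place. Keep the most
    # reliable positive; every other positive becomes a negative.
    positives = [tracker_box] + [b for b in detections
                                 if classify(b)[0] == "pos"]
    best = max(positives, key=lambda b: classify(b)[1])
    for b in positives:
        if b is not best:
            relabeled[b] = "neg"
    return best, relabeled
```

In the dark-car example that follows, the tracker's box (the dark car) is the most reliable sample, so the P expert rescues it from a missed detection and the N expert demotes the light-colored car to a negative sample.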
- the target vehicle is the dark car below.
- the black frame in each frame is the positive sample detected by the detection module
- the white frame is the positive sample generated by the tracking module
- the asterisk marks the final tracking result in each frame.
- in one frame, the detection module fails to find the dark car, but the P expert, based on the tracking module's result, still treats the dark car as a positive sample. After comparison, the N expert judges the dark-car sample to be the more reliable one, so the light-colored car is output as a negative sample.
- the process for frame t+1 is similar to that for frame t. In frame t+2, the P expert produces a wrong result, but the N expert's comparison excludes it, and the algorithm can still track the correct vehicle.
- a rotation control command can be sent to the drone's gimbal to rotate it by a certain angle and continue shooting, so that the target may be captured again near the position where it disappeared.
- the pan/tilt rotation angle may be determined according to a target motion state or a motion trajectory determined by the tracking module.
- the flight control command corresponding to which flight action is specifically transmitted may also be determined according to the target motion state or motion trajectory determined by the tracking module.
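A minimal sketch of turning the last observed motion state into a gimbal pan/tilt command. The linear pixel-to-angle mapping, the field-of-view values, and all names are illustrative assumptions, not from the patent.

```python
def gimbal_rotation_for_reacquire(last_pos, velocity, frame_size,
                                  hfov_deg=90.0, vfov_deg=60.0,
                                  lookahead=1.0):
    """Predict where the target left the frame from its last position and
    per-frame velocity, then return (pan_deg, tilt_deg) to point the
    gimbal there, assuming a simple linear pixel-to-angle mapping."""
    w, h = frame_size
    # extrapolate the trajectory determined by the tracking module
    px = last_pos[0] + velocity[0] * lookahead
    py = last_pos[1] + velocity[1] * lookahead
    # offset from the image centre, normalised to [-0.5, 0.5] per axis
    nx = (px - w / 2) / w
    ny = (py - h / 2) / h
    return nx * hfov_deg, ny * vfov_deg
```

A target that exited near the right edge, moving right, yields a positive pan angle, so the gimbal (or, equivalently, a flight command) turns toward the predicted exit point and keeps shooting there.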
- the tracking module of the TLD will continue to track the target, and use the currently captured video stream to re-train the current posture of the target, thereby improving the accuracy of subsequent tracking.
- the TLD algorithm is used for video tracking, and tracking, detection, and identification can be combined, and real-time tracking of the target can be realized on a control platform that can be mounted on the drone, such as the TEGRA platform.
- on the TEGRA platform, the excellent parallel computing power of the TEGRA GPU makes the algorithm's training and tracking markedly faster.
- CUDA (Compute Unified Device Architecture)
- the TLD algorithm is used to track the target in the video, and the target can be continuously tracked when the target is occluded.
- using machine-learning principles together with the parallel computing power of a mobile GPU improves computing efficiency and accuracy.
- FIG. 5 shows a schematic diagram of a target tracking device in accordance with an embodiment of the present invention.
- the target tracking device may mainly include:
- the tracking learning detecting unit 41 is configured to track the tracked target by using a tracking learning detection algorithm to determine whether the tracked target is in a shooting field of the unmanned aerial vehicle;
- the adjustment control unit 42 is connected to the tracking learning detecting unit 41 and is configured to send a state adjustment control command to the unmanned aerial vehicle to adjust its tracking shooting state when the tracked target disappears from the shooting field of view.
- the tracking learning detecting unit 41 is further configured to continue tracking the tracked target by using the tracking-learning-detection algorithm if the tracked target appears again in the shooting field of view.
- the adjustment control unit 42 is further configured to determine that the current tracking has failed when the time interval during which the tracked target has disappeared from the shooting field of view exceeds a set time interval.
- the adjustment control unit 42 includes:
- a rotation control module configured to send a rotation control command to the pan/tilt control module of the unmanned aerial vehicle to adjust a rotation angle of the pan/tilt of the unmanned aerial vehicle;
- a flight control module configured to send a flight control command to the flight control module of the unmanned aerial vehicle to adjust a flight motion of the unmanned aerial vehicle.
- the tracking-learning-detection algorithm is implemented by a mobile graphics processing unit.
- the tracking learning detecting unit 41 includes:
- the detecting module 22 is configured to detect, in the current frame image, a plurality of image regions that are consistent with the tracked target feature according to the target model that has been trained;
- the tracking module 21 is connected to the detecting module 22 and is configured to track, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images and, according to that motion state, determine the position of the tracked target in the current frame image from among the image regions determined by the detection module, so as to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
- the learning module 23 is connected to the detecting module 22 and the tracking module 21, respectively, and is configured to determine the latest training samples from their results using the PN learning algorithm and to update the target model with those samples.
- the learning module 23 may update the training samples according to the results of the detecting module 22 and the tracking module 21, thereby updating the target model used by the detecting module. If it is determined that the tracked target is not within the field of view, a control command can be sent to the drone to cause the drone to continue shooting near the target disappearance position, thereby continuing to track the target when the target reappears.
- FIG. 6 is a block diagram showing the structure of a target tracking device according to an embodiment of the present invention.
- the target tracking device 1100 may be a host server with computing capability, a personal computer (PC), or a portable computer or terminal.
- the specific embodiments of the present invention do not limit the specific implementation of the computing node.
- the target tracking device 1100 includes a processor 1110, a communications interface 1120, a memory 1130, and a bus 1140.
- the processor 1110, the communication interface 1120, and the memory 1130 complete communication with each other through the bus 1140.
- Communication interface 1120 is for communicating with network devices, including, for example, a virtual machine management center, shared storage, and the like.
- the processor 1110 is configured to execute a program.
- the processor 1110 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
- ASIC Application Specific Integrated Circuit
- the memory 1130 is used to store files.
- the memory 1130 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
- Memory 1130 can also be a memory array.
- the memory 1130 may also be partitioned, and the blocks may be combined into a virtual volume according to certain rules.
- the above program may be program code including computer operating instructions.
- the program is specifically applicable to: implementing the operations of the steps in the method embodiment.
- when the function is implemented in the form of computer software and sold or used as a stand-alone product, all or part of the technical solution of the present invention (for example, the part contributing over the prior art) can be regarded as being embodied in the form of a computer software product.
- such a computer software product is typically stored in a computer-readable non-volatile storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods in the various embodiments of the present invention.
- the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
- by carrying the tracking-learning-detection algorithm on the drone's control platform, real-time and effective tracking of the shooting target can be achieved. If the target temporarily disappears, the drone continues shooting; when the target appears again, tracking resumes.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (12)
- A target tracking method, comprising: tracking a tracked target by using a tracking-learning-detection algorithm to judge whether the tracked target is within a shooting field of view of an unmanned aerial vehicle; and, in a case where the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust a tracking shooting state of the unmanned aerial vehicle.
- The method according to claim 1, further comprising: in a case where the tracked target appears again in the shooting field of view, continuing to track the tracked target by using the tracking-learning-detection algorithm.
- The method according to claim 1, further comprising: in a case where the time interval during which the tracked target has disappeared from the shooting field of view exceeds a set time interval, determining that the current tracking has failed.
- The method according to any one of claims 1 to 3, wherein, in the case where the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle comprises: sending a rotation control command to a gimbal control module of the unmanned aerial vehicle to adjust a rotation angle of the gimbal of the unmanned aerial vehicle; or sending a flight control command to a flight control module of the unmanned aerial vehicle to adjust a flight action of the unmanned aerial vehicle.
- The method according to any one of claims 1 to 3, wherein the tracking-learning-detection algorithm is processed by a mobile graphics processing unit.
- The method according to any one of claims 1 to 3, wherein tracking the captured tracked target by using the tracking-learning-detection algorithm to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle comprises: a detection module detecting, in a current frame image and according to a trained target model, a plurality of image regions that match features of the tracked target; a tracking module tracking, in the captured video stream of the tracked target, a motion state of the tracked target between consecutive frame images and, according to the motion state, determining a position of the tracked target in the current frame image from among the plurality of image regions determined by the detection module, so as to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle; and a learning module determining latest training samples from results of the detection module and the tracking module by using a PN learning algorithm, and updating the target model with the latest training samples.
- A target tracking device, comprising: a tracking-learning-detection unit configured to track a tracked target by using a tracking-learning-detection algorithm to judge whether the tracked target is within a shooting field of view of an unmanned aerial vehicle; and an adjustment control unit configured to, in a case where the tracked target disappears from the shooting field of view, send a state adjustment control command to the unmanned aerial vehicle to adjust a tracking shooting state of the unmanned aerial vehicle.
- The device according to claim 7, wherein the tracking-learning-detection unit is further configured to continue tracking the tracked target by using the tracking-learning-detection algorithm in a case where the tracked target appears again in the shooting field of view.
- The device according to claim 7, wherein the adjustment control unit is further configured to determine that the current tracking has failed in a case where the time interval during which the tracked target has disappeared from the shooting field of view exceeds a set time interval.
- The device according to any one of claims 7 to 9, wherein the adjustment control unit comprises: a rotation control module configured to send a rotation control command to a gimbal control module of the unmanned aerial vehicle to adjust a rotation angle of the gimbal of the unmanned aerial vehicle; and/or a flight control module configured to send a flight control command to a flight control module of the unmanned aerial vehicle to adjust a flight action of the unmanned aerial vehicle.
- The device according to any one of claims 7 to 9, wherein the tracking-learning-detection algorithm is executed by a mobile graphics processing unit.
- The device according to any one of claims 7 to 9, wherein the tracking-learning-detection unit comprises: a detection module configured to detect, in a current frame image and according to a trained target model, a plurality of image regions that match features of the tracked target; a tracking module connected to the detection module and configured to track, in the captured video stream of the tracked target, a motion state of the tracked target between consecutive frame images and, according to the motion state, determine a position of the tracked target in the current frame image from among the plurality of image regions determined by the detection module, so as to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle; and a learning module connected to the detection module and the tracking module, respectively, and configured to determine latest training samples from results of the detection module and the tracking module by using a PN learning algorithm, and to update the target model with the latest training samples.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610282383.5 | 2016-04-29 | ||
CN201610282383.5A CN105957109A (zh) | 2016-04-29 | 2016-04-29 | 目标跟踪方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017185503A1 true WO2017185503A1 (zh) | 2017-11-02 |
Family
ID=56913162
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/086303 WO2017185503A1 (zh) | 2016-04-29 | 2016-06-17 | 目标跟踪方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105957109A (zh) |
WO (1) | WO2017185503A1 (zh) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909024A (zh) * | 2017-11-13 | 2018-04-13 | 哈尔滨理工大学 | 基于图像识别和红外避障的车辆跟踪系统、方法及车辆 |
CN107967692A (zh) * | 2017-11-28 | 2018-04-27 | 西安电子科技大学 | 一种基于跟踪学习检测的目标跟踪优化方法 |
CN108447079A (zh) * | 2018-03-12 | 2018-08-24 | 中国计量大学 | 一种基于tld算法框架的目标跟踪方法 |
CN110362095A (zh) * | 2019-08-09 | 2019-10-22 | 大连海事大学 | 一种有限时间收敛无人船协同控制器的设计方法 |
CN111127509A (zh) * | 2018-10-31 | 2020-05-08 | 杭州海康威视数字技术股份有限公司 | 目标跟踪方法、装置和计算机可读存储介质 |
CN111784737A (zh) * | 2020-06-10 | 2020-10-16 | 中国人民解放军军事科学院国防科技创新研究院 | 一种基于无人机平台的目标自动跟踪方法及系统 |
CN111932588A (zh) * | 2020-08-07 | 2020-11-13 | 浙江大学 | 一种基于深度学习的机载无人机多目标跟踪系统的跟踪方法 |
CN111986230A (zh) * | 2019-05-23 | 2020-11-24 | 北京地平线机器人技术研发有限公司 | 一种视频中目标物的姿态跟踪方法及装置 |
CN112102365A (zh) * | 2020-09-23 | 2020-12-18 | 烟台艾睿光电科技有限公司 | 一种基于无人机吊舱的目标跟踪方法及相关装置 |
CN112233141A (zh) * | 2020-09-28 | 2021-01-15 | 国网浙江省电力有限公司杭州供电公司 | 电力场景下基于无人机视觉的运动目标追踪方法及系统 |
CN112365527A (zh) * | 2020-10-15 | 2021-02-12 | 中标慧安信息技术股份有限公司 | 园区内车辆跨镜追踪方法及系统 |
CN113096156A (zh) * | 2021-04-23 | 2021-07-09 | 中国科学技术大学 | 面向自动驾驶的端到端实时三维多目标追踪方法及装置 |
CN113449566A (zh) * | 2020-03-27 | 2021-09-28 | 北京机械设备研究所 | 人在回路的“低慢小”目标智能图像跟踪方法及系统 |
CN114556904A (zh) * | 2020-12-30 | 2022-05-27 | 深圳市大疆创新科技有限公司 | 云台系统的控制方法、控制设备、云台系统和存储介质 |
CN115865939A (zh) * | 2022-11-08 | 2023-03-28 | 燕山大学 | 一种基于边云协同决策的目标检测与追踪系统及方法 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106454108B (zh) * | 2016-11-04 | 2019-05-03 | 北京百度网讯科技有限公司 | 基于人工智能的跟踪拍摄方法、装置和电子设备 |
CN106774398A (zh) * | 2016-12-20 | 2017-05-31 | 北京小米移动软件有限公司 | 航拍方法及装置、无人机 |
CN108537726B (zh) * | 2017-03-03 | 2022-01-04 | 杭州海康威视数字技术股份有限公司 | 一种跟踪拍摄的方法、设备和无人机 |
US10720672B2 (en) | 2017-04-24 | 2020-07-21 | Autel Robotics Co., Ltd | Series-multiple battery pack management system |
EP3534250B1 (en) | 2018-01-18 | 2021-09-15 | Autel Robotics Co., Ltd. | Target detection method and unmanned aerial vehicle |
CN108577980A (zh) * | 2018-02-08 | 2018-09-28 | 南方医科大学南方医院 | 一种对超声刀头进行自动跟踪的方法、系统及装置 |
CN110310300B (zh) * | 2018-03-20 | 2023-09-08 | 腾讯科技(深圳)有限公司 | 一种虚拟环境中的目标跟随拍摄方法及装置、电子设备 |
CN109190676B (zh) * | 2018-08-06 | 2022-11-08 | 百度在线网络技术(北京)有限公司 | 一种用于图像识别的模型训练方法、装置、设备及存储介质 |
CN109785661A (zh) * | 2019-02-01 | 2019-05-21 | 广东工业大学 | 一种基于机器学习的停车引导方法 |
CN117237406A (zh) * | 2022-06-08 | 2023-12-15 | 珠海一微半导体股份有限公司 | 一种机器人视觉跟踪方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060077255A1 (en) * | 2004-08-10 | 2006-04-13 | Hui Cheng | Method and system for performing adaptive image acquisition |
CN1953547A (zh) * | 2006-09-21 | 2007-04-25 | 上海大学 | 无人飞行器对地面移动目标的低空跟踪系统及方法 |
CN102355574A (zh) * | 2011-10-17 | 2012-02-15 | 上海大学 | 机载云台运动目标自主跟踪系统的图像稳定方法 |
CN103838244A (zh) * | 2014-03-20 | 2014-06-04 | 湖南大学 | 基于四轴飞行器的便携式目标跟踪方法及系统 |
CN105279773A (zh) * | 2015-10-27 | 2016-01-27 | 杭州电子科技大学 | 一种基于tld框架的改进型视频跟踪优化方法 |
CN105487552A (zh) * | 2016-01-07 | 2016-04-13 | 深圳一电航空技术有限公司 | 无人机跟踪拍摄的方法及装置 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103149939B (zh) * | 2013-02-26 | 2015-10-21 | 北京航空航天大学 | 一种基于视觉的无人机动态目标跟踪与定位方法 |
CN104408725B (zh) * | 2014-11-28 | 2017-07-04 | 中国航天时代电子公司 | 一种基于tld优化算法的目标重捕获系统及方法 |
CN105424006B (zh) * | 2015-11-02 | 2017-11-24 | 国网山东省电力公司电力科学研究院 | 基于双目视觉的无人机悬停精度测量方法 |
- 2016-04-29: CN CN201610282383.5A, publication CN105957109A, active, Pending
- 2016-06-17: WO PCT/CN2016/086303, publication WO2017185503A1, active, Application Filing
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909024A (zh) * | 2017-11-13 | 2018-04-13 | 哈尔滨理工大学 | 基于图像识别和红外避障的车辆跟踪系统、方法及车辆 |
CN107909024B (zh) * | 2017-11-13 | 2021-11-05 | 哈尔滨理工大学 | 基于图像识别和红外避障的车辆跟踪系统、方法及车辆 |
CN107967692A (zh) * | 2017-11-28 | 2018-04-27 | 西安电子科技大学 | 一种基于跟踪学习检测的目标跟踪优化方法 |
CN108447079A (zh) * | 2018-03-12 | 2018-08-24 | 中国计量大学 | 一种基于tld算法框架的目标跟踪方法 |
CN111127509A (zh) * | 2018-10-31 | 2020-05-08 | 杭州海康威视数字技术股份有限公司 | 目标跟踪方法、装置和计算机可读存储介质 |
CN111127509B (zh) * | 2018-10-31 | 2023-09-01 | 杭州海康威视数字技术股份有限公司 | 目标跟踪方法、装置和计算机可读存储介质 |
CN111986230A (zh) * | 2019-05-23 | 2020-11-24 | 北京地平线机器人技术研发有限公司 | 一种视频中目标物的姿态跟踪方法及装置 |
CN110362095A (zh) * | 2019-08-09 | 2019-10-22 | 大连海事大学 | 一种有限时间收敛无人船协同控制器的设计方法 |
CN110362095B (zh) * | 2019-08-09 | 2022-04-01 | 大连海事大学 | 一种有限时间收敛无人船协同控制器的设计方法 |
CN113449566A (zh) * | 2020-03-27 | 2021-09-28 | 北京机械设备研究所 | 人在回路的“低慢小”目标智能图像跟踪方法及系统 |
CN113449566B (zh) * | 2020-03-27 | 2024-05-07 | 北京机械设备研究所 | 人在回路的“低慢小”目标智能图像跟踪方法及系统 |
CN111784737A (zh) * | 2020-06-10 | 2020-10-16 | 中国人民解放军军事科学院国防科技创新研究院 | 一种基于无人机平台的目标自动跟踪方法及系统 |
CN111932588A (zh) * | 2020-08-07 | 2020-11-13 | 浙江大学 | 一种基于深度学习的机载无人机多目标跟踪系统的跟踪方法 |
CN111932588B (zh) * | 2020-08-07 | 2024-01-30 | 浙江大学 | 一种基于深度学习的机载无人机多目标跟踪系统的跟踪方法 |
CN112102365A (zh) * | 2020-09-23 | 2020-12-18 | 烟台艾睿光电科技有限公司 | 一种基于无人机吊舱的目标跟踪方法及相关装置 |
CN112102365B (zh) * | 2020-09-23 | 2024-05-31 | 烟台艾睿光电科技有限公司 | 一种基于无人机吊舱的目标跟踪方法及相关装置 |
CN112233141B (zh) * | 2020-09-28 | 2022-10-14 | 国网浙江省电力有限公司杭州供电公司 | 电力场景下基于无人机视觉的运动目标追踪方法及系统 |
CN112233141A (zh) * | 2020-09-28 | 2021-01-15 | 国网浙江省电力有限公司杭州供电公司 | 电力场景下基于无人机视觉的运动目标追踪方法及系统 |
CN112365527A (zh) * | 2020-10-15 | 2021-02-12 | 中标慧安信息技术股份有限公司 | 园区内车辆跨镜追踪方法及系统 |
CN114556904A (zh) * | 2020-12-30 | 2022-05-27 | 深圳市大疆创新科技有限公司 | 云台系统的控制方法、控制设备、云台系统和存储介质 |
CN113096156A (zh) * | 2021-04-23 | 2021-07-09 | 中国科学技术大学 | 面向自动驾驶的端到端实时三维多目标追踪方法及装置 |
CN113096156B (zh) * | 2021-04-23 | 2024-05-24 | 中国科学技术大学 | 面向自动驾驶的端到端实时三维多目标追踪方法及装置 |
CN115865939A (zh) * | 2022-11-08 | 2023-03-28 | 燕山大学 | 一种基于边云协同决策的目标检测与追踪系统及方法 |
CN115865939B (zh) * | 2022-11-08 | 2024-05-10 | 燕山大学 | 一种基于边云协同决策的目标检测与追踪系统及方法 |
Also Published As
Publication number | Publication date |
---|---|
CN105957109A (zh) | 2016-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017185503A1 (zh) | 目标跟踪方法和装置 | |
US10818028B2 (en) | Detecting objects in crowds using geometric context | |
US11645765B2 (en) | Real-time visual object tracking for unmanned aerial vehicles (UAVs) | |
Wang et al. | Development of UAV-based target tracking and recognition systems | |
US10510157B2 (en) | Method and apparatus for real-time face-tracking and face-pose-selection on embedded vision systems | |
KR102126513B1 (ko) | 카메라의 포즈를 판단하기 위한 장치 및 방법 | |
US9760791B2 (en) | Method and system for object tracking | |
Huang et al. | Bridging the gap between detection and tracking: A unified approach | |
Kart et al. | How to make an rgbd tracker? | |
Bagautdinov et al. | Probability occupancy maps for occluded depth images | |
JP2018522348A (ja) | センサーの3次元姿勢を推定する方法及びシステム | |
US11508157B2 (en) | Device and method of objective identification and driving assistance device | |
JP7272024B2 (ja) | 物体追跡装置、監視システムおよび物体追跡方法 | |
US11810311B2 (en) | Two-stage depth estimation machine learning algorithm and spherical warping layer for equi-rectangular projection stereo matching | |
Xing et al. | DE‐SLAM: SLAM for highly dynamic environment | |
CN112184757A (zh) | 运动轨迹的确定方法及装置、存储介质、电子装置 | |
CN112562159B (zh) | 一种门禁控制方法、装置、计算机设备和存储介质 | |
Haggui et al. | Human detection in moving fisheye camera using an improved YOLOv3 framework | |
Wang et al. | Object as query: Lifting any 2d object detector to 3d detection | |
Zhang et al. | A novel efficient method for abnormal face detection in ATM | |
Le et al. | Human detection and tracking for autonomous human-following quadcopter | |
Jain et al. | Fusion-driven deep feature network for enhanced object detection and tracking in video surveillance systems | |
Xing et al. | Computationally efficient RGB-t UAV detection and tracking system | |
CN117294831B (zh) | 时间校准方法、装置、计算机设备、存储介质 | |
Fan et al. | PSiamRML: Target Recognition and Matching Integrated Localization Algorithm Based on Pseudo‐Siamese Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| NENP | Non-entry into the national phase | Ref country code: DE |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16900013; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/03/2019) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16900013; Country of ref document: EP; Kind code of ref document: A1 |