WO2017185503A1 - Target tracking method and apparatus - Google Patents

Target tracking method and apparatus

Info

Publication number
WO2017185503A1
Authority
WO
WIPO (PCT)
Prior art keywords
tracking
target
aerial vehicle
unmanned aerial
module
Prior art date
Application number
PCT/CN2016/086303
Other languages
English (en)
French (fr)
Inventor
高鹏
Original Assignee
高鹏
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 高鹏
Publication of WO2017185503A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Definitions

  • The present invention relates to the field of visual tracking, and in particular to a target tracking method and apparatus.
  • During flight, an unmanned aerial vehicle (also commonly referred to as a drone) can track a target that the user selects on a handheld client and derive the drone's flight strategy from the tracked target.
  • Currently, when the target is occluded by trees, buildings, flags or similar objects and then reappears, existing video tracking technology cannot continue tracking it; the target is lost, which has a considerable impact on the drone's flight strategy.
  • The technical problem to be solved by the present invention is how to control the drone so that it tracks the target effectively.
  • According to an embodiment of the present invention, a target tracking method is provided, including:
  • tracking a tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle; and, when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
  • In a possible implementation, the method further includes:
  • when the tracked target reappears in the shooting field of view, continuing to track the tracked target using the tracking-learning-detection algorithm.
  • In a possible implementation, the method further includes: when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval, judging that the current tracking has failed.
  • In a possible implementation, when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle includes: sending a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of its gimbal, or sending a flight control command to the flight control module of the unmanned aerial vehicle to adjust its flight maneuver.
  • In a possible implementation, the tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
  • In a possible implementation, tracking the photographed tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle includes:
  • a detection module detecting, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
  • a tracking module tracking, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and determining, according to that motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
  • a learning module using a PN learning algorithm to determine the latest training samples according to the results of the detection module and the tracking module, and updating the target model with the latest training samples.
  • According to another embodiment of the present invention, a target tracking apparatus is provided, including:
  • a tracking-learning-detection unit configured to track the tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle;
  • an adjustment control unit configured to send, when the tracked target disappears from the shooting field of view, a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
  • The tracking-learning-detection unit is further configured to continue tracking the tracked target using the tracking-learning-detection algorithm when the tracked target reappears in the shooting field of view.
  • In a possible implementation,
  • the adjustment control unit is further configured to judge that the current tracking has failed when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval.
  • In a possible implementation, the adjustment control unit includes:
  • a rotation control module configured to send a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; and/or
  • a flight control module configured to send a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
  • In a possible implementation, the tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
  • In a possible implementation, the tracking-learning-detection unit includes:
  • a detection module configured to detect, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
  • a tracking module, connected to the detection module, configured to track, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and to determine, according to that motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
  • a learning module, connected to the detection module and the tracking module respectively, configured to determine the latest training samples according to the results of the detection module and the tracking module using a PN learning algorithm, and to update the target model with the latest training samples.
  • By deploying the tracking-learning-detection algorithm on the control platform of a drone, real-time, effective tracking of the shooting target by the drone can be achieved. When the target temporarily disappears, the drone can keep shooting, and if the target reappears, tracking can continue.
  • FIG. 1 is a schematic diagram showing a target tracking method according to an embodiment of the present invention;
  • FIG. 2 is a block diagram showing the TLD algorithm in a target tracking method according to an embodiment of the present invention;
  • FIG. 3 is a diagram showing the operating mechanism of the TLD algorithm in a target tracking method according to an embodiment of the present invention;
  • FIG. 4 is a diagram showing an example of the working principle of the learning module in a target tracking method according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of a target tracking apparatus according to an embodiment of the present invention;
  • FIG. 6 is a structural block diagram of a target tracking device according to an embodiment of the present invention.
  • FIG. 1 shows a schematic diagram of a target tracking method in accordance with an embodiment of the present invention.
  • As shown in FIG. 1, the target tracking method may mainly include:
  • Step 101: tracking the tracked target using a Tracking-Learning-Detection (TLD) algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle (a drone for short);
  • Step 102: when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
  • In a possible implementation, the target tracking method further includes:
  • Step 103: when the tracked target reappears in the shooting field of view, continuing to track the tracked target using the tracking-learning-detection algorithm.
  • In a possible implementation, the target tracking method further includes:
  • Step 104: when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval, judging that the current tracking has failed.
  • In a possible implementation, step 101 includes:
  • the detection module detecting, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
  • the tracking module tracking, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and determining, according to that motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
  • the learning module using a PN learning algorithm to determine the latest training samples according to the results of the detection module and the tracking module, and updating the target model with the latest training samples.
  • In a possible implementation, step 102 includes: sending a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of its gimbal; or sending a flight control command to the flight control module to adjust the drone's flight maneuver.
  • Specifically, the TLD algorithm of this embodiment can be executed on a mobile graphics processing unit (GPU). Exploiting the advantages of the mobile GPU platform achieves real-time performance and improves running speed and efficiency.
  • When the target is occluded or disappears from the current picture, the TLD algorithm keeps searching for the target based on the previously determined samples and the features generated by learning.
  • When the target reappears in the field of view, the algorithm continues tracking and retrains on the target's current pose, which in turn improves tracking accuracy.
  • For example, on the drone's Tegra X1 platform, the user-selected target (the tracked target) is tracked in the video and continuously learned, so that the latest appearance features of the target are obtained and real-time tracking reaches its best state. That is, only a single frame of the stationary target image needs to be provided at the start; as the target keeps moving, the system keeps detecting, learns how the target changes in angle, distance, depth of field and so on, and recognizes it in real time. After a period of learning, the target can no longer escape.
  • TLD can adopt an overlapping-block tracking strategy, and each individual block can be tracked with the Lucas-Kanade optical flow method.
  • TLD requires the tracked target to be specified before tracking, for example by marking it with a rectangular box.
  • The motion of the overall target is taken as the median of all local block motions; this local tracking strategy can handle partial occlusion.
  • The TLD algorithm generally consists of three parts: a tracking module 21, a detection module 22 and a learning module 23.
  • The detection module 22 and the tracking module 21 run in parallel and complement each other.
  • The tracking module 21 assumes that the motion of the object between adjacent video frames is limited and that the tracked target is visible, and estimates the target's motion on that basis. If the tracked target disappears from the field of view of the drone's camera, this tracking attempt fails.
  • The detection module 22 assumes that the video frames are independent of one another and, based on the target model detected and learned so far, performs a full-image search on each frame to localize the regions where the target may appear if tracking fails.
  • The detection module in TLD can also make errors, chiefly errors on negative samples and errors on positive samples of the target image region.
  • The learning module evaluates these two kinds of errors of the detection module according to the results of the tracking module, generates training samples from the evaluation to update the target model of the detection module, and updates the "key feature points" of the tracking module, so that similar errors are avoided later.
  • From this evaluation it can also be judged whether the tracked target is within the shooting field of view of the unmanned aerial vehicle. For example, the number of errors on positive or negative samples is counted; if it exceeds a set threshold, the features of the tracked target in this frame have changed greatly compared with the previous frame, and it can be concluded that the tracked target is not within the field of view.
  • The learning module uses the PN learning algorithm to evaluate the first image region obtained by the tracking module 21 and the second image region obtained by the detection module 22; the process is explained with the following example.
  • PN learning is a semi-supervised machine learning algorithm that provides two "experts" to correct the two kinds of errors the detection module makes when classifying samples: the P-expert recovers positive samples that were missed (false negatives, positive samples misclassified as negative); the N-expert corrects positive samples that were falsely detected (false positives, negative samples misclassified as positive).
  • Samples are generated by scanning the image line by line with scanning grids of different sizes, forming a bounding box at each position; the image region determined by a bounding box is called a patch, and a patch becomes a sample once it enters the machine-learning sample set.
  • Samples produced by scanning are unlabeled and must be classified by a classifier to determine their labels.
  • If the tracking module (or tracker) has determined the position of the object in frame t+1 (in practice, the position of the corresponding bounding box, i.e. the bounding box at the target position), then from the bounding boxes produced by the detection module (or detector), the 10 closest to the target's bounding box (smallest feature difference; the intersection area of the two boxes divided by their union area greater than 0.7) are selected, and each is subjected to small affine transformations (translation within 10%, scaling within 10%, rotation within 10°), producing 20 image patches per box and thus 200 positive samples. Bounding boxes far from the target's bounding box (overlap ratio below 0.2) are selected in the same way to produce negative samples.
  • The learning module updates the parameters of the classifier with the latest training set (i.e., it updates the target model).
  • The role of the P-expert is to find the temporal structure of the data. It uses the result of the tracking module (or tracker) to predict the position of the object in frame t+1. If this position (bounding box) is classified as negative by the detection module, the P-expert changes it to positive. In other words, the P-expert ensures that the positions where the object appears in consecutive frames form a continuous trajectory.
  • The role of the N-expert is to find the spatial structure of the data. It compares all the positive samples produced by the detection module and by the P-expert, selects the single most credible position, guaranteeing that the object appears in at most one position, and takes this position as the tracking result of the TLD algorithm. This position is also used to reinitialize the tracking module.
  • In FIG. 4, the target vehicle is the dark car at the bottom. In each frame, the black box is a positive sample detected by the detection module, the white box is a positive sample produced by the tracking module, and the asterisk marks the final tracking result of that frame.
  • In frame t, the detection module did not find the dark car, but the P-expert, based on the result of the tracking module, considered the dark car a positive sample as well; the N-expert, after comparison, considered the dark-car sample more credible, so the light-colored car was output as a negative sample.
  • The process in frame t+1 is similar to frame t. In frame t+2, the P-expert produced a wrong result, but after the N-expert's comparison this result was ruled out, and the algorithm could still track the correct vehicle.
  • When the target is occluded or disappears from the current frame, a rotation control command can be sent to the drone's gimbal to rotate the gimbal by a certain angle and keep shooting, so that the target may be captured again not far from the position where it disappeared.
  • The gimbal rotation angle can be determined from the target motion state or motion trajectory determined by the tracking module.
  • Which flight maneuver the transmitted flight control command corresponds to can likewise be determined from the target motion state or motion trajectory determined by the tracking module.
  • When the target reappears in the drone's shooting field of view, the TLD tracking module continues to track the target and uses the currently captured video stream to retrain on the target's current pose, thereby improving the accuracy of subsequent tracking.
  • This embodiment uses the TLD algorithm for video tracking, combining tracking, detection and recognition, and can achieve real-time tracking of the target on a control platform that can be mounted on the drone, such as the TEGRA platform.
  • Thanks to the excellent parallel computing power of the TEGRA GPU, training and tracking of the algorithm become markedly faster; on the TEGRA X1, for example, the multithreading advantages of CUDA (Compute Unified Device Architecture) can be exploited by decomposing each frame into multiple blocks and performing detection and learning on each block separately.
  • The TLD algorithm is used to track the target in the video, and tracking can continue when the target is occluded.
  • Using machine-learning principles and the parallel-computing advantages of a mobile GPU improves computational efficiency and accuracy.
  • FIG. 5 shows a schematic diagram of a target tracking apparatus in accordance with an embodiment of the present invention.
  • As shown in FIG. 5, the target tracking apparatus may mainly include:
  • a tracking-learning-detection unit 41 configured to track the tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
  • an adjustment control unit 42, connected to the tracking-learning-detection unit 41, configured to send, when the tracked target disappears from the shooting field of view, a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
  • The tracking-learning-detection unit 41 is further configured to continue tracking the tracked target using the tracking-learning-detection algorithm when the tracked target reappears in the shooting field of view.
  • The adjustment control unit 42 is further configured to judge that the current tracking has failed when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval.
  • The adjustment control unit 42 includes:
  • a rotation control module configured to send a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; and/or
  • a flight control module configured to send a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
  • The tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
  • The tracking-learning-detection unit 41 includes:
  • a detection module 22 configured to detect, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
  • a tracking module 21, connected to the detection module 22, configured to track, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and to determine, according to that motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
  • a learning module 23, connected to the detection module 22 and the tracking module 21 respectively, configured to determine the latest training samples according to the results of the detection module 22 and the tracking module 21 using the PN learning algorithm, and to update the target model with the latest training samples.
  • The learning module 23 can update the training samples according to the results of the detection module 22 and the tracking module 21, and thereby update the target model used by the detection module. If the tracked target is judged not to be within the field of view, a control command can be sent to the drone so that it keeps shooting near the position where the target disappeared, allowing tracking to continue when the target reappears.
  • FIG. 6 is a structural block diagram of a target tracking device according to an embodiment of the present invention.
  • The target tracking device 1100 may be a host server with computing capability, a personal computer (PC), or a portable computer or terminal.
  • The specific embodiments of the present invention do not limit the concrete implementation of the computing node.
  • The target tracking device 1100 includes a processor 1110, a communications interface 1120, a memory 1130 and a bus 1140.
  • The processor 1110, the communications interface 1120 and the memory 1130 communicate with one another through the bus 1140.
  • The communications interface 1120 is used to communicate with network devices, including, for example, a virtual machine management center and shared storage.
  • The processor 1110 is configured to execute a program.
  • The processor 1110 may be a central processing unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • The memory 1130 is used to store files.
  • The memory 1130 may include high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
  • The memory 1130 may also be a memory array.
  • The memory 1130 may also be partitioned into blocks, and the blocks may be combined into virtual volumes according to certain rules.
  • The above program may be program code comprising computer operating instructions.
  • The program may specifically be used to implement the operations of the steps in the method embodiment.
  • If the functions are implemented in the form of computer software and sold or used as a stand-alone product, then to some extent all or part of the technical solution of the present invention (for example, the part that contributes over the prior art) can be considered to be embodied in the form of a computer software product.
  • The computer software product is typically stored in a computer-readable non-volatile storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods of the embodiments of the present invention.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
  • By deploying the tracking-learning-detection algorithm on the control platform of a drone, real-time, effective tracking of the shooting target by the drone can be achieved. When the target temporarily disappears, the drone can keep shooting, and if the target reappears, tracking can continue.

Abstract

A target tracking method and apparatus. The target tracking method includes: tracking a tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle (101); and, when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle (102). By deploying the tracking-learning-detection algorithm on the control platform of a drone, real-time, effective tracking of the shooting target by the drone can be achieved.

Description

Target tracking method and apparatus
Cross-reference
This application claims priority to Chinese Patent Application No. 201610282383.5, filed on April 29, 2016, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of visual tracking, and in particular to a target tracking method and apparatus.
Background
During flight, an unmanned aerial vehicle (usually also simply called a drone) can track a target that the user selects on a handheld client and derive the drone's flight strategy from the tracked target.
Currently, when the target is occluded by trees, buildings, flags or similar objects and then reappears, existing video tracking technology cannot continue tracking it; the target is lost, which has a considerable impact on the drone's flight strategy.
Summary of the invention
Technical problem
In view of this, the technical problem to be solved by the present invention is how to control a drone so that it tracks a target effectively.
Solution
To solve the above technical problem, according to an embodiment of the present invention, a target tracking method is provided, including:
tracking a tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle;
when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
In a possible implementation, the above method further includes:
when the tracked target reappears in the shooting field of view, continuing to track the tracked target using the tracking-learning-detection algorithm.
In a possible implementation, the above method further includes:
when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval, judging that the current tracking has failed.
In a possible implementation of the above method, when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle includes:
sending a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; or
sending a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
In a possible implementation of the above method, the tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
In a possible implementation of the above method, tracking the photographed tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle includes:
a detection module detecting, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
a tracking module tracking, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and determining, according to the motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
a learning module using a PN learning algorithm to determine the latest training samples according to the results of the detection module and the tracking module, and updating the target model with the latest training samples.
According to another embodiment of the present invention, a target tracking apparatus is also provided, including:
a tracking-learning-detection unit configured to track a tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle;
an adjustment control unit configured to send, when the tracked target disappears from the shooting field of view, a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
In a possible implementation of the above apparatus,
the tracking-learning-detection unit is further configured to continue tracking the tracked target using the tracking-learning-detection algorithm when the tracked target reappears in the shooting field of view.
In a possible implementation of the above apparatus,
the adjustment control unit is further configured to judge that the current tracking has failed when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval.
In a possible implementation of the above apparatus, the adjustment control unit includes:
a rotation control module configured to send a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; and/or
a flight control module configured to send a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
In a possible implementation of the above apparatus, the tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
In a possible implementation of the above apparatus, the tracking-learning-detection unit includes:
a detection module configured to detect, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
a tracking module, connected to the detection module, configured to track, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and to determine, according to the motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
a learning module, connected to the detection module and the tracking module respectively, configured to determine the latest training samples according to the results of the detection module and the tracking module using a PN learning algorithm, and to update the target model with the latest training samples.
Beneficial effects
By deploying the tracking-learning-detection algorithm on the control platform of a drone, real-time, effective tracking of the shooting target by the drone can be achieved. When the target temporarily disappears, the drone can keep shooting, and if the target reappears, tracking can continue.
Other features and aspects of the present invention will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the present invention together with the specification, and serve to explain the principles of the present invention.
Figure 1 shows a schematic diagram of a target tracking method according to an embodiment of the present invention;
Figure 2 shows a block diagram of the TLD algorithm in a target tracking method according to an embodiment of the present invention;
Figure 3 shows a diagram of the operating mechanism of the TLD algorithm in a target tracking method according to an embodiment of the present invention;
Figure 4 shows an example of the working principle of the learning module in a target tracking method according to an embodiment of the present invention;
Figure 5 shows a schematic diagram of a target tracking apparatus according to an embodiment of the present invention;
Figure 6 shows a structural block diagram of a target tracking device according to an embodiment of the present invention.
Detailed description of embodiments
Various exemplary embodiments, features and aspects of the present invention are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings need not be drawn to scale unless otherwise indicated.
The word "exemplary" is used here to mean "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as preferred over or superior to other embodiments.
In addition, numerous specific details are given in the following detailed description in order to better illustrate the present invention. Those skilled in the art will understand that the present invention can also be practiced without certain of these specific details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present invention.
Figure 1 shows a schematic diagram of a target tracking method according to an embodiment of the present invention. As shown in Figure 1, the target tracking method may mainly include:
Step 101: tracking the tracked target using a Tracking-Learning-Detection (TLD) algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle (a drone for short);
Step 102: when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
In a possible implementation, the target tracking method further includes:
Step 103: when the tracked target reappears in the shooting field of view, continuing to track the tracked target using the tracking-learning-detection algorithm.
In a possible implementation, the target tracking method further includes:
Step 104: when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval, judging that the current tracking has failed.
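Steps 102 to 104 amount to a simple supervisory loop. The following is a minimal sketch only, assuming a hypothetical send_state_adjustment callback and an arbitrary 5-second default for the set time interval; neither name nor value comes from the patent:

```python
import time

class TrackingSupervisor:
    """Sketch of steps 102-104: per frame, decide whether to keep tracking,
    keep adjusting the drone's shooting state, or declare tracking failed."""

    def __init__(self, send_state_adjustment, max_lost_seconds=5.0):
        # send_state_adjustment: assumed callback that forwards a state
        # adjustment control command (gimbal rotation or flight maneuver).
        self.send_state_adjustment = send_state_adjustment
        self.max_lost_seconds = max_lost_seconds   # the "set time interval"
        self.lost_since = None                     # None while target visible

    def on_frame(self, target_visible):
        if target_visible:
            self.lost_since = None         # step 103: reappeared, keep tracking
            return "tracking"
        if self.lost_since is None:
            self.lost_since = time.monotonic()
        if time.monotonic() - self.lost_since > self.max_lost_seconds:
            return "failed"                # step 104: set interval exceeded
        self.send_state_adjustment()       # step 102: adjust and keep shooting
        return "searching"
```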
In a possible implementation, step 101 includes:
the detection module detecting, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
the tracking module tracking, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and determining, according to the motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
the learning module using a PN learning algorithm to determine the latest training samples according to the results of the detection module and the tracking module, and updating the target model with the latest training samples.
In a possible implementation, step 102 includes:
sending a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; or
sending a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
Specifically, the TLD algorithm of this embodiment can be executed on a mobile graphics processing unit (Graphics Processing Unit, GPU). Exploiting the advantages of the mobile GPU platform achieves real-time performance and improves running speed and efficiency. When the target is occluded or disappears from the current picture, the TLD algorithm keeps searching for the target based on the previously determined samples and the features generated by learning. When the target reappears in the field of view, the algorithm continues tracking and retrains on the target's current pose, which in turn improves tracking accuracy.
For example, on the drone's Tegra X1 platform, the user-selected target (the tracked target) is tracked in the video and continuously learned, so that the latest appearance features of the target are obtained and real-time tracking reaches its best state. That is, only a single frame of the stationary target image needs to be provided at the start; as the target keeps moving, the system keeps detecting, learns how the target changes in angle, distance, depth of field and so on, and recognizes it in real time. After a period of learning, the target can no longer escape.
TLD can adopt an overlapping-block tracking strategy, with each individual block tracked by the Lucas-Kanade optical flow method. TLD requires the tracked target to be specified before tracking, for example by marking it with a rectangular box. The motion of the overall target is taken as the median of all local block motions; this local tracking strategy can handle partial occlusion.
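A rough sketch of this block strategy follows, using OpenCV's pyramidal Lucas-Kanade for the per-point flow on grayscale frames; the grid density and the minimum-points threshold are arbitrary choices of the example, not values from the patent:

```python
import cv2
import numpy as np

def median_flow_step(prev_gray, cur_gray, box, grid=10):
    """Track a rectangular box from prev_gray to cur_gray by sampling a grid
    of points inside it, running pyramidal Lucas-Kanade optical flow on each
    point, and taking the median displacement as the motion of the whole box."""
    x, y, w, h = box
    xs = np.linspace(x, x + w, grid)
    ys = np.linspace(y, y + h, grid)
    pts = np.array([[px, py] for py in ys for px in xs], dtype=np.float32)
    pts = pts.reshape(-1, 1, 2)

    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    if ok.sum() < grid:                 # too few points tracked: treat as lost
        return None
    disp = (nxt - pts).reshape(-1, 2)[ok]
    dx = float(np.median(disp[:, 0]))   # median of all local motions
    dy = float(np.median(disp[:, 1]))
    return (x + dx, y + dy, w, h)
```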
Specifically, as shown in Figure 2, the TLD algorithm generally consists of three parts: a tracking module 21, a detection module 22 and a learning module 23.
As shown in Figure 3, the detailed operating mechanism of the TLD algorithm is as follows:
The detection module 22 and the tracking module 21 run in parallel and complement each other. First, the tracking module 21 assumes that the motion of the object between adjacent video frames is limited and that the tracked target is visible, and estimates the motion of the target on that basis. If the tracked target disappears from the field of view of the drone's camera, this tracking attempt fails. The detection module 22 assumes that the video frames are independent of one another and, based on the target model detected and learned so far, performs a full-image search on each frame to localize the regions where the target may appear if tracking fails. The detection module in TLD can also make errors, chiefly errors on negative samples and errors on positive samples of the target image region. The learning module evaluates these two kinds of errors of the detection module according to the results of the tracking module, generates training samples from the evaluation to update the target model of the detection module, and also updates the "key feature points" of the tracking module, so that similar errors are avoided later. In addition, the evaluation can be used to judge whether the tracked target is within the shooting field of view of the unmanned aerial vehicle. For example, the number of errors on positive or negative samples is counted; if it exceeds a set threshold, the features of the tracked target in this frame have changed greatly compared with the previous frame, and it can be concluded that the tracked target is not within the shooting field of view.
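One iteration of this mechanism might be organized as sketched below. Detector, Tracker and Learner are assumed stand-ins for modules 22, 21 and 23, and the fusion rule is indicative only; none of these names come from the patent:

```python
def tld_step(detector, tracker, learner, prev_frame, frame, prev_box):
    """One TLD iteration: the tracker and the detector process the frame
    complementarily, their outputs are fused, and the learner updates the
    target model from the fused result."""
    tracked_box = tracker.track(prev_frame, frame, prev_box)  # None if lost
    detections = detector.detect(frame)                       # candidate regions

    # Fuse: a detection that agrees with the tracker, or failing that the
    # tracker's own estimate, gives the target position for this frame.
    box = learner.fuse(tracked_box, detections)
    if box is not None:
        learner.update(frame, box, detections)  # PN learning on this frame
        return box, True    # target is inside the shooting field of view
    return None, False      # target has disappeared from the field of view
```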
The process in which the learning module uses the PN learning algorithm to evaluate the first image region obtained by the tracking module 21 and the second image region obtained by the detection module 22 is explained with the following example.
P-N learning (P-N Learning). P-N learning is a semi-supervised machine learning algorithm. For the two kinds of errors that the detection module makes when classifying samples, it provides two "experts" for correction: the P-expert recovers positive samples that were missed (false negatives, positive samples misclassified as negative); the N-expert corrects positive samples that were falsely detected (false positives, negative samples misclassified as positive).
Samples are generated as follows: the image is scanned line by line with scanning grids of different sizes, a bounding box is formed at each position, and the image region delimited by a bounding box is called a patch; a patch becomes a sample once it enters the machine-learning sample set. Samples produced by scanning are unlabeled and must be classified by a classifier to determine their labels.
If the tracking module (or tracker) has determined the position of the object in frame t+1 (in practice, the position of the corresponding bounding box, i.e. the bounding box at the target position), then from the many bounding boxes produced by the detection module (or detector), the 10 closest to the target's bounding box (smallest feature difference; the intersection area of the two boxes divided by their union area greater than 0.7) are selected, and each is subjected to small affine transformations (translation within 10%, scaling within 10%, rotation within 10°), producing 20 patches per box and thus 200 positive samples. Several bounding boxes far from the target's bounding box (large feature difference; intersection area divided by union area less than 0.2) are then selected to produce negative samples. The samples generated this way are labeled; they are put into the training set, and the learning module updates the classifier parameters with the latest training set (that is, it updates the target model).
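This sample generation can be sketched as follows. The IoU filter and the jitter magnitudes mirror the numbers above, while the uniform jitter distribution and the patch cropping are assumptions of the example:

```python
import cv2
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def positive_samples(frame, target_box, candidate_boxes, rng=np.random):
    """Keep the 10 candidate boxes with IoU > 0.7 against the target box and
    jitter each with 20 small affine transforms, giving up to 200 positives."""
    close = [b for b in candidate_boxes if iou(b, target_box) > 0.7]
    close = sorted(close, key=lambda b: iou(b, target_box), reverse=True)[:10]
    patches = []
    for (x, y, w, h) in close:
        for _ in range(20):
            # translation within 10%, scaling within 10%, rotation within 10 deg
            angle = rng.uniform(-10, 10)
            scale = rng.uniform(0.9, 1.1)
            tx = rng.uniform(-0.1, 0.1) * w
            ty = rng.uniform(-0.1, 0.1) * h
            M = cv2.getRotationMatrix2D((x + w / 2.0, y + h / 2.0), angle, scale)
            M[0, 2] += tx
            M[1, 2] += ty
            warped = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
            patches.append(warped[int(y):int(y + h), int(x):int(x + w)])
    return patches
```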
The role of the P-expert is to find the temporal structure of the data. It uses the result of the tracking module to predict the position of the object in frame t+1. If this position (bounding box) is classified as negative by the detection module, the P-expert changes it to positive. In other words, the P-expert ensures that the positions where the object appears in consecutive frames form a continuous trajectory.
The role of the N-expert is to find the spatial structure of the data. It compares all the positive samples produced by the detection module and by the P-expert, selects the single most credible position, guaranteeing that the object appears in at most one position, and takes this position as the tracking result of the TLD algorithm. This position is also used to reinitialize the tracking module.
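A minimal sketch of the two experts follows, assuming hypothetical scoring callbacks for detector confidence and box overlap; the names and the 0.5 threshold are illustrative only:

```python
def p_expert(trajectory_box, detector_label):
    """P-expert: temporal structure. If the tracker predicted a box for frame
    t+1 but the detector classified that position as negative, relabel it
    positive so that the target's positions form a continuous trajectory."""
    if trajectory_box is not None and detector_label == "negative":
        return "positive"
    return detector_label

def n_expert(positive_boxes, confidence, overlap, threshold=0.5):
    """N-expert: spatial structure. Keep the single most credible box among
    all positives; positives that do not overlap it become negatives."""
    if not positive_boxes:
        return None, []
    best = max(positive_boxes, key=confidence)   # most credible position
    negatives = [b for b in positive_boxes
                 if b is not best and overlap(b, best) < threshold]
    return best, negatives   # best reinitializes the tracker; negatives retrain
```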
As shown in Figure 4, the target vehicle is the dark car at the bottom. In each frame, the black box is a positive sample detected by the detection module, the white box is a positive sample produced by the tracking module, and the asterisk marks the final tracking result of each frame. In frame t, the detection module did not find the dark car, but the P-expert, based on the result of the tracking module, considered the dark car a positive sample as well; the N-expert, after comparison, considered the dark-car sample more credible, so the light-colored car was output as a negative sample. The process in frame t+1 is similar to frame t. In frame t+2, the P-expert produced a wrong result, but after the N-expert's comparison this result was ruled out, and the algorithm could still track the correct vehicle.
Further, when the target is occluded or disappears from the current frame image (that is, disappears from the shooting field of view), the TLD detection module keeps detecting the target based on the previously determined samples and the features generated by learning. During this process, a rotation control command can be sent to the drone's gimbal to rotate the gimbal by a certain angle and keep shooting, so that the target may be captured again not far from the position where it disappeared. The gimbal rotation angle can be determined from the target motion state or motion trajectory determined by the tracking module. Of course, a flight control command can also be sent to the flight control module to adjust the drone's flight maneuver and keep shooting, which helps capture the target again. Which flight maneuver the transmitted flight control command corresponds to can likewise be determined from the target motion state or motion trajectory determined by the tracking module.
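For illustration only, a yaw command could be derived from the tracker's last motion estimate roughly as below; the send_rotation_command callback, the linear pixels-to-degrees mapping and the look-ahead factor are all assumptions of this sketch, not details from the patent:

```python
import math

def reacquire_by_gimbal(velocity_px, fov_deg, frame_width,
                        send_rotation_command, lookahead=10):
    """Rotate the gimbal toward where the target was heading when it left the
    shooting field of view, using the tracker's last motion estimate."""
    vx, _vy = velocity_px                  # pixels per frame from the tracker
    deg_per_px = fov_deg / frame_width     # crude linear mapping, an assumption
    # Rotate a little past the exit point in the direction of motion, capped
    # at half the field of view per command.
    yaw = math.copysign(min(abs(vx) * deg_per_px * lookahead, fov_deg / 2), vx)
    send_rotation_command(yaw)
```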
When the target reappears in the drone's shooting field of view, the TLD tracking module continues to track the target and uses the currently captured video stream to retrain on the target's current pose, thereby improving the accuracy of subsequent tracking.
This embodiment uses the TLD algorithm for video tracking, which combines tracking, detection and recognition, and can achieve real-time target tracking on a control platform that can be mounted on a drone, such as the TEGRA platform. Thanks to the excellent parallel computing power of the TEGRA GPU, training and tracking of the algorithm become markedly faster. For example, on the TEGRA X1 platform, the multithreading advantages of CUDA (Compute Unified Device Architecture) can be fully exploited by decomposing each frame into multiple blocks and performing detection and learning on each block separately, making full use of the platform to get the most out of this algorithm.
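The block decomposition can be illustrated host-side with a thread pool, as in the sketch below; on TEGRA the per-block work would be dispatched to CUDA kernels, which this Python sketch does not attempt to show, and detect_tile is an assumed callback:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_in_tiles(frame, detect_tile, rows=4, cols=4):
    """Split a frame into rows x cols blocks and run detection on every block
    in parallel; detect_tile(block, x_off, y_off) is an assumed callback that
    returns candidate boxes already mapped to full-frame coordinates."""
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    jobs = []
    with ThreadPoolExecutor() as pool:
        for r in range(rows):
            for c in range(cols):
                block = frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
                jobs.append(pool.submit(detect_tile, block, c * tw, r * th))
    boxes = []
    for job in jobs:
        boxes.extend(job.result())
    return boxes
```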
Based on the target that the user selects for the drone to track, this embodiment uses the TLD algorithm to track the target in the video and can keep tracking the target when it is occluded. In addition, using machine-learning principles and the parallel-computing advantages of a mobile GPU improves computational efficiency and accuracy.
Figure 5 shows a schematic diagram of a target tracking apparatus according to an embodiment of the present invention. As shown in Figure 5, the target tracking apparatus may mainly include:
a tracking-learning-detection unit 41 configured to track the tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
an adjustment control unit 42, connected to the tracking-learning-detection unit 41, configured to send, when the tracked target disappears from the shooting field of view, a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
In a possible implementation, the tracking-learning-detection unit 41 is further configured to continue tracking the tracked target using the tracking-learning-detection algorithm when the tracked target reappears in the shooting field of view.
In a possible implementation, the adjustment control unit 42 is further configured to judge that the current tracking has failed when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval.
In a possible implementation, the adjustment control unit 42 includes:
a rotation control module configured to send a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; and/or
a flight control module configured to send a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
In a possible implementation, the tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
In a possible implementation, referring to Figure 3 in the previous embodiment, the tracking-learning-detection unit 41 includes:
a detection module 22 configured to detect, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
a tracking module 21, connected to the detection module 22, configured to track, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and to determine, according to the motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
a learning module 23, connected to the detection module 22 and the tracking module 21 respectively, configured to determine the latest training samples according to the results of the detection module 22 and the tracking module 21 using the PN learning algorithm, and to update the target model with the latest training samples.
In addition, the learning module 23 can update the training samples according to the results of the detection module 22 and the tracking module 21, and thereby update the target model used by the detection module. If the tracked target is judged not to be within the shooting field of view, a control command can be sent to the drone so that it keeps shooting near the position where the target disappeared, allowing tracking to resume when the target reappears.
For the principle and concrete examples of the TLD algorithm, refer to the relevant description in the previous embodiment.
Figure 6 shows a structural block diagram of a target tracking device according to an embodiment of the present invention. The target tracking device 1100 may be a host server with computing capability, a personal computer (PC), or a portable computer or terminal. The specific embodiments of the present invention do not limit the concrete implementation of the computing node.
The target tracking device 1100 includes a processor 1110, a communications interface 1120, a memory 1130 and a bus 1140. The processor 1110, the communications interface 1120 and the memory 1130 communicate with one another through the bus 1140.
The communications interface 1120 is used to communicate with network devices, which include, for example, a virtual machine management center and shared storage.
The processor 1110 is used to execute a program. The processor 1110 may be a central processing unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
The memory 1130 is used to store files. The memory 1130 may include high-speed RAM and may also include non-volatile memory, such as at least one disk memory. The memory 1130 may also be a memory array. The memory 1130 may also be partitioned into blocks, and the blocks may be combined into virtual volumes according to certain rules.
In a possible implementation, the above program may be program code comprising computer operating instructions. The program may specifically be used to implement the operations of the steps in the method embodiment.
Those of ordinary skill in the art will appreciate that the exemplary units and algorithm steps in the embodiments described herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or in software depends on the specific application and design constraints of the technical solution. Professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
If the functions are implemented in the form of computer software and sold or used as a stand-alone product, then to some extent all or part of the technical solution of the present invention (for example, the part that contributes over the prior art) can be considered to be embodied in the form of a computer software product. The computer software product is typically stored in a computer-readable non-volatile storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Industrial applicability
By deploying the tracking-learning-detection algorithm on the control platform of a drone, real-time, effective tracking of the shooting target by the drone can be achieved. When the target temporarily disappears, the drone can keep shooting, and if the target reappears, tracking can continue.

Claims (12)

  1. A target tracking method, comprising:
    tracking a tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle;
    when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
  2. The method according to claim 1, further comprising:
    when the tracked target reappears in the shooting field of view, continuing to track the tracked target using the tracking-learning-detection algorithm.
  3. The method according to claim 1, further comprising:
    when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval, judging that the current tracking has failed.
  4. The method according to any one of claims 1 to 3, wherein, when the tracked target disappears from the shooting field of view, sending a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle comprises:
    sending a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; or
    sending a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
  5. The method according to any one of claims 1 to 3, wherein the tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
  6. The method according to any one of claims 1 to 3, wherein tracking the photographed tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle comprises:
    a detection module detecting, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
    a tracking module tracking, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and determining, according to the motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
    a learning module using a PN learning algorithm to determine the latest training samples according to the results of the detection module and the tracking module, and updating the target model with the latest training samples.
  7. A target tracking apparatus, comprising:
    a tracking-learning-detection unit configured to track a tracked target using a tracking-learning-detection algorithm to determine whether the tracked target is within the shooting field of view of an unmanned aerial vehicle;
    an adjustment control unit configured to send, when the tracked target disappears from the shooting field of view, a state adjustment control command to the unmanned aerial vehicle to adjust the tracking shooting state of the unmanned aerial vehicle.
  8. The apparatus according to claim 7, wherein
    the tracking-learning-detection unit is further configured to continue tracking the tracked target using the tracking-learning-detection algorithm when the tracked target reappears in the shooting field of view.
  9. The apparatus according to claim 7, wherein
    the adjustment control unit is further configured to judge that the current tracking has failed when the time interval during which the tracked target is absent from the shooting field of view exceeds a set time interval.
  10. The apparatus according to any one of claims 7 to 9, wherein the adjustment control unit comprises:
    a rotation control module configured to send a rotation control command to the gimbal control module of the unmanned aerial vehicle to adjust the rotation angle of the gimbal of the unmanned aerial vehicle; and/or
    a flight control module configured to send a flight control command to the flight control module of the unmanned aerial vehicle to adjust the flight maneuver of the unmanned aerial vehicle.
  11. The apparatus according to any one of claims 7 to 9, wherein the tracking-learning-detection algorithm is executed on a mobile graphics processing unit.
  12. The apparatus according to any one of claims 7 to 9, wherein the tracking-learning-detection unit comprises:
    a detection module configured to detect, in the current frame image and according to an already trained target model, a plurality of image regions that match the features of the tracked target;
    a tracking module, connected to the detection module, configured to track, in the captured video stream of the tracked target, the motion state of the tracked target between consecutive frame images, and to determine, according to the motion state and among the plurality of image regions determined by the detection module, the position of the tracked target in the current frame image, so as to determine whether the tracked target is within the shooting field of view of the unmanned aerial vehicle;
    a learning module, connected to the detection module and the tracking module respectively, configured to determine the latest training samples according to the results of the detection module and the tracking module using a PN learning algorithm, and to update the target model with the latest training samples.
PCT/CN2016/086303 2016-04-29 2016-06-17 Target tracking method and apparatus WO2017185503A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610282383.5 2016-04-29
CN201610282383.5A CN105957109A (zh) 2016-04-29 2016-04-29 Target tracking method and apparatus

Publications (1)

Publication Number Publication Date
WO2017185503A1 (zh)

Family

ID=56913162

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/086303 WO2017185503A1 (zh) 2016-04-29 2016-06-17 Target tracking method and apparatus

Country Status (2)

Country Link
CN (1) CN105957109A (zh)
WO (1) WO2017185503A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454108B (zh) * 2016-11-04 2019-05-03 北京百度网讯科技有限公司 Artificial-intelligence-based tracking and shooting method, apparatus and electronic device
CN106774398A (zh) * 2016-12-20 2017-05-31 北京小米移动软件有限公司 Aerial photography method and apparatus, and unmanned aerial vehicle
CN108537726B (zh) * 2017-03-03 2022-01-04 杭州海康威视数字技术股份有限公司 Tracking and shooting method, device and unmanned aerial vehicle
US10720672B2 (en) 2017-04-24 2020-07-21 Autel Robotics Co., Ltd Series-multiple battery pack management system
WO2019140609A1 (zh) * 2018-01-18 2019-07-25 深圳市道通智能航空技术有限公司 Target detection method and unmanned aerial vehicle
CN108577980A (zh) * 2018-02-08 2018-09-28 南方医科大学南方医院 Method, system and apparatus for automatically tracking an ultrasonic scalpel head
CN110310300B (zh) * 2018-03-20 2023-09-08 腾讯科技(深圳)有限公司 Target-following shooting method and apparatus in a virtual environment, and electronic device
CN109190676B (zh) * 2018-08-06 2022-11-08 百度在线网络技术(北京)有限公司 Model training method, apparatus, device and storage medium for image recognition
CN109785661A (zh) * 2019-02-01 2019-05-21 广东工业大学 Parking guidance method based on machine learning
CN117237406A (zh) * 2022-06-08 2023-12-15 珠海一微半导体股份有限公司 Robot visual tracking method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060077255A1 (en) * 2004-08-10 2006-04-13 Hui Cheng Method and system for performing adaptive image acquisition
CN1953547A (zh) * 2006-09-21 2007-04-25 上海大学 Low-altitude tracking system and method for an unmanned aerial vehicle tracking ground moving targets
CN102355574A (zh) * 2011-10-17 2012-02-15 上海大学 Image stabilization method for an airborne-gimbal moving-target autonomous tracking system
CN103838244A (zh) * 2014-03-20 2014-06-04 湖南大学 Portable target tracking method and system based on a quadrotor aircraft
CN105279773A (zh) * 2015-10-27 2016-01-27 杭州电子科技大学 Improved video tracking optimization method based on the TLD framework
CN105487552A (zh) * 2016-01-07 2016-04-13 深圳一电航空技术有限公司 Method and apparatus for tracking and shooting with an unmanned aerial vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103149939B (zh) * 2013-02-26 2015-10-21 北京航空航天大学 Vision-based dynamic target tracking and positioning method for an unmanned aerial vehicle
CN104408725B (zh) * 2014-11-28 2017-07-04 中国航天时代电子公司 Target recapture system and method based on an optimized TLD algorithm
CN105424006B (zh) * 2015-11-02 2017-11-24 国网山东省电力公司电力科学研究院 Unmanned aerial vehicle hover accuracy measurement method based on binocular vision

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909024A (zh) * 2017-11-13 2018-04-13 哈尔滨理工大学 Vehicle tracking system and method based on image recognition and infrared obstacle avoidance, and vehicle
CN107909024B (zh) * 2017-11-13 2021-11-05 哈尔滨理工大学 Vehicle tracking system and method based on image recognition and infrared obstacle avoidance, and vehicle
CN107967692A (zh) * 2017-11-28 2018-04-27 西安电子科技大学 Target tracking optimization method based on tracking-learning-detection
CN108447079A (zh) * 2018-03-12 2018-08-24 中国计量大学 Target tracking method based on the TLD algorithm framework
CN111127509A (zh) * 2018-10-31 2020-05-08 杭州海康威视数字技术股份有限公司 Target tracking method, apparatus and computer-readable storage medium
CN111127509B (zh) * 2018-10-31 2023-09-01 杭州海康威视数字技术股份有限公司 Target tracking method, apparatus and computer-readable storage medium
CN111986230A (zh) * 2019-05-23 2020-11-24 北京地平线机器人技术研发有限公司 Pose tracking method and apparatus for a target object in video
CN110362095B (zh) * 2019-08-09 2022-04-01 大连海事大学 Design method for a finite-time-convergent cooperative controller for unmanned ships
CN110362095A (zh) * 2019-08-09 2019-10-22 大连海事大学 Design method for a finite-time-convergent cooperative controller for unmanned ships
CN113449566A (zh) * 2020-03-27 2021-09-28 北京机械设备研究所 Human-in-the-loop intelligent image tracking method and system for "low, slow and small" targets
CN111784737A (zh) * 2020-06-10 2020-10-16 中国人民解放军军事科学院国防科技创新研究院 Automatic target tracking method and system based on an unmanned-aerial-vehicle platform
CN111932588A (zh) * 2020-08-07 2020-11-13 浙江大学 Tracking method for an airborne multi-target UAV tracking system based on deep learning
CN111932588B (zh) * 2020-08-07 2024-01-30 浙江大学 Tracking method for an airborne multi-target UAV tracking system based on deep learning
CN112102365A (zh) * 2020-09-23 2020-12-18 烟台艾睿光电科技有限公司 Target tracking method based on an unmanned-aerial-vehicle pod, and related apparatus
CN112233141B (zh) * 2020-09-28 2022-10-14 国网浙江省电力有限公司杭州供电公司 Moving-target tracking method and system based on UAV vision in power scenarios
CN112233141A (zh) * 2020-09-28 2021-01-15 国网浙江省电力有限公司杭州供电公司 Moving-target tracking method and system based on UAV vision in power scenarios
CN114556904A (zh) * 2020-12-30 2022-05-27 深圳市大疆创新科技有限公司 Control method and control device for a gimbal system, gimbal system and storage medium
CN113096156A (zh) * 2021-04-23 2021-07-09 中国科学技术大学 End-to-end real-time three-dimensional multi-target tracking method and apparatus for autonomous driving
CN115865939A (zh) * 2022-11-08 2023-03-28 燕山大学 Target detection and tracking system and method based on edge-cloud collaborative decision-making

Also Published As

Publication number Publication date
CN105957109A (zh) 2016-09-21

Similar Documents

Publication Publication Date Title
WO2017185503A1 (zh) Target tracking method and apparatus
Wang et al. Development of UAV-based target tracking and recognition systems
US11205274B2 (en) High-performance visual object tracking for embedded vision systems
US10818028B2 (en) Detecting objects in crowds using geometric context
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
US10510157B2 (en) Method and apparatus for real-time face-tracking and face-pose-selection on embedded vision systems
JP6942488B2 (ja) 画像処理装置、画像処理システム、画像処理方法、及びプログラム
KR102126513B1 (ko) 카메라의 포즈를 판단하기 위한 장치 및 방법
CN106780608B (zh) 位姿信息估计方法、装置和可移动设备
US9760791B2 (en) Method and system for object tracking
Huang et al. Bridging the gap between detection and tracking: A unified approach
Chen et al. A deep learning approach to drone monitoring
Kart et al. How to make an rgbd tracker?
Bagautdinov et al. Probability occupancy maps for occluded depth images
JP2018522348A (ja) センサーの3次元姿勢を推定する方法及びシステム
Leykin et al. Thermal-visible video fusion for moving target tracking and pedestrian classification
JP7272024B2 (ja) 物体追跡装置、監視システムおよび物体追跡方法
US11508157B2 (en) Device and method of objective identification and driving assistance device
US11810311B2 (en) Two-stage depth estimation machine learning algorithm and spherical warping layer for equi-rectangular projection stereo matching
CN112562159B (zh) 一种门禁控制方法、装置、计算机设备和存储介质
CN112184757A (zh) 运动轨迹的确定方法及装置、存储介质、电子装置
Haggui et al. Human detection in moving fisheye camera using an improved YOLOv3 framework
Wang et al. Object as query: Lifting any 2d object detector to 3d detection
Zhang et al. A novel efficient method for abnormal face detection in ATM
Le et al. Human detection and tracking for autonomous human-following quadcopter

Legal Events

Date Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16900013; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: PCT application non-entry in European phase (Ref document number: 16900013; Country of ref document: EP; Kind code of ref document: A1)
32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/03/2019))