CN105844664A - Monitoring video vehicle detection tracking method based on improved TLD - Google Patents
- Publication number
- CN105844664A CN105844664A CN201610159169.0A CN201610159169A CN105844664A CN 105844664 A CN105844664 A CN 105844664A CN 201610159169 A CN201610159169 A CN 201610159169A CN 105844664 A CN105844664 A CN 105844664A
- Authority
- CN
- China
- Prior art keywords
- cam shift
- random forest
- target
- result
- tracker
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The invention relates to the field of intelligent transportation video processing, and in particular to a surveillance-video vehicle detection and tracking method, based on an improved TLD algorithm, that offers high accuracy and good robustness.
Background Art
Frame-by-frame tracking of a target with the Lucas-Kanade (L-K) optical flow method assumes that the target neither disappears nor becomes completely occluded; such methods are therefore known as short-term trackers. Trackers of this kind usually lack any direct handling of tracking failures and struggle to perform well over long tracking periods. Current research on short-term tracking focuses mainly on improving tracking accuracy and speed and on extending the tracking duration, but when accuracy is poor it cannot effectively prevent the accumulation of tracking errors and the resulting drift. In recent years a new long-term single-target tracking algorithm, TLD (Tracking-Learning-Detection), has emerged. It combines a tracking algorithm with a detection algorithm, thereby coping with target deformation and partial occlusion during tracking. It also introduces an online learning mechanism: the results produced by the tracker and the detector are fed to a learning module, and the learned model is fed back to the tracking and detection modules, making target detection and tracking more stable and effective. However, because the tracking module relies on the L-K optical flow method and its assumption of motion consistency and coherence, the algorithm predicts inter-frame motion well only for targets whose motion between consecutive frames is limited and which remain visible; for targets that move quickly over large distances, its prediction and tracking performance remains unsatisfactory.
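For context, the short-term point tracking described above can be illustrated with OpenCV's pyramidal Lucas-Kanade routine. The sketch below is illustrative only; the window size, pyramid depth, and termination criteria are assumed values and are not specified by the patent.

```python
import cv2

def lk_track_step(prev_gray, curr_gray, prev_pts):
    """Propagate float32 points of shape (N, 1, 2) from one frame to the next."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    # Keep only the points the tracker reports as found. Nothing corrects the
    # surviving points, which is why errors accumulate and the track drifts
    # over long sequences, as discussed above.
    found = status.reshape(-1) == 1
    return prev_pts[found], curr_pts[found]
```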
Summary of the Invention
The present invention addresses the above technical problems in the prior art by providing a surveillance-video vehicle detection and tracking method, based on an improved TLD algorithm, with high accuracy and good robustness.
The technical solution of the present invention is a surveillance-video vehicle detection and tracking method based on an improved TLD algorithm, characterized by the following steps (a minimal code sketch of the overall loop follows Step 6):
Step 1. Input the first video frame and manually mark the target to be tracked; initialize the frame index to 1;
Step 2. Initialize the random forest classifier and the CamShift tracker;
Step 3. Advance the frame index and load the corresponding video frame; detect the target with the random forest classifier and track it with the CamShift tracker, obtaining the adjusted scale of the target bounding box;
Step 4. Fuse the detection result of the random forest classifier with the tracking result of the CamShift tracker;
Step 5. Update the random forest classifier with the P-N learning strategy and obtain the target position;
Step 6. If the last frame of the video has been reached, the algorithm ends; otherwise, return to Step 3.
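The Steps 1-6 loop can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the `detector` object (standing in for the random forest classifier with P-N learning) and the `fuse_fn` callable are assumed interfaces, and the 16-bin hue histogram and CamShift termination criteria are assumed parameters; only the OpenCV calls (`cv2.calcHist`, `cv2.calcBackProject`, `cv2.CamShift`) are real API.

```python
import cv2

def run_improved_tld(video_path, init_bbox, detector, fuse_fn):
    """Steps 1-6: detect with a random-forest-style detector, track with
    CamShift, fuse the two results, and update the detector each frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()                         # Step 1: first frame; target given by init_bbox
    x, y, w, h = init_bbox
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [16], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)   # Step 2: CamShift color model
    track_win = (x, y, w, h)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:                                    # Steps 3-6: per-frame loop
        ok, frame = cap.read()
        if not ok:
            break                                  # Step 6: last frame reached
        det_boxes = detector.detect(frame)         # Step 3: random-forest detection (assumed interface)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _rot_rect, track_win = cv2.CamShift(prob, track_win, crit)  # Step 3: CamShift with scale adaptation
        fused, reinit = fuse_fn(det_boxes, track_win)  # Step 4: fusion (see the Step 4 sketch below)
        if fused is not None:
            detector.update(frame, fused)          # Step 5: P-N style detector update (assumed interface)
            if reinit:                             # Step 4.3: rebuild the CamShift color model
                x, y, w, h = [int(v) for v in fused]
                roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [16], [0, 180])
                cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
                track_win = (x, y, w, h)
    cap.release()
```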
Step 4 proceeds as follows (a sketch of this fusion rule is given after the list):
Step 4.1 If both the random forest classifier and the CamShift tracker output bounding boxes, but the random forest classifier reports several similar candidate positions while the CamShift tracker finds only one target position, the detection results are clustered and segmented by their spatial overlap;
Step 4.2 If the CamShift tracker outputs no bounding box but the random forest classifier does, the multiple detection results are segmented by clustering on spatial overlap, and the first cluster is taken as the fusion result;
Step 4.3 If a cluster decision with a high correlation value appears but lies far from the CamShift tracker's result, that decision is taken as the fusion result; the CamShift tracker is then re-initialized and the sample set previously considered correct is discarded;
Step 4.4 If the CamShift tracker outputs a bounding box but the random forest classifier does not, the CamShift tracker's output is taken as the fusion result;
Step 4.5 If neither the random forest classifier nor the CamShift tracker outputs a bounding box, the target is considered to have disappeared.
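A hedged sketch of the Step 4 branching is given below. The intersection-over-union overlap measure, the 0.3 "far apart" threshold, and the choice of the first cluster as the representative detection are illustrative assumptions; only the branching structure follows Steps 4.1-4.5. The second return value flags the Step 4.3 case, in which the caller re-initializes the CamShift tracker and discards its sample set.

```python
def iou(a, b):
    """Spatial overlap (intersection over union) of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def fuse(det_boxes, track_box, cluster_fn, confident_fn, far_thresh=0.3):
    """Return (fused_box_or_None, reinit_tracker).

    det_boxes    -- candidate boxes from the random forest classifier
    track_box    -- the CamShift box, or None if the tracker lost the target
    cluster_fn   -- groups detections by spatial overlap (Steps 4.1 / 4.2)
    confident_fn -- True if a cluster's correlation value is high (Step 4.3)
    """
    if det_boxes and track_box is not None:
        clusters = cluster_fn(det_boxes)            # Step 4.1: cluster by overlap
        best = clusters[0]
        if confident_fn(best) and iou(best, track_box) < far_thresh:
            return best, True                       # Step 4.3: trust the detector, re-init CamShift
        return best, False
    if det_boxes:                                   # Step 4.2: detector only
        return cluster_fn(det_boxes)[0], False
    if track_box is not None:                       # Step 4.4: tracker only
        return track_box, False
    return None, False                              # Step 4.5: target considered disappeared
```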
The present invention replaces the L-K optical flow point tracker with a block-wise CamShift tracker based on vehicle color features. The color histogram of the vehicle region obtained by CamShift describes the tracked target, and a color-histogram similarity measure over the captured region estimates the target's motion between consecutive frames. The random forest detector then supplies a coarse position of the vehicle target, while P-N learning observes the detector and localizes the tracker in real time, yielding effective vehicle detection and tracking. Compared with the prior art, the invention improves the accuracy and robustness of tracking vehicles that undergo large, rapid motion changes during long-term tracking.
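The color-histogram description and the inter-frame similarity measure mentioned above might look like the following; the 16-bin hue histogram and the correlation metric (`cv2.HISTCMP_CORREL`) are assumptions chosen for illustration, not values taken from the patent.

```python
import cv2

def hue_histogram(frame_bgr, box, bins=16):
    """Hue histogram of the vehicle region, used as the color description."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
    return hist

def region_similarity(prev_frame, prev_box, curr_frame, curr_box):
    """Similarity of the tracked region between consecutive frames; a low
    value suggests large inter-frame motion or appearance change."""
    return cv2.compareHist(hue_histogram(prev_frame, prev_box),
                           hue_histogram(curr_frame, curr_box),
                           cv2.HISTCMP_CORREL)
```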
Brief Description of the Drawings
Fig. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 compares detection and tracking results in an oblique-angle urban surveillance scene.
Fig. 3 compares detection and tracking results in a high-altitude urban surveillance video scene.
Fig. 4 compares detection and tracking results in a rainy urban road surveillance video scene.
Fig. 5 compares detection and tracking results in an oblique-angle highway surveillance video scene.
Detailed Description
As shown in Fig. 1, the surveillance-video vehicle detection and tracking algorithm based on the improved TLD proceeds as follows:
Step 1. Input the first video frame and manually mark the target to be tracked; initialize the frame index to 1;
Step 2. Initialize the random forest classifier and the CamShift tracker;
Step 3. Advance the frame index and load the corresponding video frame; detect the target with the random forest classifier and track it with the CamShift tracker, obtaining the adjusted scale of the target bounding box;
Step 4. Fuse the detection result of the random forest classifier with the tracking result of the CamShift tracker;
Step 5. Update the random forest classifier with the P-N learning strategy and obtain the target position (a sketch of this update follows the list);
Step 6. If the last frame of the video has been reached, the algorithm ends; otherwise, return to Step 3.
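The P-N learning update of Step 5 can be sketched as follows, following the general P-expert/N-expert scheme of TLD; the overlap thresholds, the `overlap_fn` callable, and the `detector.retrain()` interface are assumptions rather than the patent's literal procedure.

```python
def pn_update(detector, frame, fused_box, candidate_boxes, overlap_fn,
              pos_thresh=0.6, neg_thresh=0.2):
    """Relabel candidate patches around the fused result and retrain the
    random forest: high-overlap patches become positives (P-expert),
    low-overlap detector responses become negatives (N-expert)."""
    def crop(box):
        x, y, w, h = [int(v) for v in box]
        return frame[y:y + h, x:x + w]

    positives, negatives = [], []
    for box in candidate_boxes:
        o = overlap_fn(box, fused_box)
        if o > pos_thresh:
            positives.append(crop(box))     # P-expert: accept as the target's appearance
        elif o < neg_thresh:
            negatives.append(crop(box))     # N-expert: reject as background / false alarm
    if positives or negatives:
        detector.retrain(positives, negatives)
```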
Step 4 comprises the following steps (a sketch of the spatial-overlap clustering used in Steps 4.1 and 4.2 follows the list):
Step 4.1 If both the random forest classifier and the CamShift tracker output bounding boxes, but the random forest classifier reports several similar candidate positions while the CamShift tracker finds only one target position, the detection results are clustered and segmented by their spatial overlap;
Step 4.2 If the CamShift tracker outputs no bounding box but the random forest classifier does, the multiple detection results are segmented by clustering on spatial overlap, and the first cluster is taken as the fusion result;
Step 4.3 If a cluster decision with a high correlation value appears but lies far from the CamShift tracker's result, that decision is taken as the fusion result; the CamShift tracker is then re-initialized and the sample set previously considered correct is discarded;
Step 4.4 If the CamShift tracker outputs a bounding box but the random forest classifier does not, the CamShift tracker's output is taken as the fusion result;
Step 4.5 If neither the random forest classifier nor the CamShift tracker outputs a bounding box, the target is considered to have disappeared.
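The spatial-overlap clustering named in Steps 4.1 and 4.2 might be implemented along these lines; the 0.5 threshold, the greedy grouping against each cluster's first member, and the mean-box cluster summary are illustrative assumptions.

```python
def box_overlap(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def cluster_by_overlap(boxes, overlap_thresh=0.5):
    """Group detections whose overlap with a cluster's first member exceeds
    the threshold, then summarize each cluster by the mean of its boxes."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if box_overlap(box, cluster[0]) > overlap_thresh:
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return [tuple(sum(v) / len(c) for v in zip(*c)) for c in clusters]
```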
The detection and tracking results of this embodiment in the oblique-angle urban surveillance scene are compared in Fig. 2.
The detection and tracking results of this embodiment in the high-altitude urban surveillance video scene are compared in Fig. 3.
The detection and tracking results of this embodiment in the rainy urban road surveillance video scene are compared in Fig. 4.
The detection and tracking results of this embodiment in the oblique-angle highway surveillance video scene are compared in Fig. 5.
The running times of the program in this embodiment for each of the above scenes are compared in Table 1:
Table 1
The tracking quality of this embodiment in each of the above scenes is compared in Table 2:
Table 2
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610159169.0A CN105844664B (en) | 2016-03-21 | 2016-03-21 | Based on the monitor video vehicle detecting and tracking method for improving TLD |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610159169.0A CN105844664B (en) | 2016-03-21 | 2016-03-21 | Based on the monitor video vehicle detecting and tracking method for improving TLD |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105844664A true CN105844664A (en) | 2016-08-10 |
CN105844664B CN105844664B (en) | 2019-01-11 |
Family
ID=56588349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610159169.0A Expired - Fee Related CN105844664B (en) | 2016-03-21 | 2016-03-21 | Based on the monitor video vehicle detecting and tracking method for improving TLD |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105844664B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106296708A (en) * | 2016-08-18 | 2017-01-04 | 宁波傲视智绘光电科技有限公司 | Car tracing method and apparatus |
CN107403439A (en) * | 2017-06-06 | 2017-11-28 | 沈阳工业大学 | Predicting tracing method based on Cam shift |
CN107909024A (en) * | 2017-11-13 | 2018-04-13 | 哈尔滨理工大学 | Vehicle tracking system, method and vehicle based on image recognition and infrared obstacle avoidance |
CN108876809A (en) * | 2018-06-17 | 2018-11-23 | 天津理工大学 | A kind of TLD image tracking algorithm based on Kalman filtering |
CN112766038A (en) * | 2020-12-22 | 2021-05-07 | 深圳金证引擎科技有限公司 | Vehicle tracking method based on image recognition |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102881024A (en) * | 2012-08-24 | 2013-01-16 | 南京航空航天大学 | Tracking-learning-detection (TLD)-based video object tracking method |
WO2014189685A1 (en) * | 2013-05-23 | 2014-11-27 | Fastvdo Llc | Motion-assisted visual language for human computer interfaces |
CN104331901A (en) * | 2014-11-26 | 2015-02-04 | 北京邮电大学 | TLD-based multi-view target tracking device and method |
CN104574439A (en) * | 2014-12-25 | 2015-04-29 | 南京邮电大学 | Kalman filtering and TLD (tracking-learning-detection) algorithm integrated target tracking method |
- 2016-03-21: CN application CN201610159169.0A granted as patent CN105844664B (status: not active, Expired - Fee Related)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102881024A (en) * | 2012-08-24 | 2013-01-16 | 南京航空航天大学 | Tracking-learning-detection (TLD)-based video object tracking method |
WO2014189685A1 (en) * | 2013-05-23 | 2014-11-27 | Fastvdo Llc | Motion-assisted visual language for human computer interfaces |
CN104331901A (en) * | 2014-11-26 | 2015-02-04 | 北京邮电大学 | TLD-based multi-view target tracking device and method |
CN104574439A (en) * | 2014-12-25 | 2015-04-29 | 南京邮电大学 | Kalman filtering and TLD (tracking-learning-detection) algorithm integrated target tracking method |
Non-Patent Citations (1)
Title |
---|
Zdenek Kalal et al., "Tracking-Learning-Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106296708A (en) * | 2016-08-18 | 2017-01-04 | 宁波傲视智绘光电科技有限公司 | Car tracing method and apparatus |
CN106296708B (en) * | 2016-08-18 | 2019-02-15 | 宁波傲视智绘光电科技有限公司 | Car tracing method and apparatus |
CN107403439A (en) * | 2017-06-06 | 2017-11-28 | 沈阳工业大学 | Predicting tracing method based on Cam shift |
CN107403439B (en) * | 2017-06-06 | 2020-07-24 | 沈阳工业大学 | Cam-shift-based prediction tracking method |
CN107909024A (en) * | 2017-11-13 | 2018-04-13 | 哈尔滨理工大学 | Vehicle tracking system, method and vehicle based on image recognition and infrared obstacle avoidance |
CN107909024B (en) * | 2017-11-13 | 2021-11-05 | 哈尔滨理工大学 | Vehicle tracking system, method and vehicle based on image recognition and infrared obstacle avoidance |
CN108876809A (en) * | 2018-06-17 | 2018-11-23 | 天津理工大学 | A kind of TLD image tracking algorithm based on Kalman filtering |
CN108876809B (en) * | 2018-06-17 | 2021-07-20 | 天津理工大学 | A TLD Image Tracking Algorithm Based on Kalman Filtering |
CN112766038A (en) * | 2020-12-22 | 2021-05-07 | 深圳金证引擎科技有限公司 | Vehicle tracking method based on image recognition |
CN112766038B (en) * | 2020-12-22 | 2021-12-17 | 深圳金证引擎科技有限公司 | Vehicle tracking method based on image recognition |
Also Published As
Publication number | Publication date |
---|---|
CN105844664B (en) | 2019-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tripathi et al. | Removal of rain from videos: a review | |
Sina et al. | Vehicle counting and speed measurement using headlight detection | |
WO2020151172A1 (en) | Moving object detection method and apparatus, computer device, and storage medium | |
CN112069969B (en) | Expressway monitoring video cross-mirror vehicle tracking method and system | |
CN102722698B (en) | Method and system for detecting and tracking multi-pose face | |
CN108038837B (en) | Method and system for detecting target in video | |
CN105844664A (en) | Monitoring video vehicle detection tracking method based on improved TLD | |
CN107452015B (en) | A Target Tracking System with Redetection Mechanism | |
CN103530893B (en) | Based on the foreground detection method of background subtraction and movable information under camera shake scene | |
CN103646257B (en) | A kind of pedestrian detection and method of counting based on video monitoring image | |
CN104992453B (en) | Target in complex environment tracking based on extreme learning machine | |
CN112036254A (en) | Moving vehicle foreground detection method based on video image | |
CN110555868A (en) | method for detecting small moving target under complex ground background | |
CN104978567B (en) | Vehicle checking method based on scene classification | |
KR101716928B1 (en) | Image processing method for vehicle camera and image processing apparatus usnig the same | |
CN108009494A (en) | A kind of intersection wireless vehicle tracking based on unmanned plane | |
CN106778540B (en) | Parking detection is accurately based on the parking event detecting method of background double layer | |
CN109145708A (en) | A kind of people flow rate statistical method based on the fusion of RGB and D information | |
CN102497505A (en) | Multi-ball machine linkage target tracking method and system based on improved Meanshift algorithm | |
EP2813973B1 (en) | Method and system for processing video image | |
CN108764338B (en) | A pedestrian tracking method applied to video analysis | |
CN108920997A (en) | Judge that non-rigid targets whether there is the tracking blocked based on profile | |
CN107123130A (en) | Kernel correlation filtering target tracking method based on superpixel and hybrid hash | |
Arróspide et al. | On-board robust vehicle detection and tracking using adaptive quality evaluation | |
Denman et al. | Multi-spectral fusion for surveillance systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190111 Termination date: 20210321 |