CN109118523A - An image target tracking method based on YOLO - Google Patents

An image target tracking method based on YOLO

Info

Publication number
CN109118523A
CN109118523A (application CN201811097244.0A)
Authority
CN
China
Prior art keywords
target
frame
detection
present frame
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811097244.0A
Other languages
Chinese (zh)
Other versions
CN109118523B (en)
Inventor
王宏
张巍
李建清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201811097244.0A priority Critical patent/CN109118523B/en
Publication of CN109118523A publication Critical patent/CN109118523A/en
Application granted granted Critical
Publication of CN109118523B publication Critical patent/CN109118523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 - Detecting or categorising vehicles

Abstract

The invention discloses an image target tracking method based on YOLO, comprising the following steps: S1, input a video; S2, detect the target using the detection network YOLO and initialize a Kalman filter; S3, detect the current frame image; if a target is detected, go to step S4, otherwise execute step S5; S4, compute the intersection over union of the detected position and the predicted position of the target in the current frame image; if it is greater than a preset threshold, take the detected position as the target's position in the current frame; S5, perform key-point matching between the target's position in the previous frame and its predicted position in the current frame; if the number of matched pairs is greater than a preset threshold, obtain the target's position in the current frame from the match; S6, check whether the video has ended; if so, end tracking, otherwise return to step S3. By matching key points between the target's position in the previous frame and the current-frame position predicted by Kalman filtering, the invention can judge whether the target is present at the predicted position, which effectively improves tracking accuracy.

Description

An image target tracking method based on YOLO
Technical field
The invention belongs to the field of target tracking, and in particular relates to an image target tracking method based on YOLO (You Only Look Once).
Background technique
Target tracking is widely used in fields such as military guidance, visual navigation, robotics, intelligent transportation and public safety. For example, vehicle tracking is essential in systems that capture traffic violations, and in intrusion detection the detection and tracking of large moving targets such as people, animals and vehicles is the key to the operation of the whole system.
The detection network YOLO is a deep-learning method from the field of computer vision, used mainly for detection and recognition in single images; compared with detection methods based on hand-crafted features, it achieves higher detection accuracy and faster detection speed. Tracking by detection is a common target-tracking approach: the position at which a detector finds the target in the image is compared with the position predicted by a motion model to obtain the tracked position of the target. Although YOLO performs well on single frames, in video or image-sequence detection it is easily affected by changes in illumination, shooting angle or target scale and by partial occlusion, which cause missed or intermittent detections; the tracker then cannot judge from the predicted position alone whether the tracked target is still present, and tracking fails.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and to provide a YOLO-based image target tracking method that matches key points between the target's position in the previous frame and the current-frame position predicted by Kalman filtering, so as to judge whether the target is present at the predicted position and thereby effectively improve tracking accuracy.
The object of the invention is achieved through the following technical solution: an image target tracking method based on YOLO, comprising the following steps:
S1, input a video;
S2, detect the target using the detection network YOLO and initialize a Kalman filter;
S3, detect the current frame image using the detection network YOLO; if a target is detected, go to step S4, otherwise execute step S5;
S4, compute the intersection over union of the detected position and the predicted position of the target in the current frame image; if it is greater than a preset threshold, take the detected position as the target's position in the current frame; otherwise return to step S2;
S5, perform key-point matching between the target's position in the previous frame and its predicted position in the current frame; if the number of matched key-point pairs is greater than a preset threshold, obtain the target's position in the current frame from the match; otherwise return to step S2;
S6, check whether the video has ended; if so, end tracking; otherwise return to step S3.
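The S1 to S6 control flow above can be sketched in Python. The detector, Kalman prediction and key-point matcher are stand-in callables, and boxes are (x, y, w, h) tuples; these interfaces are illustrative assumptions, since the patent fixes the thresholds but not a programming interface.

```python
def _iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def track_sequence(frames, detect, predict, match_keypoints,
                   iou_thresh=0.6, match_thresh=10):
    """Skeleton of steps S1-S6.

    detect(frame) -> box or None          : stands in for the YOLO detector (S3)
    predict() -> box                      : stands in for the Kalman prediction
    match_keypoints(prev, pred) -> (n, box): stands in for the SIFT fallback (S5)
    """
    positions, prev_box = [], None
    for frame in frames:
        det, pred = detect(frame), predict()
        if det is not None and _iou(det, pred) > iou_thresh:
            box = det                                    # S4: keep the detection
        elif prev_box is not None:
            n, matched = match_keypoints(prev_box, pred)
            box = matched if n > match_thresh else None  # S5, else back to S2
        else:
            box = None                                   # no track yet
        positions.append(box)
        if box is not None:
            prev_box = box                               # S6: move to next frame
    return positions
```

A detection that overlaps the prediction is kept directly; only when the detector misses does the (stubbed) key-point matcher decide whether the predicted region still contains the target.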
Further, step S2 is implemented as follows:
S21, detect the input video using the detection network YOLO;
S22, if a target is detected, go to S23; otherwise go back to S21;
S23, initialize the Kalman filter with the detected position of the target, the target's position in the image being represented by its bounding rectangle.
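A minimal Kalman filter of the kind initialized in S23 can be sketched as follows. The constant-velocity state layout [cx, cy, vx, vy] and the noise covariances are illustrative assumptions; the patent only states that the filter is initialized from the detected bounding rectangle.

```python
import numpy as np

class BoxKalman:
    """Constant-velocity Kalman filter for a bounding-box centre (a sketch)."""

    def __init__(self, cx, cy):
        self.x = np.array([cx, cy, 0.0, 0.0])   # state: position and velocity
        self.P = np.eye(4) * 10.0               # state covariance
        self.F = np.eye(4)                      # motion model: x += vx, y += vy
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                   # we observe position only
        self.Q = np.eye(4) * 1e-2               # process noise (assumed)
        self.R = np.eye(2) * 1.0                # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                       # predicted centre

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In the method, predict() supplies the predicted position used in S4 and S5, and update() is the correction applied once the target's current-frame position is known.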
Further, step S4 comprises the following sub-steps:
S41, set the intersection-over-union threshold to 0.6;
S42, compute the intersection over union of the detected position and the predicted position of the target in the current frame image as follows:

iou = S_inter / (S_detection + S_prediction - S_inter)

where iou is the intersection over union of the detected and predicted positions, S_inter is the area of the overlap of the two bounding rectangles, S_detection is the area of the bounding rectangle of the detected position, and S_prediction is the area of the bounding rectangle of the predicted position;
S43, if iou is greater than the preset threshold of 0.6, take the detected position as the target's position in the current frame; otherwise return to step S2.
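A minimal implementation of the S42 intersection-over-union computation, assuming axis-aligned (x, y, w, h) rectangles:

```python
def iou(box_a, box_b):
    """Overlap area of two (x, y, w, h) boxes divided by their union area."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

For example, two 10x10 boxes offset horizontally by 5 overlap in a 5x10 strip, giving iou = 50 / 150 = 1/3, which is below the 0.6 threshold of S41.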
Further, step S5 comprises the following sub-steps:
S51, set the matched-pair threshold to 10;
S52, run SIFT key-point detection separately on the target's detected position in the previous frame and on its predicted position in the current frame, and extract the corresponding SIFT key points, each SIFT key point corresponding to one SIFT feature vector;
S53, compute the Euclidean distances between each SIFT feature vector of the previous-frame detected position and the SIFT feature vectors of the current-frame predicted position; take the minimum distance min and the second-smallest distance secmin; if min < 0.6*secmin, the SIFT key point in the previous-frame detected position is considered to match the key point in the current-frame predicted position at distance min; traverse all SIFT key points in the previous-frame detected position to complete the matching and obtain the set of matched key-point pairs; if the number of matched pairs is greater than the threshold of 10, execute step S54; otherwise return to step S2;
S54, estimate the affine transformation matrix between the target's detected position in the previous frame and its predicted position in the current frame using the RANSAC algorithm: randomly select four pairs from the SIFT key-point pairs matched in S53 and, from the coordinates of these four pairs, determine the affine transformation matrix H by the following formula:

[x', y', 1]^T = H * [x, y, 1]^T,  where H = [h'0 h'1 Δx; h'2 h'3 Δy; 0 0 1]

in which h'0, h'1, h'2 and h'3 are scale-rotation factors, Δx and Δy are the offsets in the x and y directions of the target's current-frame predicted position relative to its previous-frame detected position, and [x, y, 1]^T and [x', y', 1]^T are the homogeneous coordinates of any matched pair of SIFT key points in the previous-frame detected position and the current-frame predicted position respectively;
S55, apply the affine transformation matrix H from S54 to the four vertices of the bounding rectangle of the target's position in the previous frame to obtain the four vertices of the target's bounding rectangle in the current frame, and from them the width and height of the rectangle;
S56, combine the width and height obtained in S55 with the centre coordinates of the bounding rectangle of the target's predicted position in the current frame to determine the target's position in the current frame;
S57, update the Kalman filter with the target's position in the current frame.
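The S53 nearest-neighbour matching with the min < 0.6*secmin ratio test can be sketched over precomputed descriptor arrays. The SIFT extraction of S52 is assumed to be done elsewhere (e.g. with an OpenCV SIFT detector), so this function implements only the matching rule:

```python
import numpy as np

def ratio_test_matches(desc_prev, desc_pred, ratio=0.6):
    """Match descriptors by nearest neighbour with the ratio test of S53.

    desc_prev: (n, d) descriptors from the previous-frame detected position.
    desc_pred: (m, d) descriptors from the current-frame predicted position.
    Returns a list of index pairs (i, j) of accepted matches.
    """
    matches = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_pred - d, axis=1)  # Euclidean distances
        if len(dists) < 2:
            continue                                   # secmin undefined
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:                      # min < 0.6 * secmin
            matches.append((i, int(order[0])))
    return matches
```

Per S51, tracking proceeds to S54 only if len(matches) exceeds 10; otherwise the method falls back to re-detection (S2).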
The beneficial effects of the invention are as follows. On the one hand, using the detection network YOLO improves detection accuracy. On the other hand, even when YOLO fails to detect the target, key-point matching between the target's position in the previous frame and the current-frame position predicted by Kalman filtering can judge whether the target is present at the predicted position; this prevents tracking from failing when the target exists but is not detected, and effectively improves tracking accuracy. Moreover, because key points are matched only within image regions rather than over the entire image, tracking performance is also improved.
Detailed description of the invention
Fig. 1 is the flow chart of the YOLO-based image target tracking method of the invention;
Fig. 2 shows tracking results obtained with the YOLO-based image target tracking method of the invention.
Specific embodiment
The technical solution of the invention is further described below with reference to the accompanying drawings.
As shown in Fig. 1, an image target tracking method based on YOLO comprises the following steps:
S1, input a video;
S2, detect the target using the detection network YOLO and initialize a Kalman filter; the concrete implementation is:
S21, detect the input video using the detection network YOLO;
S22, if a target is detected, go to S23; otherwise go back to S21;
S23, initialize the Kalman filter with the detected position of the target, the target's position in the image being represented by its bounding rectangle.
S3, detect the current frame image using the detection network YOLO; if a target is detected, go to step S4, otherwise execute step S5;
S4, compute the intersection over union of the detected position and the predicted position of the target in the current frame image; if it is greater than the preset threshold, take the detected position as the target's position in the current frame; otherwise return to step S2. This comprises the following sub-steps:
S41, set the intersection-over-union threshold to 0.6;
S42, compute the intersection over union of the detected position and the predicted position of the target in the current frame image as follows:

iou = S_inter / (S_detection + S_prediction - S_inter)

where iou is the intersection over union of the detected and predicted positions, S_inter is the area of the overlap of the two bounding rectangles, S_detection is the area of the bounding rectangle of the detected position, and S_prediction is the area of the bounding rectangle of the predicted position;
S43, if iou is greater than the preset threshold of 0.6, take the detected position as the target's position in the current frame; otherwise return to step S2.
S5, perform key-point matching between the target's position in the previous frame and its predicted position in the current frame; if the number of matched pairs is greater than the preset threshold, obtain the target's position in the current frame from the match; otherwise return to step S2. This comprises the following sub-steps:
S51, set the matched-pair threshold to 10;
S52, run SIFT (Scale-Invariant Feature Transform) key-point detection separately on the target's detected position in the previous frame and on its predicted position in the current frame, and extract the corresponding SIFT key points, each SIFT key point corresponding to one SIFT feature vector;
S53, compute the Euclidean distances between each SIFT feature vector of the previous-frame detected position and the SIFT feature vectors of the current-frame predicted position; take the minimum distance min and the second-smallest distance secmin; if min < 0.6*secmin, the SIFT key point in the previous-frame detected position is considered to match the key point in the current-frame predicted position at distance min; traverse all SIFT key points in the previous-frame detected position to complete the matching and obtain the set of matched key-point pairs; if the number of matched pairs is greater than the threshold of 10, execute step S54; otherwise return to step S2;
S54, estimate the affine transformation matrix between the target's detected position in the previous frame and its predicted position in the current frame using the RANSAC (Random Sample Consensus) algorithm: randomly select four pairs from the SIFT key-point pairs matched in S53 and, from the coordinates of these four pairs, determine the affine transformation matrix H by the following formula:

[x', y', 1]^T = H * [x, y, 1]^T,  where H = [h'0 h'1 Δx; h'2 h'3 Δy; 0 0 1]

in which h'0, h'1, h'2 and h'3 are scale-rotation factors, Δx and Δy are the offsets in the x and y directions of the target's current-frame predicted position relative to its previous-frame detected position, and [x, y, 1]^T and [x', y', 1]^T are the homogeneous coordinates of any matched pair of SIFT key points in the previous-frame detected position and the current-frame predicted position respectively;
S55, apply the affine transformation matrix H from S54 to the four vertices of the bounding rectangle of the target's position in the previous frame to obtain the four vertices of the target's bounding rectangle in the current frame, and from them the width and height of the rectangle;
S56, combine the width and height obtained in S55 with the centre coordinates of the bounding rectangle of the target's predicted position in the current frame to determine the target's position in the current frame;
S57, update the Kalman filter with the target's position in the current frame.
S6, check whether the video has ended; if so, end tracking; otherwise return to step S3.
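Steps S54 and S55 estimate an affine transform from matched key points and push the previous-frame box corners through it. The sketch below makes two simplifications: a plain least-squares fit stands in for one RANSAC four-point hypothesis (the robust sampling loop is omitted), and the transform is then applied to the four rectangle vertices as in S55.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit [x', y'] = A @ [x, y] + t from point pairs.

    Returns the 2x3 matrix [A | t], i.e. the top two rows of the H of S54.
    The patent estimates H with RANSAC over random four-point subsets; this
    direct solve is a simplified stand-in for one such hypothesis.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    M = np.hstack([src, np.ones((len(src), 1))])     # rows: [x, y, 1]
    params, *_ = np.linalg.lstsq(M, dst, rcond=None)  # 3x2 solution
    return params.T                                   # 2x3: [[h0, h1, dx], [h2, h3, dy]]

def transform_box(H, corners):
    """Map the four bounding-box corners through H (step S55)."""
    pts = np.hstack([np.asarray(corners, float), np.ones((4, 1))])
    return pts @ H.T                                  # transformed corners
```

For a pure scale-by-2 plus translation (3, 4), the fit recovers H = [[2, 0, 3], [0, 2, 4]] and maps a 10x10 box at the origin to a 20x20 box at (3, 4), from which S55's width and height follow directly.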
Fig. 2 shows tracking results obtained with the YOLO-based image target tracking method of the invention. The left column shows the vehicle detections produced by YOLO, and the right column the vehicle-tracking results for the corresponding frames. YOLO fails to detect the vehicle in frames 6 to 8, 10 to 11 and 17 to 21, but the method of the invention still obtains the vehicle's position in those frames, thereby improving tracking accuracy.
The invention makes full use of the information in consecutive frames of the image sequence. When YOLO does not detect the target, key-point matching between the target's position in the previous frame and the current-frame position predicted by the Kalman filter can judge whether the target is present in the predicted region. This avoids the limitation of losing the track whenever YOLO misses a detection and the presence of the target in the predicted region cannot otherwise be judged, and improves the accuracy of target tracking.
Those of ordinary skill in the art will understand that the embodiments described herein are intended to help the reader understand the principle of the invention, and that the scope of protection of the invention is not limited to these specific statements and embodiments. Based on the technical teaching disclosed herein, those skilled in the art can make various modifications and combinations that do not depart from the essence of the invention; such modifications and combinations remain within the scope of the invention.

Claims (4)

1. An image target tracking method based on YOLO, characterized by comprising the following steps:
S1, inputting a video;
S2, detecting the target using the detection network YOLO and initializing a Kalman filter;
S3, detecting the current frame image using the detection network YOLO; if a target is detected, going to step S4, otherwise executing step S5;
S4, computing the intersection over union of the detected position and the predicted position of the target in the current frame image; if it is greater than a preset threshold, taking the detected position as the target's position in the current frame, otherwise returning to step S2;
S5, performing key-point matching between the target's position in the previous frame and its predicted position in the current frame; if the number of matched key-point pairs is greater than a preset threshold, obtaining the target's position in the current frame from the match, otherwise returning to step S2;
S6, checking whether the video has ended; if so, ending tracking, otherwise returning to step S3.
2. The image target tracking method based on YOLO according to claim 1, characterized in that step S2 is implemented as follows:
S21, detecting the input video using the detection network YOLO;
S22, if a target is detected, going to S23, otherwise going back to S21;
S23, initializing the Kalman filter with the detected position of the target, the target's position in the image being represented by its bounding rectangle.
3. The image target tracking method based on YOLO according to claim 1, characterized in that step S4 comprises the following sub-steps:
S41, setting the intersection-over-union threshold to 0.6;
S42, computing the intersection over union of the detected position and the predicted position of the target in the current frame image as follows:

iou = S_inter / (S_detection + S_prediction - S_inter)

where iou is the intersection over union of the detected and predicted positions, S_inter is the area of the overlap of the two bounding rectangles, S_detection is the area of the bounding rectangle of the detected position, and S_prediction is the area of the bounding rectangle of the predicted position;
S43, if iou is greater than the preset threshold of 0.6, taking the detected position as the target's position in the current frame; otherwise returning to step S2.
4. The image target tracking method based on YOLO according to claim 1, characterized in that step S5 comprises the following sub-steps:
S51, setting the matched-pair threshold to 10;
S52, running SIFT key-point detection separately on the target's detected position in the previous frame and on its predicted position in the current frame, and extracting the corresponding SIFT key points, each SIFT key point corresponding to one SIFT feature vector;
S53, computing the Euclidean distances between each SIFT feature vector of the previous-frame detected position and the SIFT feature vectors of the current-frame predicted position; taking the minimum distance min and the second-smallest distance secmin; if min < 0.6*secmin, considering the SIFT key point in the previous-frame detected position to match the key point in the current-frame predicted position at distance min; traversing all SIFT key points in the previous-frame detected position to complete the matching and obtain the set of matched key-point pairs; if the number of matched pairs is greater than the threshold of 10, executing step S54; otherwise returning to step S2;
S54, estimating the affine transformation matrix between the target's detected position in the previous frame and its predicted position in the current frame using the RANSAC algorithm: randomly selecting four pairs from the SIFT key-point pairs matched in S53 and, from the coordinates of these four pairs, determining the affine transformation matrix H by the following formula:

[x', y', 1]^T = H * [x, y, 1]^T,  where H = [h'0 h'1 Δx; h'2 h'3 Δy; 0 0 1]

in which h'0, h'1, h'2 and h'3 are scale-rotation factors, Δx and Δy are the offsets in the x and y directions of the target's current-frame predicted position relative to its previous-frame detected position, and [x, y, 1]^T and [x', y', 1]^T are the homogeneous coordinates of any matched pair of SIFT key points in the previous-frame detected position and the current-frame predicted position respectively;
S55, applying the affine transformation matrix H from S54 to the four vertices of the bounding rectangle of the target's position in the previous frame to obtain the four vertices of the target's bounding rectangle in the current frame, and from them the width and height of the rectangle;
S56, combining the width and height obtained in S55 with the centre coordinates of the bounding rectangle of the target's predicted position in the current frame to determine the target's position in the current frame;
S57, updating the Kalman filter with the target's position in the current frame.
CN201811097244.0A 2018-09-20 2018-09-20 Image target tracking method based on YOLO Active CN109118523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811097244.0A CN109118523B (en) 2018-09-20 2018-09-20 Image target tracking method based on YOLO

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811097244.0A CN109118523B (en) 2018-09-20 2018-09-20 Image target tracking method based on YOLO

Publications (2)

Publication Number Publication Date
CN109118523A true CN109118523A (en) 2019-01-01
CN109118523B CN109118523B (en) 2022-04-22

Family

ID=64859818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811097244.0A Active CN109118523B (en) 2018-09-20 2018-09-20 Image target tracking method based on YOLO

Country Status (1)

Country Link
CN (1) CN109118523B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871763A (en) * 2019-01-16 2019-06-11 清华大学 A kind of specific objective tracking based on YOLO
CN109903312A (en) * 2019-01-25 2019-06-18 北京工业大学 A kind of football sportsman based on video multi-target tracking runs distance statistics method
CN109919999A (en) * 2019-01-31 2019-06-21 深兰科技(上海)有限公司 A kind of method and device of target position detection
CN110059554A (en) * 2019-03-13 2019-07-26 重庆邮电大学 A kind of multiple branch circuit object detection method based on traffic scene
CN110084831A (en) * 2019-04-23 2019-08-02 江南大学 Based on the more Bernoulli Jacob's video multi-target detecting and tracking methods of YOLOv3
CN110287778A (en) * 2019-05-15 2019-09-27 北京旷视科技有限公司 A kind of processing method of image, device, terminal and storage medium
CN110288632A (en) * 2019-05-15 2019-09-27 北京旷视科技有限公司 A kind of image processing method, device, terminal and storage medium
CN110472594A (en) * 2019-08-20 2019-11-19 腾讯科技(深圳)有限公司 Method for tracking target, information insertion method and equipment
CN111696135A (en) * 2020-06-05 2020-09-22 深兰人工智能芯片研究院(江苏)有限公司 Intersection ratio-based forbidden parking detection method
CN111738063A (en) * 2020-05-08 2020-10-02 华南理工大学 Ship target tracking method, system, computer equipment and storage medium
CN111891061A (en) * 2020-07-09 2020-11-06 广州亚美智造科技有限公司 Vehicle collision detection method and device and computer equipment
CN111986229A (en) * 2019-05-22 2020-11-24 阿里巴巴集团控股有限公司 Video target detection method, device and computer system
US10867380B1 (en) 2019-07-01 2020-12-15 Sas Institute Inc. Object and data point tracking to control system in operation
US11055861B2 (en) 2019-07-01 2021-07-06 Sas Institute Inc. Discrete event simulation with sequential decision making
CN113489897A (en) * 2021-06-28 2021-10-08 杭州逗酷软件科技有限公司 Image processing method and related device
CN115861940A (en) * 2023-02-24 2023-03-28 珠海金智维信息科技有限公司 Working scene behavior evaluation method and system based on human body tracking and recognition technology
WO2023184197A1 (en) * 2022-03-30 2023-10-05 京东方科技集团股份有限公司 Target tracking method and apparatus, system, and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789645A (en) * 2012-06-21 2012-11-21 武汉烽火众智数字技术有限责任公司 Multi-objective fast tracking method for perimeter precaution
CN103927764A (en) * 2014-04-29 2014-07-16 重庆大学 Vehicle tracking method combining target information and motion estimation
CN104008371A (en) * 2014-05-22 2014-08-27 南京邮电大学 Regional suspicious target tracking and recognizing method based on multiple cameras
CN105335986A (en) * 2015-09-10 2016-02-17 西安电子科技大学 Characteristic matching and MeanShift algorithm-based target tracking method
CN106204649A (en) * 2016-07-05 2016-12-07 西安电子科技大学 A kind of method for tracking target based on TLD algorithm
CN107403440A (en) * 2016-05-18 2017-11-28 株式会社理光 For the method and apparatus for the posture for determining object
CN107766821A (en) * 2017-10-23 2018-03-06 江苏鸿信系统集成有限公司 All the period of time vehicle detecting and tracking method and system in video based on Kalman filtering and deep learning
CN108053427A (en) * 2017-10-31 2018-05-18 深圳大学 A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN108108697A (en) * 2017-12-25 2018-06-01 中国电子科技集团公司第五十四研究所 A kind of real-time UAV Video object detecting and tracking method
CN108509897A (en) * 2018-03-29 2018-09-07 同济大学 A kind of human posture recognition method and system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789645A (en) * 2012-06-21 2012-11-21 武汉烽火众智数字技术有限责任公司 Multi-objective fast tracking method for perimeter precaution
CN103927764A (en) * 2014-04-29 2014-07-16 重庆大学 Vehicle tracking method combining target information and motion estimation
CN104008371A (en) * 2014-05-22 2014-08-27 南京邮电大学 Regional suspicious target tracking and recognizing method based on multiple cameras
CN105335986A (en) * 2015-09-10 2016-02-17 西安电子科技大学 Characteristic matching and MeanShift algorithm-based target tracking method
CN107403440A (en) * 2016-05-18 2017-11-28 株式会社理光 For the method and apparatus for the posture for determining object
CN106204649A (en) * 2016-07-05 2016-12-07 西安电子科技大学 A kind of method for tracking target based on TLD algorithm
CN107766821A (en) * 2017-10-23 2018-03-06 江苏鸿信系统集成有限公司 All the period of time vehicle detecting and tracking method and system in video based on Kalman filtering and deep learning
CN108053427A (en) * 2017-10-31 2018-05-18 深圳大学 A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN108108697A (en) * 2017-12-25 2018-06-01 中国电子科技集团公司第五十四研究所 A kind of real-time UAV Video object detecting and tracking method
CN108509897A (en) * 2018-03-29 2018-09-07 同济大学 A kind of human posture recognition method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALEX et al.: "Simple online and realtime tracking", arXiv *
NICOLAI et al.: "Simple online and realtime tracking with a deep association metric", arXiv *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871763A (en) * 2019-01-16 2019-06-11 清华大学 Specific target tracking method based on YOLO
CN109903312A (en) * 2019-01-25 2019-06-18 北京工业大学 Football player running distance statistics method based on video multi-target tracking
CN109919999A (en) * 2019-01-31 2019-06-21 深兰科技(上海)有限公司 Target position detection method and device
CN109919999B (en) * 2019-01-31 2021-06-11 深兰科技(上海)有限公司 Target position detection method and device
CN110059554A (en) * 2019-03-13 2019-07-26 重庆邮电大学 Multi-branch target detection method based on traffic scene
CN110059554B (en) * 2019-03-13 2022-07-01 重庆邮电大学 Multi-branch target detection method based on traffic scene
CN110084831A (en) * 2019-04-23 2019-08-02 江南大学 Multi-target detection and tracking method based on YOLOv3 multi-Bernoulli video
CN110084831B (en) * 2019-04-23 2021-08-24 江南大学 Multi-target detection tracking method based on YOLOv3 multi-Bernoulli video
CN110287778A (en) * 2019-05-15 2019-09-27 北京旷视科技有限公司 Image processing method and device, terminal and storage medium
CN110288632A (en) * 2019-05-15 2019-09-27 北京旷视科技有限公司 Image processing method, device, terminal and storage medium
CN110287778B (en) * 2019-05-15 2021-09-10 北京旷视科技有限公司 Image processing method and device, terminal and storage medium
CN111986229A (en) * 2019-05-22 2020-11-24 阿里巴巴集团控股有限公司 Video target detection method, device and computer system
US10867380B1 (en) 2019-07-01 2020-12-15 Sas Institute Inc. Object and data point tracking to control system in operation
US11176692B2 (en) 2019-07-01 2021-11-16 Sas Institute Inc. Real-time concealed object tracking
US11055861B2 (en) 2019-07-01 2021-07-06 Sas Institute Inc. Discrete event simulation with sequential decision making
US11176691B2 (en) 2019-07-01 2021-11-16 Sas Institute Inc. Real-time spatial and group monitoring and optimization
CN110472594B (en) * 2019-08-20 2022-12-06 腾讯科技(深圳)有限公司 Target tracking method, information insertion method and equipment
CN110472594A (en) * 2019-08-20 2019-11-19 腾讯科技(深圳)有限公司 Target tracking method, information insertion method and device
CN111738063A (en) * 2020-05-08 2020-10-02 华南理工大学 Ship target tracking method, system, computer equipment and storage medium
CN111738063B (en) * 2020-05-08 2023-04-18 华南理工大学 Ship target tracking method, system, computer equipment and storage medium
CN111696135A (en) * 2020-06-05 2020-09-22 深兰人工智能芯片研究院(江苏)有限公司 Illegal parking detection method based on intersection over union
CN111891061B (en) * 2020-07-09 2021-07-30 广州亚美智造科技有限公司 Vehicle collision detection method and device and computer equipment
CN111891061A (en) * 2020-07-09 2020-11-06 广州亚美智造科技有限公司 Vehicle collision detection method and device and computer equipment
CN113489897A (en) * 2021-06-28 2021-10-08 杭州逗酷软件科技有限公司 Image processing method and related device
WO2023184197A1 (en) * 2022-03-30 2023-10-05 京东方科技集团股份有限公司 Target tracking method and apparatus, system, and storage medium
CN115861940A (en) * 2023-02-24 2023-03-28 珠海金智维信息科技有限公司 Working scene behavior evaluation method and system based on human body tracking and recognition technology

Also Published As

Publication number Publication date
CN109118523B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN109118523A (en) A kind of tracking image target method based on YOLO
CN110427905B (en) Pedestrian tracking method, device and terminal
Nakhmani et al. A new distance measure based on generalized image normalized cross-correlation for robust video tracking and image recognition
CN105405154B (en) Target object tracking based on color-structure feature
CN108960211B (en) Multi-target human body posture detection method and system
CN109685827B (en) Target detection and tracking method based on DSP
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
Zhou et al. A robust object tracking algorithm based on SURF
CN113608663B (en) Fingertip tracking method based on deep learning and K-curvature method
Joshi et al. A low cost and computationally efficient approach for occlusion handling in video surveillance systems
Cho et al. Distance-based camera network topology inference for person re-identification
Xu et al. A real-time, continuous pedestrian tracking and positioning method with multiple coordinated overhead-view cameras
CN103996207A (en) Object tracking method
Wang et al. Multi-features visual odometry for indoor mapping of UAV
Zhang et al. Target tracking for mobile robot platforms via object matching and background anti-matching
CN111882594A (en) ORB feature point-based polarization image rapid registration method and device
Zhe et al. A robust lane detection method in the different scenarios
CN107122714B (en) Real-time pedestrian detection method based on edge constraint
CN108694348B (en) Tracking registration method and device based on natural features
CN106558065A (en) Real-time visual target tracking based on image color and texture analysis
Xiong et al. Crowd density estimation based on image potential energy model
Gao et al. Image matching method based on multi-scale corner detection
Li et al. Pedestrian target tracking algorithm based on improved RANSAC and KCF
Mtir et al. Aerial sequence registration for vehicle detection
CN107146244B (en) Method for registering images based on PBIL algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant