CN113298848A - Object tracking method integrating instance segmentation and Camshift - Google Patents
Object tracking method integrating instance segmentation and Camshift
- Publication number
- CN113298848A (application CN202110626999.0A)
- Authority
- CN
- China
- Prior art keywords
- camshift
- tracking
- frame
- segmentation
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/215—Image analysis; Analysis of motion; Motion-based segmentation
- G06T7/11—Image analysis; Segmentation; Region-based segmentation
- G06T7/136—Image analysis; Segmentation; Edge detection involving thresholding
- G06T2207/10016—Image acquisition modality: Video; Image sequence
- G06T2207/20081—Special algorithmic details: Training; Learning
Abstract
The invention discloses an object tracking method fusing instance segmentation and Camshift that improves the accuracy of Camshift object tracking; the core idea is to supplement the Camshift tracking result with the instance segmentation result. In the algorithm, once instance segmentation finds a target, the target is tracked with Camshift; when recognition fails in a frame, the Camshift tracking result stands in for the recognized object, so the inter-frame continuity information carried by Camshift preserves the ability to keep identifying the object while instance segmentation fails. To improve the Camshift tracking itself, the Camshift target search window is updated with the object frame identified by instance segmentation, which overcomes defects such as Camshift's susceptibility to interference from background colors and improves the robustness of object tracking for different applications in various scenes.
Description
Technical Field
The invention belongs to the technical field of object tracking, and particularly relates to an object tracking method fusing instance segmentation and Camshift.
Background
Target tracking has long been an important and highly challenging research direction in machine vision. Tracking algorithms can be divided into four broad categories: model-based, active-contour-based, mean-shift-based, and feature-matching-based. Within the mean-shift category, the Camshift algorithm has gradually become a widely studied tracking algorithm thanks to its outstanding real-time performance and its adaptability to changes in target size. However, the Camshift algorithm has certain limitations: the extraction of target color feature points is incomplete; tracking relies on color features alone, so when the background color is too similar to the target color, tracking accuracy drops and the target may even be lost; and after the target is lost, the algorithm lacks a mechanism to relocate it.
Disclosure of Invention
To solve these problems, the invention discloses an object tracking method fusing instance segmentation and Camshift, which increases the accuracy of tracking a moving object.
To achieve this purpose, the technical scheme of the invention is as follows:
an object tracking method fusing instance segmentation and Camshift, characterized by comprising the following steps:
(1) using YOLACT alone to identify the objects in the first frame, while initializing Camshift to track each object;
(2) in every subsequent frame, using the overlap ratio θ to judge whether the target frame and the tracking frame satisfy the condition of successful segmentation or successful tracking;
(3) outputting the target tracking result.
In step (1), YOLACT alone identifies the objects in the first frame while Camshift is initialized to track them; the specific steps are as follows:
(1.1) train the YOLACT instance segmentation network on the COCO dataset;
(1.2) identify and segment the objects with YOLACT, attaching a label to each object;
(1.3) track each segmented object with Camshift.
In step (2), the overlap ratio is used in every subsequent frame to judge whether the target frame and the tracking frame satisfy the condition of successful segmentation or successful tracking; the specific steps are as follows:
(2.1) compute the overlap ratio θ of the object region C_area tracked by Camshift and the object region Y_area segmented by YOLACT;
(2.2) if the overlap ratio θ exceeds a set threshold ε, the Camshift tracking is judged successful;
(2.3) if θ is below the threshold ε, the Camshift tracking is judged to have failed.
In step (3), the target tracking result is output; the specific steps are as follows:
(3.1) when segmentation fails, supplement the YOLACT result with the Camshift tracking frame;
(3.2) when tracking fails, or a newly identified object appears, add a new Camshift tracking target;
(3.3) when both succeed, output the YOLACT result.
The beneficial effects of the invention are as follows:
Addressing the tracking failures caused by interference from similar background colors when the current Camshift algorithm tracks an object, the invention proposes fusing in instance segmentation. The object is first identified by instance segmentation, then tracked with the Camshift algorithm, and the target frame is updated with the instance segmentation result in every frame, thereby enabling accurate tracking of the object.
Drawings
FIG. 1 is a flow chart of the object tracking method fusing instance segmentation and Camshift;
FIG. 2 is a flowchart of the Camshift algorithm.
Detailed Description
The present invention will be further illustrated with reference to the accompanying drawings and specific embodiments, which are to be understood as merely illustrative of the invention and not as limiting the scope of the invention.
FIG. 1 is a schematic flow chart of the method of the present invention.
Step S1: YOLACT alone is used to identify the objects in the first frame, while Camshift is initialized to track them. The specific steps are as follows:
S1.1, set reasonable hyperparameters such as the learning rate and the number of samples per batch, and train the YOLACT instance segmentation network on the COCO dataset to obtain a trained model;
S1.2, identify and segment the objects with YOLACT, attaching a label to each object;
S1.3, track each segmented object with the Camshift algorithm: take the object region segmented by YOLACT, compute the back-projection map of that region, run Mean-Shift iterations over the back-projection map to obtain the center of the segmented region, and proceed to the next frame to continue tracking; the Camshift algorithm flow chart is shown in FIG. 2.
the Camshift algorithm flow comprises the following steps:
s1.3.1, inputting a sequence of images;
s1.3.2, judging whether the first frame belongs to the first frame, if not, converting to HSV color space, taking Hue component, if so, selecting a tracking target, calculating a target Hue component distribution histogram, obtaining a color probability distribution diagram through back projection, updating a search box, searching the centroid of the search box through the size and the dimension of the search box, and then, entering the step S1.3.5;
s1.3.3, calculating a color probability distribution map;
s1.3.4, finding the center of mass of the search window;
s1.3.5, judging whether the search window is convergent, if the centroid of the search window is convergent, finding the target, returning the central position and size of the search window, if the centroid of the search window is not convergent, returning to step S1.3.3;
s1.3.6, judging whether the target belongs to the last frame, if so, ending the tracking, otherwise, continuing to return to step S1.3.2;
step S2: judging whether the target frame and the tracking frame meet the conditions of successful segmentation or successful tracking by using the contact ratio theta in each next frame, wherein the conditions specifically comprise the following steps:
s2.1, calculating an object area C tracked by the Camshift of the current frameareaAnd Yolact divides the object region YareaA zone overlap ratio θ of (a), wherein:
s2.2, if the contact ratio theta exceeds a set threshold epsilon, the Camshift tracking is proved to be successful;
s2.3, if the value is smaller than the set threshold epsilon, the Camshift tracking is proved to fail.
Step S3: output the result. The specific steps are as follows (a sketch in code is given after the list):
S3.1, when segmentation fails, supplement the YOLACT result with the Camshift tracking frame;
S3.2, when tracking fails, or a newly identified object appears, add a new Camshift tracking target;
S3.3, when both succeed, output the YOLACT result.
Claims (4)
1. An object tracking method fusing instance segmentation and Camshift, characterized by comprising the following steps:
(1) using YOLACT alone to identify the objects in the first frame, while initializing Camshift (Continuously Adaptive Mean Shift) to track each object;
(2) in each frame, using the overlap ratio θ to judge whether the target frame and the tracking frame satisfy the condition of successful segmentation or successful tracking;
(3) outputting the target tracking result.
2. The object tracking method fusing instance segmentation and Camshift according to claim 1, characterized in that in step (1), YOLACT alone identifies the objects in the first frame while Camshift is initialized to track them, with the following specific steps:
(1.1) training the YOLACT instance segmentation network on the COCO dataset;
(1.2) identifying and segmenting the objects with YOLACT, and attaching a label to each object;
(1.3) tracking each segmented object with the Camshift algorithm.
3. The object tracking method fusing instance segmentation and Camshift according to claim 1, characterized in that in step (2), the overlap ratio is used in each frame to judge whether the target frame and the tracking frame satisfy the condition of successful segmentation or successful tracking, with the following specific steps:
(2.1) computing the overlap ratio θ of the object region C_area tracked by Camshift and the object region Y_area segmented by YOLACT;
(2.2) if the overlap ratio θ exceeds a set threshold ε, judging that the Camshift tracking succeeded;
(2.3) if θ is below the threshold ε, judging that the Camshift tracking failed.
4. The object tracking method fusing instance segmentation and Camshift according to claim 1, characterized in that in step (3), the target tracking result is output, with the following specific steps:
(3.1) when segmentation fails, supplementing the YOLACT result with the Camshift tracking frame;
(3.2) when tracking fails, or a newly identified object appears, adding a new Camshift tracking target;
(3.3) when both succeed, outputting the YOLACT result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110626999.0A | 2021-06-04 | 2021-06-04 | Object tracking method integrating instance segmentation and Camshift |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113298848A (en) | 2021-08-24 |
Family
ID=77327222
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110626999.0A (pending) | Object tracking method integrating instance segmentation and Camshift | 2021-06-04 | 2021-06-04 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298848A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018121286A1 (en) * | 2016-12-30 | 2018-07-05 | Ninebot (Beijing) Tech Co., Ltd. | Target tracking method and device |
CN109816692A (en) * | 2019-01-11 | 2019-05-28 | Nanjing University of Science and Technology | Moving target tracking method based on the Camshift algorithm |
Non-Patent Citations (1)
Title |
---|
HAN PENG et al.: "Research on Target Tracking Algorithm Combining YOLO and Camshift" (联合YOLO和Camshift的目标跟踪算法研究), Computer Systems & Applications (计算机系统应用), vol. 28, no. 9, 5 September 2019 (2019-09-05), pages 271 - 277 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |