CN113763427A - Multi-target tracking method based on coarse-to-fine occlusion processing - Google Patents

Multi-target tracking method based on coarse-to-fine occlusion processing

Info

Publication number
CN113763427A
CN113763427A (application CN202111035065.6A)
Authority
CN
China
Prior art keywords
pedestrian
target
model
occlusion
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111035065.6A
Other languages
Chinese (zh)
Other versions
CN113763427B (en)
Inventor
路小波
张帅帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN202111035065.6A
Publication of CN113763427A
Application granted
Publication of CN113763427B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-target tracking method based on coarse-to-fine occlusion processing. Model construction comprises the following steps: first, an occlusion-score prediction is added to the JDE model prediction head, which completes the full processing of non-occluded targets and the coarse processing (localization) of occluded targets; on this basis, the located occluded-pedestrian regions are mapped and cropped to form the training set of a second-step model, which completes the accurate detection of occluded pedestrians and the extraction of their apparent feature vectors, thereby realizing the fine processing of occluded targets; finally, the outputs of the two models are integrated and a data association algorithm completes the pedestrian tracking. The invention solves the problem that pedestrians cannot be accurately tracked in scenes with occlusion, adapts well to public environments across different time periods and pedestrian densities, and achieves a good pedestrian tracking effect.

Description

Multi-target tracking method based on coarse-to-fine occlusion processing
Technical Field
The invention belongs to the field of computer vision and surveillance video analysis, and particularly relates to a multi-target tracking method based on coarse-to-fine occlusion processing.
Background
Multi-target tracking is an important component of surveillance video analysis. It can be used directly to analyze the motion trajectories of objects, and it also serves as the research basis for higher-level tasks such as motion recognition and behavior analysis.
To accomplish multi-target tracking, most mainstream deep learning algorithms adopt a tracking-by-detection strategy. These methods divide multi-target tracking into a detection module and an embedding module: the detection module performs target detection, while the embedding module extracts target features with a related algorithm. However, computation may be duplicated between the two modules, which slows down inference. To address this, some researchers have proposed integrating the detection module and the embedding module into one neural network in which the two modules share the same underlying features, avoiding redundant computation and improving performance. However, limited by their detection frameworks, these methods do not perform well when detecting and tracking occluded targets in some scenes. In particular, the detection framework often detects two mutually occluding targets as a single target, which in turn causes problems for tracking. To address occluded-target detection, improved non-maximum suppression algorithms such as Soft-NMS and Adaptive-NMS have been proposed, as well as improved loss functions such as repulsion loss.
Disclosure of Invention
To solve these problems, the invention discloses a multi-target tracking method based on coarse-to-fine occlusion processing, which separates the processing of non-occluded targets from that of occluded targets. The first-step model focuses on non-occluded targets and localizes occluded targets, realizing the coarse processing of occluded targets; the second-step model realizes the fine processing of occluded targets, thereby improving pedestrian tracking performance in occluded environments.
To achieve this goal, the technical scheme of the invention is as follows:
A multi-target tracking method based on coarse-to-fine occlusion processing comprises the following steps:
Step 1: label a data set obtained from a public place and construct a training set and a test set for pedestrian tracking.
Step 2: train the first-step model of the multi-target tracking method with part of the labeled information of the training set, realizing the detection of non-occluded pedestrian targets, the extraction of their apparent feature vectors, and the localization of occluded-pedestrian regions, thereby completing the full processing of non-occluded targets and the coarse processing of occluded targets.
Step 3: after the first-step model has been trained, map and crop the located occluded-pedestrian regions to serve as the training set of the second-step model, and complete the detection of occluded pedestrians and the extraction of their apparent feature vectors, thereby realizing the fine processing of occluded targets.
Step 4: integrate the results output by the two models and complete pedestrian tracking with a data association algorithm.
Further, in the first step, a pedestrian tracking data set is constructed from surveillance video captured by a plurality of cameras in a public place. The labeled content includes an occlusion bounding-box id (0 when no occlusion is present, incremented when two pedestrians occlude each other), a pedestrian id (-1 when occlusion is present, otherwise incremented from 0), the bounding-box position, an occlusion flag for the bounding box (1 for occluded, 0 for non-occluded), and which occlusion bounding box contains the pedestrian target (0 when the bounding box itself contains occlusion, otherwise the id of the occlusion bounding box that contains this pedestrian).
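For concreteness, one possible way to organise such an annotation record is sketched below; the field names and example values are illustrative assumptions, since the patent does not specify a storage format.

```python
# Hypothetical annotation record for one bounding box in one frame; the field
# names and example values are illustrative only -- the patent does not
# specify a storage format.
annotation = {
    "frame": 120,
    "occlusion_box_id": 3,          # 0 if no occlusion, else an incrementing id for the occluding pair
    "pedestrian_id": -1,            # -1 when occlusion is present, else an id incremented from 0
    "bbox": [412, 155, 498, 371],   # x1, y1, x2, y2 in pixels
    "occluded": 1,                  # 1 = occlusion present in this box, 0 = not occluded
    "containing_occlusion_box": 0,  # id of the occlusion box containing this pedestrian, 0 for an occlusion box itself
}
```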
further, in the second step, the first step model of the method defines two kinds of boundary frame regression, one is boundary frame regression under the condition that a single pedestrian is not shielded, and the other is boundary frame regression under the condition that two pedestrians are shielded; on the basis of the JDE model, the first-step model adds shielding fraction prediction in a prediction head, so that whether the pedestrians in the regressed boundary frame are shielded or not can be judged; if the occlusion does not exist, the position and the apparent characteristic vector of the pedestrian can be simultaneously extracted, and if the occlusion exists, the occlusion area is positioned, and the rough treatment of the occlusion of the pedestrian is completed.
Further, in the third step, the occlusion regions localized in step two are mapped onto a small-scale feature map of the first-step model and cropped with the ROI Align algorithm to serve as the input of the second-step model; the second-step model performs fine processing on the feature maps of the occlusion regions to obtain the positions and apparent feature vectors of the occluded pedestrians. When training the bounding boxes of the occluded pedestrians, the loss function combines the SmoothL1 loss with weighted RepGT and RepBox losses:

L_box = L_SmoothL1 + α·L_RepGT + β·L_RepBox

where L_SmoothL1 denotes the SmoothL1 loss, L_RepGT the RepGT loss, L_RepBox the RepBox loss, and α, β are the weighting parameters.
Further, in the fourth step, the results of the first step and the second step are integrated to obtain the positions and apparent feature vectors of all pedestrian targets in the current frame; matching of pedestrian targets is completed using the fact that, between adjacent frames, the same pedestrian target has similar apparent feature vectors and its position changes little, finally completing the multi-target tracking of pedestrians.
The invention has the beneficial effects that:
1) The invention creatively proposes a coarse-to-fine occlusion processing method: the first step completes the coarse processing of occluded targets and the full processing of non-occluded targets, the second step completes the fine processing of occluded targets, and finally the results of the two steps are combined to obtain the final pedestrian positions and the corresponding apparent feature vectors.
2) In the first-step model, an occlusion-score prediction is added to the prediction head on the basis of the JDE model, so that the model can judge whether occlusion exists in an output bounding box; if so, the occlusion bounding box is localized, realizing the coarse processing of occluded targets.
3) The method maps the occluded-pedestrian bounding boxes localized by the first-step model onto the small-scale feature map of the first-step model (the small-scale feature map retains more information) and crops them with the ROI Align algorithm as the input of the occlusion refinement model (the second-step model).
4) The invention designs the network structure of the occlusion refinement model (the second-step model) and uses the SmoothL1 loss combined with weighted RepGT and RepBox losses as the loss function of its bounding-box regression.
5) For the two-step model architecture, the invention proposes a two-stage training method: in the first stage, the parameters of the second-step model are frozen and only the first-step model is trained; in the second stage, after the first-step model has been trained, its parameters are frozen and the feature maps corresponding to the occluded-pedestrian bounding boxes localized by the first-step model are used as the input of the second-step model for training. This solves the problem that existing methods cannot accurately track pedestrians in scenes with occlusion, adapts well to public environments across different time periods and pedestrian densities, and achieves a good pedestrian tracking effect.
Drawings
Fig. 1 is a flow chart of the present invention.
Fig. 2 is a scene image from a subway surveillance video.
Fig. 3 is a framework diagram of the multi-target tracking network model based on coarse-to-fine occlusion processing.
Fig. 4 is a diagram of the first-step model prediction head and its loss function.
Fig. 5 is a diagram of the network structure of the occlusion refinement model (the second-step model).
Fig. 6 shows the multi-target tracking results.
Detailed Description
The present invention will be further illustrated with reference to the accompanying drawings and specific embodiments, which are to be understood as merely illustrative of the invention and not as limiting the scope of the invention.
Examples
The surveillance video used for model training in this embodiment comes from an actual subway scene; the scene is shown in Fig. 2.
In this embodiment, the subway station surveillance videos shown in Fig. 2 are taken as an example. These video images contain pedestrians both with and without occlusion. After the video of the subway scene is obtained, the pedestrians in the video are labeled, yielding a subway pedestrian multi-target tracking data set.
This example provides a multi-target tracking method based on coarse-to-fine occlusion processing, which applies coarse-to-fine processing to the occluded targets in the input image and, on that basis, achieves a better multi-target tracking effect by combining an optimized loss function, optimized NMS, and other measures. The framework of the model is shown in Fig. 3, and the specific steps are as follows:
1) Image input processing: to eliminate the adverse effect of varying input image sizes on model training and to account for factors such as resolution, each frame of the video is resized to 1088 × 608 before being input to the model.
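A minimal resizing routine in this spirit is sketched below; the patent only states the 1088 × 608 target size, so the aspect-ratio-preserving padding used here is an assumption.

```python
import cv2
import numpy as np

def letterbox(frame, target_w=1088, target_h=608, pad_value=127):
    """Resize a frame to 1088 x 608 while preserving the aspect ratio and padding
    the borders. The patent only states the target size; the letterbox-style
    padding used here is an assumption."""
    h, w = frame.shape[:2]
    scale = min(target_w / w, target_h / h)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(frame, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    canvas = np.full((target_h, target_w, 3), pad_value, dtype=frame.dtype)
    top, left = (target_h - new_h) // 2, (target_w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas, scale, (left, top)
```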
2) Adding occlusion-score prediction: in the first step, on the basis of the JDE model, an occlusion-score prediction is added to the prediction head, which then outputs four kinds of predictions: confidence, bounding-box regression, occlusion score, and apparent feature vector. The confidence, occlusion score, and apparent feature vector are trained with cross-entropy loss, and the bounding-box regression is trained with SmoothL1 loss. The weighting parameters of the individual loss terms are determined adaptively during training. The total loss is computed as

L_total = Σ_{i=1}^{M} (w_conf^i · L_conf^i + w_occ^i · L_occ^i + w_box^i · L_box^i + w_emb^i · L_emb^i)

where M is the number of prediction heads; L_conf^i, L_occ^i, L_box^i, and L_emb^i denote the confidence loss, occlusion prediction loss, bounding-box regression loss, and apparent-feature-vector loss of the i-th head; and the weights w^i associated with each loss are modeled as learnable parameters. A schematic is shown in Fig. 4.
3) Mapping and cropping of the feature map: after the first-step model has localized the occlusion bounding boxes, the occlusion bounding boxes remaining after non-maximum suppression are mapped onto the small-scale feature map of the first-step model and cropped with the ROI Align algorithm; the cropped feature maps serve as the input of the second-step model.
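A minimal sketch of this mapping and cropping step, using the ROI Align implementation from torchvision, is given below; the feature-map stride and output size are assumptions.

```python
import torch
from torchvision.ops import roi_align

def crop_occlusion_regions(feature_map, occlusion_boxes, stride=8, output_size=(16, 8)):
    """Map occlusion bounding boxes (given in image coordinates) onto the first-step
    model's small-scale feature map and crop them with ROI Align. The stride and
    output size are illustrative assumptions."""
    batch_idx = torch.zeros((occlusion_boxes.shape[0], 1),
                            dtype=occlusion_boxes.dtype, device=occlusion_boxes.device)
    rois = torch.cat([batch_idx, occlusion_boxes], dim=1)        # (K, 5): batch index + x1 y1 x2 y2
    return roi_align(feature_map, rois, output_size=output_size,
                     spatial_scale=1.0 / stride)                 # image coords -> feature-map coords
```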
4) Fine processing of occluded targets: the purpose of the occlusion refinement model (the second-step model) is to accurately obtain the bounding boxes and corresponding apparent feature vectors of the occluded pedestrians. Its network structure is shown in Fig. 5: after passing through a convolution block, the cropped feature map is fed into two branches, one for confidence prediction and pedestrian bounding-box regression, and the other for extracting the apparent feature vector of the occluded pedestrian.
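One plausible realisation of this two-branch structure is sketched below; the channel sizes and layer depths are assumptions.

```python
import torch
import torch.nn as nn

class OcclusionRefinementHead(nn.Module):
    """Sketch of the second-step network: a shared convolution block followed by a
    detection branch (confidence + box regression) and an embedding branch.
    Channel sizes and layer depths are assumptions."""
    def __init__(self, in_channels=256, emb_dim=512):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
        )
        self.det_branch = nn.Conv2d(in_channels, 2 + 4, 3, padding=1)    # confidence + box offsets
        self.emb_branch = nn.Conv2d(in_channels, emb_dim, 3, padding=1)  # apparent feature vector

    def forward(self, roi_feat):
        x = self.shared(roi_feat)
        return self.det_branch(x), self.emb_branch(x)
```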
5) Loss function of the occlusion refinement model: the confidence and apparent feature vector of the occlusion refinement model are trained with cross-entropy loss, and the bounding-box regression is trained with the SmoothL1 loss combined with weighted RepGT and RepBox losses. The SmoothL1 loss pulls the bounding-box predictions as close to the ground truth as possible; the RepGT loss pushes a predicted bounding box away from neighbouring ground-truth boxes that are not its target; the RepBox loss keeps predicted boxes that regress to different targets as far apart as possible. The three terms cooperate and all contribute positively to regressing the bounding boxes of occluded pedestrians. Meanwhile, the bounding-box regression loss, confidence loss and apparent-feature-vector loss are weighted in the same way as in the first-step model.
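The sketch below illustrates how such a combined box loss could be assembled; the smooth_ln penalty follows the published repulsion-loss formulation, while the default weights and the precomputed IoG/IoU inputs are assumptions.

```python
import math
import torch
import torch.nn.functional as F

def smooth_ln(x, sigma=0.5):
    """Smoothed -ln(1 - x) penalty from the repulsion loss of Wang et al. (CVPR 2018)."""
    x = torch.clamp(x, max=1.0 - 1e-6)
    linear = (x - sigma) / (1.0 - sigma) - math.log(1.0 - sigma)
    return torch.where(x <= sigma, -torch.log(1.0 - x), linear)

def occluded_box_loss(pred_boxes, gt_boxes, iog_other_gt, iou_pred_pairs,
                      alpha=0.5, beta=0.5):
    """Sketch of the refinement model's box loss: SmoothL1 attraction to the assigned
    ground truth plus RepGT / RepBox repulsion terms weighted by alpha and beta.
    The IoG / IoU inputs are assumed to be precomputed; a complete implementation
    also needs the target-assignment logic, which is omitted here."""
    l_attr = F.smooth_l1_loss(pred_boxes, gt_boxes)
    l_repgt = smooth_ln(iog_other_gt).mean()      # push predictions away from neighbouring GT boxes
    l_repbox = smooth_ln(iou_pred_pairs).mean()   # push apart predictions assigned to different targets
    return l_attr + alpha * l_repgt + beta * l_repbox
```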
6) Integration and optimization: for the bounding boxes output by the second-step model (the occlusion refinement model), boxes that do not meet the requirements are first removed, such as boxes with too small an area or with coordinates outside the image region; the remaining boxes are processed with the Soft-NMS algorithm, merged with the bounding boxes output by the first-step model, and processed with Soft-NMS again to obtain the final result.
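For reference, a minimal Gaussian Soft-NMS routine is sketched below; the decay parameter and score threshold are assumptions.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Minimal Gaussian Soft-NMS (Bodla et al., 2017): instead of discarding
    overlapping boxes outright, decay their scores by exp(-IoU^2 / sigma).
    Returns the indices of the surviving boxes; parameters are assumptions."""
    scores = scores.astype(float).copy()
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            scores[i] *= np.exp(-(box_iou(boxes[best], boxes[i]) ** 2) / sigma)
        idxs = [i for i in idxs if scores[i] >= score_thresh]
    return keep
```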
7) Data association and trajectory generation: in the first matching stage, the cosine distance between the apparent feature vectors of the detected targets and those of the tracked targets is used as the main matching criterion, and the distance between the detected bounding boxes and the tracked bounding boxes predicted by a Kalman filter is used as the auxiliary criterion, with weights of 0.95 and 0.05 respectively; the Hungarian algorithm completes this first matching. In the second matching stage, the intersection over union (IoU) between the detected and tracked bounding boxes is used as the matching criterion, again solved with the Hungarian algorithm. Tracked targets that are not matched are marked as lost and still participate in matching in subsequent frames; when a lost track fails to match a new target for 25 consecutive frames (the frame rate is used as the threshold here), it is removed from the tracked-target set and no longer participates in matching.
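A sketch of the first matching stage under these weights is given below; the normalisation of the box-distance term and the gating threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def first_stage_matching(det_embs, trk_embs, det_boxes, trk_boxes_pred,
                         w_app=0.95, w_motion=0.05, max_cost=0.7,
                         image_diag=np.hypot(1088, 608)):
    """Sketch of the first matching stage: appearance cosine distance (weight 0.95)
    combined with a box-distance term (weight 0.05) from Kalman-predicted track
    boxes, solved with the Hungarian algorithm. The centre-distance normalisation
    and the gating threshold max_cost are illustrative assumptions."""
    app_cost = cdist(det_embs, trk_embs, metric="cosine")
    det_centres = (det_boxes[:, :2] + det_boxes[:, 2:]) / 2.0
    trk_centres = (trk_boxes_pred[:, :2] + trk_boxes_pred[:, 2:]) / 2.0
    motion_cost = cdist(det_centres, trk_centres) / image_diag
    cost = w_app * app_cost + w_motion * motion_cost
    rows, cols = linear_sum_assignment(cost)
    # keep only sufficiently cheap assignments; the rest go to the IoU-based second stage
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```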
8) Two-stage model training: when training the first-step model, the parameters of the second-step model are frozen and only the first-step model is trained; after the first-step model has been trained, its parameters are frozen and the feature maps corresponding to the occluded-pedestrian bounding boxes localized by the first-step model are used as the input of the second-step model for training.
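This freezing scheme can be expressed compactly as sketched below, with placeholder modules standing in for the two networks.

```python
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool) -> None:
    """Freeze or unfreeze all parameters of a sub-model."""
    for p in module.parameters():
        p.requires_grad = trainable

# Placeholders standing in for the real networks (assumptions for illustration).
first_step_model = nn.Linear(4, 4)
second_step_model = nn.Linear(4, 4)

# Stage 1: train only the first-step model.
set_trainable(first_step_model, True)
set_trainable(second_step_model, False)
# ... run the first-step training loop here ...

# Stage 2: freeze the trained first-step model; train the refinement model on the
# cropped feature maps of the occlusion boxes it localizes.
set_trainable(first_step_model, False)
set_trainable(second_step_model, True)
# ... run the second-step training loop here ...
```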
After the model is constructed and trained, the invention can track pedestrians in actual surveillance video; the tracking effect is shown in Fig. 6, where the interval between two adjacent pictures is 25 frames.
The invention provides a multi-target tracking method based on coarse-to-fine occlusion processing. The model is divided into two steps: the first-step model performs the full processing of non-occluded targets and the coarse processing (localization) of occluded targets; the second-step model accurately localizes the occluded pedestrians and extracts their apparent feature vectors, thereby processing occluded targets finely. Finally, the outputs of the two models are integrated and optimized, followed by data association and trajectory generation, completing the multi-target tracking task. The invention plays an important role in surveillance video analysis in the field of computer vision and has broad application prospects.
The present invention is not limited to the specific technical solutions described in the above embodiments, and other embodiments of the present invention are possible in addition to the above embodiments. It will be understood by those skilled in the art that various changes, substitutions of equivalents, and alterations can be made without departing from the spirit and scope of the invention.

Claims (5)

1. A multi-target tracking method based on coarse-to-fine occlusion processing, characterized by comprising the following steps:
step one, labeling a data set obtained from a public place and constructing a training set and a test set for pedestrian tracking;
step two, training the first-step model of the multi-target tracking method with part of the information labeled in the training set, realizing the detection of non-occluded pedestrian targets, the extraction of the corresponding apparent feature vectors, and the localization of occluded-pedestrian regions, thereby completing the full processing of non-occluded targets and the coarse processing of occluded targets;
step three, after the first-step model has been trained, mapping and cropping the occluded-pedestrian regions to serve as the training set of the second-step model for training, and completing the accurate detection of occluded pedestrians and the extraction of their apparent feature vectors, thereby realizing the fine processing of occluded targets;
and step four, integrating the results output by the two models and completing pedestrian tracking with a data association algorithm.
2. The multi-target tracking method based on coarse-to-fine occlusion processing according to claim 1, characterized in that: in the first step, a pedestrian tracking data set is constructed from surveillance video captured by a plurality of cameras in a public place, and the labeled content comprises an occlusion bounding-box id, a pedestrian id, the bounding-box position, whether the pedestrian in the bounding box is occluded, and which occlusion bounding box contains the pedestrian target; this information forms the ground-truth labels of the training set and the test set.
3. The multi-target tracking method based on coarse-to-fine occlusion processing according to claim 1, characterized in that: in the second step, the first-step model of the method defines two kinds of bounding-box regression, one for a single non-occluded pedestrian and one for two mutually occluding pedestrians; on the basis of the JDE model, the first-step model adds an occlusion-score prediction to the prediction head, so that it can judge whether the pedestrians in a regressed bounding box are occluded; if there is no occlusion, the position and apparent feature vector of the pedestrian are extracted simultaneously, and if occlusion is present, the occlusion region is localized, completing the coarse processing of pedestrian occlusion.
4. The multi-target tracking method based on coarse-to-fine occlusion processing according to claim 1, characterized in that: in the third step, the occlusion regions localized in the second step are mapped onto a small-scale feature map of the first-step model and cropped with the ROI Align algorithm to serve as the input of the second-step model; the second-step model performs fine processing on the feature maps of the occlusion regions to obtain the positions and apparent feature vectors of the occluded pedestrians, wherein, when training the bounding boxes of the occluded pedestrians, the loss function combines the SmoothL1 loss with weighted RepGT and RepBox losses:

L_box = L_SmoothL1 + α·L_RepGT + β·L_RepBox

where L_SmoothL1 denotes the SmoothL1 loss, L_RepGT the RepGT loss, L_RepBox the RepBox loss, and α, β are the weighting parameters.
5. The multi-target tracking method based on coarse-to-fine occlusion processing according to claim 1, characterized in that: in the fourth step, the results of the first step and the second step are integrated to obtain the positions and apparent feature vectors of all pedestrian targets in the current frame; matching of pedestrian targets is completed using the fact that, between adjacent frames, the same pedestrian target has similar apparent feature vectors and its position changes little, finally completing the multi-target pedestrian tracking task.
CN202111035065.6A 2021-09-05 2021-09-05 Multi-target tracking method based on coarse-to-fine occlusion processing Active CN113763427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111035065.6A CN113763427B (en) Multi-target tracking method based on coarse-to-fine occlusion processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111035065.6A CN113763427B (en) Multi-target tracking method based on coarse-to-fine occlusion processing

Publications (2)

Publication Number Publication Date
CN113763427A true CN113763427A (en) 2021-12-07
CN113763427B CN113763427B (en) 2024-02-23

Family

ID=78792988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111035065.6A Active CN113763427B (en) Multi-target tracking method based on coarse-to-fine occlusion processing

Country Status (1)

Country Link
CN (1) CN113763427B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120373A (en) * 2022-01-24 2022-03-01 苏州浪潮智能科技有限公司 Model training method, device, equipment and storage medium
CN116129432A (en) * 2023-04-12 2023-05-16 成都睿瞳科技有限责任公司 Multi-target tracking labeling method, system and storage medium based on image recognition
WO2023197232A1 (en) * 2022-04-14 2023-10-19 京东方科技集团股份有限公司 Target tracking method and apparatus, electronic device, and computer readable medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200065976A1 (en) * 2018-08-23 2020-02-27 Seoul National University R&Db Foundation Method and system for real-time target tracking based on deep learning
CN112836639A (en) * 2021-02-03 2021-05-25 江南大学 Pedestrian multi-target tracking video identification method based on improved YOLOv3 model

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200065976A1 (en) * 2018-08-23 2020-02-27 Seoul National University R&Db Foundation Method and system for real-time target tracking based on deep learning
CN112836639A (en) * 2021-02-03 2021-05-25 江南大学 Pedestrian multi-target tracking video identification method based on improved YOLOv3 model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REN Jiamin; GONG Ningsheng; HAN Zhenyang: "Multi-target tracking algorithm based on YOLOv3 and Kalman filtering", Computer Applications and Software, no. 05 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120373A (en) * 2022-01-24 2022-03-01 苏州浪潮智能科技有限公司 Model training method, device, equipment and storage medium
WO2023197232A1 (en) * 2022-04-14 2023-10-19 京东方科技集团股份有限公司 Target tracking method and apparatus, electronic device, and computer readable medium
CN116129432A (en) * 2023-04-12 2023-05-16 成都睿瞳科技有限责任公司 Multi-target tracking labeling method, system and storage medium based on image recognition
CN116129432B (en) * 2023-04-12 2023-06-16 成都睿瞳科技有限责任公司 Multi-target tracking labeling method, system and storage medium based on image recognition

Also Published As

Publication number Publication date
CN113763427B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
US8045783B2 (en) Method for moving cell detection from temporal image sequence model estimation
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
CN113763427B (en) Multi-target tracking method based on coarse-to-fine occlusion processing
EP1836683B1 (en) Method for tracking moving object in video acquired of scene with camera
CN113506317B (en) Multi-target tracking method based on Mask R-CNN and apparent feature fusion
She et al. Vehicle tracking using on-line fusion of color and shape features
CN108304808A (en) A kind of monitor video method for checking object based on space time information Yu depth network
KR20010000107A (en) System tracking and watching multi moving object
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
Shaikh et al. Moving object detection approaches, challenges and object tracking
CN110321937B (en) Motion human body tracking method combining fast-RCNN with Kalman filtering
KR101901487B1 (en) Real-Time Object Tracking System and Method for in Lower Performance Video Devices
CN111161325A (en) Three-dimensional multi-target tracking method based on Kalman filtering and LSTM
Ghahremannezhad et al. A new adaptive bidirectional region-of-interest detection method for intelligent traffic video analysis
Roy et al. A comprehensive survey on computer vision based approaches for moving object detection
FAN et al. Robust lane detection and tracking based on machine vision
CN110322474B (en) Image moving target real-time detection method based on unmanned aerial vehicle platform
Davies et al. Using CART to segment road images
CN111986233A (en) Large-scene minimum target remote sensing video tracking method based on feature self-learning
Chanawangsa et al. A new color-based lane detection via Gaussian radial basis function networks
Lande et al. Moving object detection using foreground detection for video surveillance system
Zhou et al. An anti-occlusion tracking system for UAV imagery based on Discriminative Scale Space Tracker and Optical Flow
Lin et al. Background subtraction based on codebook model and texture feature
Jia et al. Research on Intelligent Monitoring Technology of Traffic Flow Based on Computer Vision
Pulare et al. Implementation of real time multiple object detection and classification of HEVC videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant