CN111626194B - Pedestrian multi-target tracking method using depth correlation measurement - Google Patents


Info

Publication number
CN111626194B
CN111626194B (application CN202010457486.7A)
Authority
CN
China
Prior art keywords
target
frame
pedestrian
data set
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010457486.7A
Other languages
Chinese (zh)
Other versions
CN111626194A (en
Inventor
杨海东
杨航
黄坤山
彭文瑜
林玉山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Original Assignee
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute, Foshan Guangdong University CNC Equipment Technology Development Co. Ltd filed Critical Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Priority to CN202010457486.7A priority Critical patent/CN111626194B/en
Publication of CN111626194A publication Critical patent/CN111626194A/en
Application granted granted Critical
Publication of CN111626194B publication Critical patent/CN111626194B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian multi-target tracking method using a depth correlation metric, which comprises the following steps: S1, training and fine-tuning a pre-trained model on a known pedestrian detection data set to serve as a target detector; S2, building a feature extraction network, training it on a pedestrian re-identification data set, and using it to extract target appearance information; S3, detecting each frame of the video with the target detector, and performing track processing and state estimation on each target to extract target motion information; S4, integrating the motion information and appearance information of each boxed target in every frame to match against the candidate boxes of the previous frame. By detecting with a neural network, the method improves accuracy while integrating motion and appearance information, which greatly improves the tracking of occluded targets and raises matching precision; cascade matching allows targets to be tracked through longer occlusion periods, effectively reducing the frequency of identity switches and improving the robustness of the system.

Description

Pedestrian multi-target tracking method using depth correlation measurement
Technical Field
The invention relates to the technical field of multi-target tracking, in particular to a pedestrian multi-target tracking method using depth correlation measurement.
Background
Intelligent video surveillance is an emerging research direction in the field of computer vision. It integrates advanced techniques from image processing, pattern recognition, artificial intelligence, automatic control, and other fields, and combines computer vision with networked video surveillance to detect, recognize, and track moving targets in video and analyze their behavior. Pedestrian multi-target tracking is a difficult problem in this field: pedestrians move more flexibly than vehicles and are non-rigid, so their contour features change continuously and are hard to extract. This complicates both tracking accuracy and algorithmic complexity, yet pedestrian tracking has the greater commercial value in practical applications.
In Tracking By Detection, a target detection algorithm detects the targets of interest in each frame, yielding position coordinates, class labels, confidence scores, and similar outputs; the detections in the current frame are then assumed to correspond, in some way, one-to-one with the targets detected in the previous frame. The two most critical components of Tracking By Detection are therefore the detection algorithm and the data association method.
Thanks to the development and application of convolutional neural networks (CNNs), many tasks in the field of computer vision have advanced greatly, and many CNN-based methods have been applied to image recognition problems. However, common data association methods often fall short. The present invention integrates appearance information, so that targets can be tracked through longer occlusion periods and the number of identity switches is effectively reduced.
Disclosure of Invention
In view of the above problems, the invention provides a pedestrian multi-target tracking method using a depth correlation metric, which mainly addresses the shortcomings described in the background art.
The invention provides a pedestrian multi-target tracking method using a depth correlation measure, which comprises the following steps:
s1, training and fine-tuning on a known pedestrian detection data set by using a pre-trained model to serve as a target detector;
s2, building a feature extraction network, training on a pedestrian re-identification data set, and extracting target appearance information;
s3, detecting each frame in the video by using a target detector, and carrying out track processing and state estimation on each target for extracting target motion information;
and S4, integrating the motion information and appearance information of the boxed target in each frame to match against the candidate boxes of the previous frame.
The further improvement is that the S1 specifically comprises:
S11, acquiring the pedestrian detection data set Caltech Pedestrian Detection and randomly dividing it into six equal parts;
S12, training with 6-fold cross-validation on the pedestrian detection data set, starting from a model pre-trained on ImageNet, and tuning parameters to obtain the target detector.
The further improvement is that the step S2 specifically comprises:
s21, acquiring a large-scale ReID data set, and dividing the data set into a training set, a testing set and a verification set according to a proportion;
s22, training is carried out on the training set, and finally 128-dimensional feature vectors are output;
S23, projecting the normalized feature vectors onto a hypersphere.
The further improvement is that the step S3 specifically comprises:
s31, detecting each frame in the video by using a target detector, and marking pedestrians in the picture by using candidate frames;
s32, identifying and distinguishing each candidate frame by adopting different colors and IDs;
s33, using an 8-dimensional space to describe the state of the track at a certain moment, and then using Kalman filtering to predict and update the track;
S34, allocating a counter to each track k; tracks whose counter exceeds a predefined maximum age A_max are deleted from the track set, and a new track hypothesis is initiated for each detection that cannot be associated with an existing track.
The further improvement is that the step S4 specifically includes:
S41, using the Mahalanobis distance as the motion-information metric; if the Mahalanobis distance of an association is smaller than a specified threshold, the motion-state association is considered successful;
s42, extracting target appearance information of each detection target by using the trained feature extraction network in the S2, and calculating a minimum cosine distance as a measure of the appearance information;
s43, using the linear weighting of the two measurement modes as a final measurement;
S44, using a cascade matching algorithm: assigning a tracker to each detected target, setting a time_since_update parameter for each tracker, and ordering the trackers according to this parameter;
S45, in the final stage of matching, matching the unmatched tracks and detections on the basis of IoU.
Compared with the prior art, the invention has the following beneficial effects: it overcomes the slowness of traditional tracking methods and their frequent identity switches when objects are occluded; it can be applied to video surveillance of high-traffic areas such as intersections; detecting with a neural network improves accuracy; integrating motion and appearance information greatly improves the tracking of occluded targets and raises matching precision; and cascade matching tracks targets through longer occlusion periods, effectively reducing the frequency of identity switches and improving the robustness of the system.
Drawings
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent; for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions; it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
FIG. 1 is a schematic overall flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a pedestrian target recognition procedure according to an embodiment of the invention;
FIG. 3 is a schematic overall flow chart of a detection step according to an embodiment of the present invention.
Detailed Description
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted", "connected" and "coupled" are to be construed broadly: a connection may be fixed, detachable or integral; it may be mechanical or electrical; two elements may be connected directly or indirectly through an intermediate medium, or the interiors of the two elements may communicate. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention. The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1:
referring to fig. 1-3, a pedestrian multi-target tracking method using a depth correlation metric, the method comprising the steps of:
s1, training and fine-tuning on a known pedestrian data set by using a pre-trained model to serve as a target detector. The method comprises the following specific steps:
s11, acquiring a pedestrian detection data set Caltech Pedestrian Detecion, wherein 250000 frames, 350000 rectangular frames and 2300 pedestrians are marked.
S12, using a model pre-trained on ImageNet, train with 6-fold cross-validation on the data set: randomly divide the data set into 6 equal parts, select 5 of them for training and the remaining one for testing, tune parameters, and repeat several times until the network converges.
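The 6-fold protocol of S12 can be sketched as follows. This is a minimal illustration of the fold bookkeeping only, not the actual Caltech data loader or training loop; the function name and sample count are hypothetical.

```python
import numpy as np

def six_fold_splits(n_samples, seed=0):
    """Randomly partition sample indices into 6 equal folds; each round
    uses 5 folds for training and the remaining one for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, 6)
    for k in range(6):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(6) if j != k])
        yield train, test

# Example with 600 frames: each fold holds 100; each round trains on 500.
splits = list(six_fold_splits(600))
assert len(splits) == 6
train, test = splits[0]
assert len(train) == 500 and len(test) == 100
```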
S2, building a feature extraction network, training the feature extraction network by using a training set, and extracting target appearance information, wherein the specific steps comprise the following steps:
S21, acquiring the large-scale ReID data set MARS and dividing it into training, test, and verification sets at a ratio of 6:2:2.
S22, training the model on the training set; after iteration, the model parameters for extracting target appearance features are obtained. The network structure comprises 2 convolution layers, 6 residual blocks, and 1 fully connected layer, and outputs a 128-dimensional feature vector.
S23, projecting the features onto the unit hypersphere using L2 normalization.
S3, detecting each frame in the video by using a detector, and carrying out track processing and state estimation on each target, wherein the specific steps are as follows:
s31, detecting each frame in the video by using a detector, and marking pedestrians in the picture by using candidate frames.
S32, identifying and distinguishing each candidate box with a different color and ID.
S33, describing the state of a track at a given moment with an 8-dimensional space (x, y, γ, h, vx, vy, vγ, vh), representing the center position of the candidate box in image coordinates, its aspect ratio γ, its height h, and the corresponding velocities. The track is then predicted and updated with a Kalman filter that uses a constant-velocity model and a linear observation model; the observed variables are (x, y, γ, h).
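The constant-velocity prediction step of S33 can be sketched as follows; the matrices below are the standard transition and observation models for this 8-dimensional state, and the numeric state is an invented example.

```python
import numpy as np

# 8-d track state: (x, y, gamma, h, vx, vy, vgamma, vh)
dt = 1.0
F = np.eye(8)
F[:4, 4:] = dt * np.eye(4)                    # position += velocity * dt
H = np.hstack([np.eye(4), np.zeros((4, 4))])  # observe only (x, y, gamma, h)

state = np.array([100., 50., 0.5, 80., 2., -1., 0., 0.])
pred = F @ state                              # one Kalman prediction step
# Center moved by its velocity; aspect ratio and height unchanged.
assert np.allclose(pred[:4], [102., 49., 0.5, 80.])
assert np.allclose(H @ pred, pred[:4])
```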
S34, for each track k, we count the number of frames a_k since the last successful measurement association; this counter is incremented during Kalman filter prediction and reset to 0 when the track is associated with a measurement. Tracks whose counter exceeds a predefined maximum age A_max are considered to have left the scene and are deleted from the track set. For each detection that cannot be associated with an existing track, a new track hypothesis is initiated and classified as tentative; if matching succeeds in the next three consecutive frames, the new track is considered confirmed, otherwise it is considered a false track and its state is marked deleted.
S4, integrating motion information and appearance information of a frame selection target in each frame to match with a frame of a previous frame, wherein the method comprises the following specific steps of:
S41, associating motion information using the Mahalanobis distance between the Kalman-predicted state of an existing moving target and a detection result: d^(1)(i,j) = (d_j − y_i)^T S_i^(−1) (d_j − y_i), where d_j denotes the position of the j-th detection box, y_i the position of the target predicted by the i-th tracker, and S_i the covariance matrix between the detected position and the mean tracked position. The Mahalanobis distance accounts for uncertainty in the state measurement by measuring how many standard deviations the detection lies from the mean track position. If the Mahalanobis distance of an association is less than a specified threshold t^(1), the motion-state association is considered successful; this is expressed with the indicator b^(1)_(i,j) = 1[d^(1)(i,j) ≤ t^(1)].
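The gating of S41 can be sketched as follows. The threshold value shown is the 0.95 chi-square quantile commonly used for a 4-dimensional measurement space; the patent specifies only that a threshold t^(1) exists, so treat the numeric value as an assumption.

```python
import numpy as np

def mahalanobis_sq(d_j, y_i, S_i):
    """Squared Mahalanobis distance: (d_j - y_i)^T S_i^{-1} (d_j - y_i)."""
    diff = d_j - y_i
    return float(diff @ np.linalg.inv(S_i) @ diff)

# With identity covariance this reduces to squared Euclidean distance.
d = mahalanobis_sq(np.array([1., 2., 0., 0.]),
                   np.array([0., 0., 0., 0.]),
                   np.eye(4))
assert np.isclose(d, 5.0)

T_GATE = 9.4877   # assumed chi-square 0.95 quantile, 4 degrees of freedom
assert d <= T_GATE  # association admissible under the motion gate
```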
S42, for each detection box d_j, computing a feature vector r_j with the network built in S2 as an appearance descriptor. A gallery is maintained for each tracked target, storing the feature vectors of the last 100 frames successfully associated with it. The minimum cosine distance between the feature set of the last 100 successful associations of the i-th tracker and the feature vector of the current j-th detection is then used as the similarity metric: d^(2)(i,j) = min{1 − r_j^T r_k^(i) | r_k^(i) ∈ R_i}. Again, a binary variable indicates whether an association is admissible according to this metric: b^(2)_(i,j) = 1[d^(2)(i,j) ≤ t^(2)].
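The appearance metric of S42 can be sketched as follows, assuming unit-normalized descriptors as produced in S23; the gallery here holds two invented 2-d vectors instead of one hundred 128-d ones.

```python
import numpy as np

def min_cosine_distance(gallery, r_j):
    """d2(i,j) = min over stored unit descriptors r_k of (1 - r_k . r_j)."""
    return float(np.min(1.0 - gallery @ r_j))

def normalize(v):
    return v / np.linalg.norm(v)

# Gallery of the i-th tracker: descriptors from past successful associations.
gallery = np.stack([normalize(np.array([1., 0.])),
                    normalize(np.array([1., 1.]))])
r_j = normalize(np.array([1., 0.]))      # current detection's descriptor
d2 = min_cosine_distance(gallery, r_j)
assert np.isclose(d2, 0.0)               # identical descriptor -> distance 0
```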
S43, using a linear weighting of the two metrics as the final metric: c_(i,j) = λ d^(1)(i,j) + (1 − λ) d^(2)(i,j). The two metrics complement each other by addressing different aspects of the matching problem: the Mahalanobis distance provides information about likely target positions based on motion, which is useful for short-term prediction, while the cosine distance considers appearance information, which is useful for recovering identities after long occlusions. To construct the association problem, the two metrics are combined with a weighted sum. A match is considered correct only when c_(i,j) lies within the intersection of both metric thresholds. In addition, λ = 0 can be set when there is camera motion.
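The weighted combination of S43 is a one-liner; the numeric distances below are invented for illustration, and λ = 0.5 is an assumed value since the patent does not fix λ except for the camera-motion case.

```python
def combined_cost(d1, d2, lam=0.5):
    """c_{i,j} = lambda * d1(i,j) + (1 - lambda) * d2(i,j)."""
    return lam * d1 + (1.0 - lam) * d2

# Equal weighting of an invented motion distance and appearance distance:
assert abs(combined_cost(4.0, 0.2, lam=0.5) - 2.1) < 1e-9
# With camera motion the patent sets lambda = 0, i.e. appearance only:
assert combined_cost(4.0, 0.2, lam=0.0) == 0.2
```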
S44, using a cascade matching algorithm: a tracker is assigned to each detected target and a time_since_update parameter is set for each tracker. If a tracker is matched and updated, the parameter is reset to 0; otherwise it is incremented by 1. The order in which trackers are matched is determined by this parameter.
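The ordering rule of S44 can be sketched as follows: trackers with smaller time_since_update (i.e. more recently confirmed tracks) get first access to detections. The tracker records are invented minimal stand-ins.

```python
# Hypothetical minimal tracker records; only the fields needed for ordering.
trackers = [
    {"id": 1, "time_since_update": 3},   # unmatched for 3 frames
    {"id": 2, "time_since_update": 0},   # matched in the current frame
    {"id": 3, "time_since_update": 1},
]

# Cascade matching visits trackers in ascending time_since_update order.
cascade_order = sorted(trackers, key=lambda t: t["time_since_update"])
assert [t["id"] for t in cascade_order] == [2, 3, 1]
```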
S45, in the final matching stage, matching the unconfirmed and unmatched tracks against the remaining detections based on IoU. This helps account for sudden appearance changes, for example partial occlusion of a target, and improves the robustness of the system.
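The IoU measure used in S45 can be sketched as follows for corner-format boxes; the box coordinates are invented examples.

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0           # identical boxes
assert abs(iou((0, 0, 2, 2), (1, 1, 3, 3)) - 1/7) < 1e-9  # partial overlap
```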
In the drawings, the positional relationship is described for illustrative purposes only and is not to be construed as limiting the present patent; it is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (2)

1. A pedestrian multi-target tracking method using a depth correlation metric, the method comprising the steps of:
s1, training and fine-tuning on a known pedestrian detection data set by using a pre-trained model to serve as a target detector; the S1 specifically comprises the following steps:
s11, acquiring a pedestrian detection data set Caltech Pedestrian Detection, and dividing the pedestrian detection data set into six equal parts randomly;
s12, training by adopting 6-fold cross-validation on a pedestrian detection data set by using a model which is already pre-trained on the ImageNet, and adjusting parameters as a target detector;
s2, building a feature extraction network, training on a pedestrian re-identification data set, and extracting target appearance information; the step S2 specifically comprises the following steps:
s21, acquiring a large-scale ReID data set, and dividing the data set into a training set, a testing set and a verification set according to a proportion;
s22, training is carried out on the training set, and finally 128-dimensional feature vectors are output;
s23, projecting the normalized feature vectors onto a hypersphere;
s3, detecting each frame in the video by using a target detector, and carrying out track processing and state estimation on each target for extracting target motion information;
the step S3 specifically comprises the following steps:
s31, detecting each frame in the video by using a target detector, and marking pedestrians in the picture by using candidate frames;
s32, identifying and distinguishing each candidate frame by adopting different colors and IDs;
s33, using an 8-dimensional space to describe the state of the track at a certain moment, and then using Kalman filtering to predict and update the track;
s34, allocating a counter to each track k; tracks whose counter exceeds a predefined maximum age A_max are deleted from the track set, and a new track hypothesis is started for each detection that cannot be associated with an existing track;
and S4, integrating the motion information and appearance information of the boxed target in each frame to match against the candidate boxes of the previous frame.
2. The pedestrian multi-target tracking method using the depth correlation metric according to claim 1, wherein S4 specifically comprises:
s41, using the Mahalanobis distance as the motion-information metric; if the Mahalanobis distance of an association is smaller than a specified threshold, the motion-state association is considered successful;
s42, extracting target appearance information of each detection target by using the trained feature extraction network in the S2, and calculating a minimum cosine distance as a measure of the appearance information;
s43, using the linear weighting of the two measurement modes as a final measurement;
s44, using a cascade matching algorithm: assigning a tracker to each detected target, setting a time_since_update parameter for each tracker, and ordering the trackers according to this parameter;
s45, matching the unmatched tracks and the detection targets based on the IOU in the final stage of matching.
CN202010457486.7A 2020-05-26 2020-05-26 Pedestrian multi-target tracking method using depth correlation measurement Active CN111626194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010457486.7A CN111626194B (en) 2020-05-26 2020-05-26 Pedestrian multi-target tracking method using depth correlation measurement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010457486.7A CN111626194B (en) 2020-05-26 2020-05-26 Pedestrian multi-target tracking method using depth correlation measurement

Publications (2)

Publication Number Publication Date
CN111626194A CN111626194A (en) 2020-09-04
CN111626194B true CN111626194B (en) 2024-02-02

Family

ID=72260018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010457486.7A Active CN111626194B (en) 2020-05-26 2020-05-26 Pedestrian multi-target tracking method using depth correlation measurement

Country Status (1)

Country Link
CN (1) CN111626194B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287846B (en) * 2020-10-30 2024-05-07 深圳市优必选科技股份有限公司 Target person following method, device, movable robot and readable storage medium
CN112308881B (en) * 2020-11-02 2023-08-15 西安电子科技大学 Ship multi-target tracking method based on remote sensing image
CN112669345B (en) * 2020-12-30 2023-10-20 中山大学 Cloud deployment-oriented multi-target track tracking method and system
CN113192105B (en) * 2021-04-16 2023-10-17 嘉联支付有限公司 Method and device for indoor multi-person tracking and attitude measurement
CN113034548B (en) * 2021-04-25 2023-05-26 安徽科大擎天科技有限公司 Multi-target tracking method and system suitable for embedded terminal
CN113469118B (en) * 2021-07-20 2024-05-21 京东科技控股股份有限公司 Multi-target pedestrian tracking method and device, electronic equipment and storage medium
CN114821795B (en) * 2022-05-05 2022-10-28 北京容联易通信息技术有限公司 Personnel running detection and early warning method and system based on ReiD technology

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
CN110458868A (en) * 2019-08-15 2019-11-15 湖北经济学院 Multiple target tracking based on SORT identifies display systems
CN110532852A (en) * 2019-07-09 2019-12-03 长沙理工大学 Subway station pedestrian's accident detection method based on deep learning
CN110728216A (en) * 2019-09-27 2020-01-24 西北工业大学 Unsupervised pedestrian re-identification method based on pedestrian attribute adaptive learning
CN110782483A (en) * 2019-10-23 2020-02-11 山东大学 Multi-view multi-target tracking method and system based on distributed camera network
CN111161315A (en) * 2019-12-18 2020-05-15 北京大学 Multi-target tracking method and system based on graph neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4582174B2 (en) * 2008-03-28 2010-11-17 ソニー株式会社 Tracking processing device, tracking processing method, and program
US20160132728A1 (en) * 2014-11-12 2016-05-12 Nec Laboratories America, Inc. Near Online Multi-Target Tracking with Aggregated Local Flow Descriptor (ALFD)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
CN110532852A (en) * 2019-07-09 2019-12-03 长沙理工大学 Subway station pedestrian's accident detection method based on deep learning
CN110458868A (en) * 2019-08-15 2019-11-15 湖北经济学院 Multiple target tracking based on SORT identifies display systems
CN110728216A (en) * 2019-09-27 2020-01-24 西北工业大学 Unsupervised pedestrian re-identification method based on pedestrian attribute adaptive learning
CN110782483A (en) * 2019-10-23 2020-02-11 山东大学 Multi-view multi-target tracking method and system based on distributed camera network
CN111161315A (en) * 2019-12-18 2020-05-15 北京大学 Multi-target tracking method and system based on graph neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nicolai Wojke et al. Simple Online and Realtime Tracking with a Deep Association Metric. 2017 IEEE International Conference on Image Processing, 2017, pp. 3645-3649. *
Qiao Hong; Feng Quan; Zhang Rui; Liu Tianyu. Dynamic monitoring of grape leaf disease based on sequential image tracking. Transactions of the Chinese Society of Agricultural Engineering, 2018, Vol. 34, No. 17, pp. 167-175. *

Also Published As

Publication number Publication date
CN111626194A (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN111626194B (en) Pedestrian multi-target tracking method using depth correlation measurement
CN110516556B (en) Multi-target tracking detection method and device based on Darkflow-deep Sort and storage medium
Wojke et al. Simple online and realtime tracking with a deep association metric
Miao et al. Pose-guided feature alignment for occluded person re-identification
Bewley et al. Simple online and realtime tracking
CN108447080B (en) Target tracking method, system and storage medium based on hierarchical data association and convolutional neural network
CN110197502B (en) Multi-target tracking method and system based on identity re-identification
Simonnet et al. Re-identification of pedestrians in crowds using dynamic time warping
Al-Shakarji et al. Multi-object tracking cascade with multi-step data association and occlusion handling
Kieritz et al. Joint detection and online multi-object tracking
CN114240997B (en) Intelligent building online trans-camera multi-target tracking method
KR102132722B1 (en) Tracking method and system multi-object in video
Al-Shakarji et al. Robust multi-object tracking with semantic color correlation
Wang et al. Multi-Target Video Tracking Based on Improved Data Association and Mixed Kalman/$ H_ {\infty} $ Filtering
Chu et al. Fully unsupervised learning of camera link models for tracking humans across nonoverlapping cameras
CN113537077A (en) Label multi-Bernoulli video multi-target tracking method based on feature pool optimization
He et al. Fast online multi-pedestrian tracking via integrating motion model and deep appearance model
CN114972410A (en) Multi-level matching video racing car tracking method and system
Mittal et al. Pedestrian detection and tracking using deformable part models and Kalman filtering
CN112307897A (en) Pet tracking method based on local feature recognition and adjacent frame matching in community monitoring scene
Jiang et al. Online pedestrian tracking with multi-stage re-identification
Wojke et al. Joint operator detection and tracking for person following from mobile platforms
CN113657169B (en) Gait recognition method, device and system and computer readable storage medium
CN108346158B (en) Multi-target tracking method and system based on main block data association
Bai et al. Pedestrian Tracking and Trajectory Analysis for Security Monitoring

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant