CN111626194A - Pedestrian multi-target tracking method using depth correlation measurement - Google Patents

Pedestrian multi-target tracking method using depth correlation measurement

Info

Publication number
CN111626194A
CN111626194A (application number CN202010457486.7A)
Authority
CN
China
Prior art keywords
target
pedestrian
frame
training
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010457486.7A
Other languages
Chinese (zh)
Other versions
CN111626194B (en)
Inventor
杨海东
杨航
黄坤山
彭文瑜
林玉山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Original Assignee
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Foshan Guangdong University CNC Equipment Technology Development Co. Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute, Foshan Guangdong University CNC Equipment Technology Development Co. Ltd filed Critical Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Priority to CN202010457486.7A priority Critical patent/CN111626194B/en
Publication of CN111626194A publication Critical patent/CN111626194A/en
Application granted granted Critical
Publication of CN111626194B publication Critical patent/CN111626194B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a pedestrian multi-target tracking method using a depth correlation metric, which comprises: S1, fine-tuning a pre-trained model on a known pedestrian detection data set to serve as a target detector; S2, building a feature extraction network and training it on a pedestrian re-identification data set to extract target appearance information; S3, detecting each frame of the video with the target detector, and performing track processing and state estimation on each target to extract target motion information; S4, combining the motion information and appearance information of the targets selected in each frame and matching them against the candidate boxes of the previous frame. Detection with a neural network improves accuracy; integrating motion and appearance information greatly improves the tracking of occluded targets and raises matching precision; and cascade matching allows targets to be tracked through longer occlusion periods, effectively reducing the number of identity switches and improving the robustness of the system.

Description

Pedestrian multi-target tracking method using depth correlation measurement
Technical Field
The invention relates to the technical field of multi-target tracking, in particular to a pedestrian multi-target tracking method using depth correlation measurement.
Background
Intelligent video surveillance is a recent research direction in the field of computer vision. It integrates advanced techniques from image processing, pattern recognition, artificial intelligence and automatic control, and combines computer vision with networked video surveillance to realize detection, recognition, tracking and behavior analysis of moving targets in video. Pedestrian multi-target tracking is a difficult problem within intelligent video surveillance: pedestrians move more flexibly than vehicles, and since they are not rigid bodies their contour features change constantly and are hard to extract, which burdens both the tracking accuracy and the computational complexity of an algorithm. Pedestrian tracking in practical applications therefore has commercial value.
Tracking-by-detection uses a target detection algorithm to detect the targets of interest in each frame and obtain their position coordinates, classification, confidence and other indexes; the detection results are then associated, one by one and in some manner, with the targets detected in the previous frame. The two essential components of tracking-by-detection are therefore the detection algorithm itself and how data association is performed.
Thanks to the development and application of convolutional neural networks (CNNs), tasks in many computer vision fields have advanced greatly, and many CNN-based detection methods have been applied to image recognition. Common data association methods, however, generally cannot cope with long occlusions. The invention therefore integrates appearance information, so that objects can be tracked through longer occlusion periods and the number of identity switches is effectively reduced.
Disclosure of Invention
Aiming at the problems, the invention provides a pedestrian multi-target tracking method using depth correlation measurement, which mainly solves the problems in the background technology.
The invention provides a pedestrian multi-target tracking method using depth correlation measurement, which comprises the following steps:
S1, fine-tuning a pre-trained model on a known pedestrian detection data set to serve as a target detector;
s2, building a feature extraction network, training on the pedestrian re-identification data set, and extracting target appearance information;
s3, detecting each frame in the video by using a target detector, and performing track processing and state estimation on each target for extracting target motion information;
and S4, integrating the motion information and the appearance information of the selected target in each frame to match with the candidate frame in the previous frame.
In a further improvement, the S1 specifically includes:
s11, acquiring a Caltech Pedestrian detection data set, and randomly dividing the Caltech Pedestrian detection data set into six equal parts;
S12, using a model pre-trained on ImageNet, training with 6-fold cross-validation on the pedestrian detection data set and adjusting its parameters to obtain the target detector.
In a further improvement, the S2 specifically includes:
S21, acquiring a large-scale ReID data set and dividing it into a training set, a test set and a validation set in a given ratio;
s22, training on the training set, and finally outputting a 128-dimensional feature vector;
and S23, projecting the feature vector to a hypersphere after normalization.
In a further improvement, the S3 specifically includes:
s31, detecting each frame in the video by using a target detector, and marking the pedestrian in the picture by using a candidate frame;
S32, distinguishing the candidate boxes by assigning each a different color and ID;
s33, an 8-dimensional space is used for depicting the state of the track at a certain moment, and then Kalman filtering is used for predicting and updating the track;
S34, assigning a counter to each track k; a track whose counter exceeds the predefined maximum age A_max is removed from the track set, and a new track hypothesis is initiated for each detection that cannot be associated with an existing track.
In a further improvement, the S4 specifically includes:
S41, using the Mahalanobis distance as the motion-information metric; if the Mahalanobis distance of an association is smaller than a specified threshold, the motion-state association is considered successful;
s42, extracting the target appearance information of each detection target by using the trained feature extraction network in S2, and calculating the minimum cosine distance as the measurement of the appearance information;
S43, using a linear weighting of the two metrics as the final metric;
S44, using a cascade matching algorithm: assigning a tracker to each detected target, maintaining a time_sequence_update parameter for each tracker, and sorting the trackers by this parameter;
and S45, performing IOU-based matching between the remaining unmatched tracks and detections in the final stage of matching.
Compared with the prior art, the invention has the beneficial effects that: the method overcomes the defects of low speed and frequent identity switching under the condition of object shielding in the traditional tracking method, can be applied to video monitoring scenes in areas with large human flow, such as crossroads and the like, and carries out detection through a neural network, improves the accuracy, integrates motion information and appearance information, greatly improves the tracking effect of a shielded target, improves the matching precision, and can track the object through a longer blocking period by using cascade matching, thereby effectively reducing the number of identity switching and improving the robustness of the system.
Drawings
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
FIG. 1 is a schematic overall flow chart of an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a pedestrian target identification process according to an embodiment of the present invention;
FIG. 3 is a schematic overall flow chart of the detection step according to an embodiment of the present invention.
Detailed Description
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be interpreted broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis. The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1:
referring to fig. 1-3, a pedestrian multi-target tracking method using a depth correlation metric, the method comprising the steps of:
s1, training and fine-tuning on a known pedestrian data set using a pre-trained model, as a target detector. The method comprises the following specific steps:
S11, acquiring the Caltech Pedestrian detection data set, in which 250,000 frames, 350,000 bounding boxes and 2,300 pedestrians are annotated.
S12, using a model pre-trained on ImageNet and training with 6-fold cross-validation: the data set is randomly divided into 6 equal parts, 5 parts are used for training and the remaining one for testing, the parameters are adjusted, and this is repeated several times until the network converges.
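The 6-fold protocol of S12 can be sketched as follows. The fold construction below is an illustrative assumption; the patent does not specify how the random partition is implemented.

```python
import random

def six_fold_splits(n_samples, seed=0):
    """Randomly divide sample indices into 6 near-equal parts and yield
    (train_idx, test_idx) pairs, holding out one part per round."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    k = 6
    folds = [idx[i::k] for i in range(k)]  # 6 near-equal parts
    for i in range(k):
        test = folds[i]
        train = [j for f in folds if f is not folds[i] for j in f]
        yield train, test
```

Each round trains on five parts and tests on the held-out part, as described above.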
S2, building a feature extraction network and training it on the training set to extract the target appearance information, specifically comprising the following steps:
S21, obtaining the large-scale ReID data set MARS and dividing it into a training set, a test set and a validation set in the ratio 6:2:2.
S22, training the model on the training set and iterating its parameters to obtain the parameters used for extracting target appearance features; the network comprises 2 convolution layers, 6 residual blocks and 1 fully connected layer, and outputs a 128-dimensional feature vector.
S23, projecting the features onto a unit hypersphere using L2 normalization.
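The L2 normalization of S23 places every descriptor on the unit hypersphere, so that the cosine similarity used later reduces to a plain dot product. A minimal sketch:

```python
import math

def l2_normalize(v, eps=1e-12):
    """Scale a feature vector to unit Euclidean norm, projecting it
    onto the unit hypersphere; eps guards against a zero vector."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / (norm + eps) for x in v]
```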
S3, detecting each frame in the video by using a detector, and performing track processing and state estimation on each target, wherein the method comprises the following specific steps:
and S31, detecting each frame in the video by using a detector, and marking the pedestrian in the picture by using a candidate frame.
S32, distinguishing the candidate boxes by assigning each a different color and ID.
S33, using an 8-dimensional state space (x, y, γ, h and the corresponding velocities) to describe the state of a track at a given time: the center position of the candidate box in image coordinates, its aspect ratio γ, its height h, and their velocities. The track is then predicted and updated with a Kalman filter that uses a constant-velocity model and a linear observation model; the observed variables are (x, y, γ, h).
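A minimal sketch of the constant-velocity Kalman filter of S33, with the 8-dimensional state (x, y, γ, h plus velocities) and 4-dimensional observation (x, y, γ, h). The noise matrices Q and R are caller-supplied assumptions, since the patent does not give their values:

```python
import numpy as np

def make_cv_matrices(dt=1.0):
    """Constant-velocity transition F and linear observation H for the
    8-dim state (x, y, gamma, h, vx, vy, vgamma, vh)."""
    F = np.eye(8)
    for i in range(4):
        F[i, i + 4] = dt  # position components advance by velocity * dt
    H = np.eye(4, 8)      # observe the first four state components
    return F, H

def predict(x, P, F, Q):
    """Kalman prediction step: propagate state and covariance."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Kalman update step with measurement z = (x, y, gamma, h)."""
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

A prediction moves the box by its velocity; an update pulls the state toward the new detection in proportion to the gain.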
S34, for each track k, counting the number of frames A_k since the last successful measurement association; this counter is incremented at every Kalman prediction and reset to 0 when the track is associated with a detection. A track whose counter exceeds the predefined maximum age A_max is considered to have left the scene and is removed from the track set. For each detection that cannot be associated with an existing track, a new track hypothesis is initiated and classified as tentative; if matching succeeds in the next three consecutive frames, it is regarded as a newly generated track and marked confirmed, otherwise it is considered a false track, marked deleted and removed.
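The track lifecycle of S34 can be sketched as a small state machine. The attribute name `time_since_update` for the counter A_k follows common DeepSORT implementations and is an assumption; the state names mirror the text (tentative, confirmed, deleted).

```python
TENTATIVE, CONFIRMED, DELETED = "tentative", "confirmed", "deleted"

class Track:
    """A new track starts tentative, is confirmed after n_init consecutive
    successful matches, and is deleted if tentative matching fails or if
    it stays unmatched for more than max_age frames (A_max)."""
    def __init__(self, max_age=30, n_init=3):
        self.state = TENTATIVE
        self.hits = 0
        self.time_since_update = 0  # frames since last association (A_k)
        self.max_age = max_age
        self.n_init = n_init

    def predict(self):
        self.time_since_update += 1  # incremented at each Kalman prediction

    def mark_matched(self):
        self.time_since_update = 0   # reset on successful association
        self.hits += 1
        if self.state == TENTATIVE and self.hits >= self.n_init:
            self.state = CONFIRMED

    def mark_missed(self):
        if self.state == TENTATIVE:
            self.state = DELETED     # tentative track that failed to match
        elif self.time_since_update > self.max_age:
            self.state = DELETED     # considered to have left the scene
```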
S4, combining the motion information and appearance information of the targets selected in each frame and matching them against the boxes of the previous frame, specifically comprising the following steps:
S41, associating the motion information through the squared Mahalanobis distance between the Kalman-filter prediction of an existing target's motion state and a new detection:

d^(1)(i,j) = (d_j - y_i)^T S_i^(-1) (d_j - y_i)

wherein d_j denotes the position of the j-th detection box, y_i denotes the position of the target predicted by the i-th tracker, and S_i denotes the covariance matrix between the detected position and the mean tracked position. The Mahalanobis distance accounts for the uncertainty of the state measurement by expressing, in standard deviations, how far the detection lies from the mean tracked position. If the Mahalanobis distance of an association is smaller than a specified threshold t^(1), the motion-state association is considered successful, using the indicator function

b^(1)_(i,j) = 1[d^(1)(i,j) <= t^(1)]
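The Mahalanobis gating of S41 can be sketched as below. The concrete threshold 9.4877 (the 0.95 quantile of the chi-square distribution with 4 degrees of freedom, the conventional choice for a 4-dimensional measurement space) is an assumption; the patent only speaks of "a specified threshold t^(1)".

```python
import numpy as np

# Assumed gate: chi-square 0.95 quantile for 4 degrees of freedom.
CHI2_GATE_4D = 9.4877

def squared_mahalanobis(d, y, S):
    """d^(1)(i,j) = (d_j - y_i)^T S_i^{-1} (d_j - y_i)."""
    diff = d - y
    return float(diff.T @ np.linalg.inv(S) @ diff)

def gate(d, y, S, t1=CHI2_GATE_4D):
    """b^(1)_{i,j}: 1 if the motion association is admissible, else 0."""
    return 1 if squared_mahalanobis(d, y, S) <= t1 else 0
```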
S42, for each detection box d_j, computing a feature vector r_j with ||r_j|| = 1 using the network built in S2 as an appearance descriptor. A gallery is maintained for each tracked target, storing the feature vectors of the last 100 frames successfully associated with it. The minimum cosine distance between the gallery R_i of the i-th tracker and the feature vector of the current j-th detection is then used as the similarity metric:

d^(2)(i,j) = min{ 1 - r_j^T r_k^(i) | r_k^(i) ∈ R_i }

Again, a binary variable indicates whether an association is admissible according to this metric:

b^(2)_(i,j) = 1[d^(2)(i,j) <= t^(2)]
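The minimum cosine distance of S42 can be sketched as follows, with `gallery` holding the (at most 100) stored unit-norm descriptors of track i:

```python
import numpy as np

def min_cosine_distance(gallery, r_j):
    """d^(2)(i,j) = min over stored descriptors r_k of (1 - r_k . r_j),
    i.e. the smallest cosine distance between the detection descriptor
    and any descriptor in the track's gallery."""
    r_j = r_j / np.linalg.norm(r_j)  # ensure unit norm
    dots = [float(np.dot(r_k, r_j)) for r_k in gallery]
    return 1.0 - max(dots)
```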
S43, using a linear weighting of the two metrics as the final metric: c_(i,j) = λ d^(1)(i,j) + (1-λ) d^(2)(i,j). The two metrics complement each other by addressing different aspects of the matching problem: the Mahalanobis distance provides information about likely object positions based on motion, which is useful for short-term prediction, while the cosine distance takes appearance into account, which is useful for recovering identities after long occlusions. A match is considered correct only when c_(i,j) lies within the intersection of both gating thresholds. For scenes with camera motion, λ = 0 can be set.
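The weighted combination of S43 is a one-liner; a small sketch with the gate intersection made explicit:

```python
def combined_cost(d1, d2, lam=0.5):
    """c_{i,j} = lambda * d1 + (1 - lambda) * d2; with camera motion
    lambda may be set to 0 so only appearance is used."""
    return lam * d1 + (1 - lam) * d2

def admissible(b1, b2):
    """A match is considered only inside the intersection of both gates."""
    return b1 * b2 == 1
```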
S44, using a cascade matching algorithm: a tracker is assigned to each detected target, and each tracker maintains a time_sequence_update parameter that is reset to 0 when the tracker is matched and updated, and incremented by 1 otherwise; the trackers are matched in the order given by this parameter.
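A sketch of the cascade of S44: tracks are grouped by their counter (the text's time_sequence_update, named `time_since_update` here, following common DeepSORT implementations) and matched in increasing order of that age, so recently seen tracks get priority over long-occluded ones. For brevity this sketch uses greedy assignment within each age level rather than the Hungarian algorithm:

```python
def matching_cascade(tracks, detections, cost, gate, max_age):
    """Match tracks to detections level by level, youngest tracks first.
    `cost(t, d)` returns the combined metric c_{i,j}; `gate(t, d)` says
    whether the pair is admissible."""
    matches, unmatched_dets = [], list(range(len(detections)))
    for age in range(max_age + 1):
        if not unmatched_dets:
            break
        level = [i for i, t in enumerate(tracks)
                 if t.time_since_update == age]
        for ti in level:  # greedy stand-in for the Hungarian algorithm
            best, best_c = None, float("inf")
            for dj in unmatched_dets:
                c = cost(tracks[ti], detections[dj])
                if gate(tracks[ti], detections[dj]) and c < best_c:
                    best, best_c = dj, c
            if best is not None:
                matches.append((ti, best))
                unmatched_dets.remove(best)
    return matches, unmatched_dets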
S45, in the final stage of matching, performing IOU-based matching between the unconfirmed, unmatched tracks and the remaining detections; this helps account for sudden changes, for example partial occlusion of a target, and improves the robustness of the system.
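The IOU matching of S45 relies on the standard intersection-over-union of two boxes. A minimal sketch, with boxes given as (x1, y1, x2, y2) corner coordinates (an assumed convention; the tracker's state uses center, aspect ratio and height):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```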
In the drawings, positional relationships are described for illustrative purposes only and are not to be construed as limiting this patent. It should be understood that the above embodiments are merely examples for clearly illustrating the present invention and do not limit its embodiments; other variations and modifications will be apparent to those skilled in the art from the above description, and it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims.

Claims (5)

1. A pedestrian multi-target tracking method using a depth correlation metric, the method comprising the steps of:
S1, fine-tuning a pre-trained model on a known pedestrian detection data set to serve as a target detector;
s2, building a feature extraction network, training on the pedestrian re-identification data set, and extracting target appearance information;
s3, detecting each frame in the video by using a target detector, and performing track processing and state estimation on each target for extracting target motion information;
and S4, integrating the motion information and the appearance information of the selected target in each frame to match with the candidate frame in the previous frame.
2. The pedestrian multi-target tracking method using the depth correlation metric according to claim 1, wherein the S1 specifically includes:
s11, acquiring a Caltech Pedestrian detection data set, and randomly dividing the Caltech Pedestrian detection data set into six equal parts;
S12, using a model pre-trained on ImageNet, training with 6-fold cross-validation on the pedestrian detection data set and adjusting its parameters to obtain the target detector.
3. The pedestrian multi-target tracking method using the depth correlation metric according to claim 1, wherein the S2 specifically includes:
S21, acquiring a large-scale ReID data set and dividing it into a training set, a test set and a validation set in a given ratio;
s22, training on the training set, and finally outputting a 128-dimensional feature vector;
and S23, projecting the feature vector to a hypersphere after normalization.
4. The pedestrian multi-target tracking method using the depth correlation metric according to claim 1, wherein the S3 specifically includes:
s31, detecting each frame in the video by using a target detector, and marking the pedestrian in the picture by using a candidate frame;
S32, distinguishing the candidate boxes by assigning each a different color and ID;
s33, an 8-dimensional space is used for depicting the state of the track at a certain moment, and then Kalman filtering is used for predicting and updating the track;
S34, assigning a counter to each track k; a track whose counter exceeds the predefined maximum age A_max is removed from the track set, and a new track hypothesis is initiated for each detection that cannot be associated with an existing track.
5. The pedestrian multi-target tracking method using the depth correlation metric according to claim 1, wherein the S4 specifically includes:
S41, using the Mahalanobis distance as the motion-information metric; if the Mahalanobis distance of an association is smaller than a specified threshold, the motion-state association is considered successful;
s42, extracting the target appearance information of each detection target by using the trained feature extraction network in S2, and calculating the minimum cosine distance as the measurement of the appearance information;
S43, using a linear weighting of the two metrics as the final metric;
S44, using a cascade matching algorithm: assigning a tracker to each detected target, maintaining a time_sequence_update parameter for each tracker, and sorting the trackers by this parameter;
and S45, performing IOU-based matching between the remaining unmatched tracks and detections in the final stage of matching.
CN202010457486.7A 2020-05-26 2020-05-26 Pedestrian multi-target tracking method using depth correlation measurement Active CN111626194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010457486.7A CN111626194B (en) 2020-05-26 2020-05-26 Pedestrian multi-target tracking method using depth correlation measurement

Publications (2)

Publication Number Publication Date
CN111626194A true CN111626194A (en) 2020-09-04
CN111626194B CN111626194B (en) 2024-02-02

Family

ID=72260018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010457486.7A Active CN111626194B (en) 2020-05-26 2020-05-26 Pedestrian multi-target tracking method using depth correlation measurement

Country Status (1)

Country Link
CN (1) CN111626194B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090245577A1 (en) * 2008-03-28 2009-10-01 Yuyu Liu Tracking Processing Apparatus, Tracking Processing Method, and Computer Program
US20160132728A1 (en) * 2014-11-12 2016-05-12 Nec Laboratories America, Inc. Near Online Multi-Target Tracking with Aggregated Local Flow Descriptor (ALFD)
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
CN110532852A (en) * 2019-07-09 2019-12-03 长沙理工大学 Subway station pedestrian's accident detection method based on deep learning
CN110458868A (en) * 2019-08-15 2019-11-15 湖北经济学院 Multiple target tracking based on SORT identifies display systems
CN110728216A (en) * 2019-09-27 2020-01-24 西北工业大学 Unsupervised pedestrian re-identification method based on pedestrian attribute adaptive learning
CN110782483A (en) * 2019-10-23 2020-02-11 山东大学 Multi-view multi-target tracking method and system based on distributed camera network
CN111161315A (en) * 2019-12-18 2020-05-15 北京大学 Multi-target tracking method and system based on graph neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NICOLAI WOJKE et al.: "Simple Online and Realtime Tracking with a Deep Association Metric", pages 3645-3649 *
乔虹; 冯全; 张芮; 刘阗宇: "Dynamic monitoring of grape leaf diseases based on time-series image tracking" (in Chinese), vol. 34, no. 17, pages 167-175 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287846A (en) * 2020-10-30 2021-01-29 深圳市优必选科技股份有限公司 Target person following method, device, mobile robot and readable storage medium
CN112308881A (en) * 2020-11-02 2021-02-02 西安电子科技大学 Ship multi-target tracking method based on remote sensing image
CN112308881B (en) * 2020-11-02 2023-08-15 西安电子科技大学 Ship multi-target tracking method based on remote sensing image
CN112669345A (en) * 2020-12-30 2021-04-16 中山大学 Cloud deployment-oriented multi-target track tracking method and system
CN112669345B (en) * 2020-12-30 2023-10-20 中山大学 Cloud deployment-oriented multi-target track tracking method and system
CN113192105A (en) * 2021-04-16 2021-07-30 嘉联支付有限公司 Method and device for tracking multiple persons and estimating postures indoors
CN113192105B (en) * 2021-04-16 2023-10-17 嘉联支付有限公司 Method and device for indoor multi-person tracking and attitude measurement
CN113034548A (en) * 2021-04-25 2021-06-25 安徽科大擎天科技有限公司 Multi-target tracking method and system suitable for embedded terminal
CN113469118A (en) * 2021-07-20 2021-10-01 京东科技控股股份有限公司 Multi-target pedestrian tracking method and device, electronic equipment and storage medium
CN114821795A (en) * 2022-05-05 2022-07-29 北京容联易通信息技术有限公司 Personnel running detection and early warning method and system based on ReiD technology
CN114821795B (en) * 2022-05-05 2022-10-28 北京容联易通信息技术有限公司 Personnel running detection and early warning method and system based on ReiD technology

Also Published As

Publication number Publication date
CN111626194B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN111626194B (en) Pedestrian multi-target tracking method using depth correlation measurement
CN109344725B (en) Multi-pedestrian online tracking method based on space-time attention mechanism
CN110197502B (en) Multi-target tracking method and system based on identity re-identification
CN108447080B (en) Target tracking method, system and storage medium based on hierarchical data association and convolutional neural network
CN107145862B (en) Multi-feature matching multi-target tracking method based on Hough forest
CN106778712B (en) Multi-target detection and tracking method
CN110288627B (en) Online multi-target tracking method based on deep learning and data association
Kieritz et al. Joint detection and online multi-object tracking
CN106934817B (en) Multi-attribute-based multi-target tracking method and device
Lee et al. Learning discriminative appearance models for online multi-object tracking with appearance discriminability measures
CN114240997B (en) Intelligent building online trans-camera multi-target tracking method
Al-Shakarji et al. Robust multi-object tracking with semantic color correlation
KR20200061118A (en) Tracking method and system multi-object in video
CN111739053A (en) Online multi-pedestrian detection tracking method under complex scene
CN112200021A (en) Target crowd tracking and monitoring method based on limited range scene
CN113192105A (en) Method and device for tracking multiple persons and estimating postures indoors
He et al. Fast online multi-pedestrian tracking via integrating motion model and deep appearance model
Ali et al. Deep Learning Algorithms for Human Fighting Action Recognition.
CN113033523B (en) Method and system for constructing falling judgment model and falling judgment method and system
CN113537077A (en) Label multi-Bernoulli video multi-target tracking method based on feature pool optimization
CN106934339B (en) Target tracking and tracking target identification feature extraction method and device
CN112307897A (en) Pet tracking method based on local feature recognition and adjacent frame matching in community monitoring scene
CN115188081B (en) Complex scene-oriented detection and tracking integrated method
Wojke et al. Joint operator detection and tracking for person following from mobile platforms
Elbaşi Fuzzy logic-based scenario recognition from video sequences

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant