CN110688940A - Rapid face tracking method based on face detection - Google Patents

Rapid face tracking method based on face detection

Info

Publication number
CN110688940A
CN110688940A (application CN201910911024.5A)
Authority
CN
China
Prior art keywords
face; track; current frame; tracking method; method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910911024.5A
Other languages
Chinese (zh)
Inventor
周继乐
储超群
吕成涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiaxing Guang Mu Technology Co Ltd
Beijing Purple Eye Technology Co Ltd
Original Assignee
Jiaxing Guang Mu Technology Co Ltd
Beijing Purple Eye Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiaxing Guang Mu Technology Co Ltd, Beijing Purple Eye Technology Co Ltd filed Critical Jiaxing Guang Mu Technology Co Ltd
Priority to CN201910911024.5A
Publication of CN110688940A
Legal status: Pending (current)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/172 - Classification, e.g. identification
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Abstract

The invention discloses a rapid face tracking method based on face detection, belonging to the technical field of computer multimedia. The method comprises the following steps: S1: a current frame image is input, face detection is performed on it, and the association degree between each face in the detection set D and each track in the track set T is calculated; S2: matching, in which the association matrix is computed and bipartite matching is performed using the Kuhn-Munkres algorithm; S3: after the above process, the face tracking result on the current frame is obtained, and the face tracks over the whole video are obtained by performing the above operations in sequence on all subsequent frames. The invention tracks the faces appearing in a video, improves the running speed while maintaining tracking accuracy, and still tracks successfully when a face is lost and later reappears.

Description

Rapid face tracking method based on face detection
Technical Field
The invention relates to the technical field of computer multimedia, in particular to a rapid face tracking method based on face detection.
Background
Target tracking has many applications. In video surveillance it can follow a specific target through a video; in unmanned aerial vehicle auto-follow it lets a drone fly along with a pedestrian automatically; and in intelligent interaction systems it can track gestures, enhancing the interaction experience;
the method of target tracking is divided into a method based on a generating model and a method based on a discriminant model, the method based on the generating model has the following steps, the Meanshift method is a method based on probability density distribution, the target is searched along the rising direction of the probability density gradient, and the target is converged on the local maximum value of the probability density distribution through continuous iteration, the method has the advantage of high calculation speed, but the method can only be used in the situation that the difference between the color of the target and the background color is large, the Particle Filter (Particle Filter) method is a method based on Particle distribution statistics, firstly, the target is modeled, the similarity between the particles and the target is defined, when the target is searched, the particles are scattered in the current frame, the similarity between the particles and the target is counted, and the possible position of the target is predicted, the Kalman Filter method is used for establishing a motion model of the target, can be used to estimate the position of the target in the next frame;
the method based on the discriminant model is to track by a classification method, the method of tracking by Detection is more and more common, the TLD (tracking Learning Detection) method comprises three parts, a tracker predicts the position of a target in the next frame by adopting a characteristic point statistical method, a detector detects the target and synthesizes the result of the tracker, a learner is used for correcting the tracking result and the detector, the CSK method is a method based on related filtering, dense sampling is realized by a cyclic matrix, classifier Learning is carried out by Fourier transform, the KCF method has very high operation speed and can reach more than 100FPS, the MOSSE method is improved by a T method, the scale change of the target can be processed, and the related filtering method has the advantages of high operation speed and the defects of incapability of adapting to the rapid movement and shape change of the target;
the tracking method based on deep learning is more and more widely applied, because the manually designed features have weaker expression capability on the target, the target feature information obtained through deep learning is richer, the deep network model is trained by utilizing data, the convolution feature of the target is extracted, the convolution feature of the target is directly applied to related filtering at the initial stage, then a better tracking result can be obtained, the HOG feature in the SRDCF is changed into the convolution feature by the deep SRDCF, the tracking precision is greatly improved, the ECO method is accelerated from the three aspects of the model size, the sample set size and the updating strategy, the operation speed is improved while the tracking accuracy is ensured, the GOTURN method extracts the convolution feature from the current frame and the previous frame and then sends the convolution feature to the full-connection layer to predict the change of the target position;
the traditional tracking method is low in tracking precision, the deep learning-based method is low in operation speed, and most target tracking methods have the problems of low operation speed or low tracking accuracy and tracking track recall rate.
Therefore, a rapid face tracking method based on face detection is provided.
Disclosure of Invention
The present invention provides a fast face tracking method based on face detection to solve the problems in the background art.
In order to achieve the purpose, the invention provides the following technical scheme: a rapid face tracking method based on face detection comprises the following steps:
S1: inputting a current frame image and performing face detection on it to obtain the face set D of the current frame; before tracking, obtaining the track set T accumulated up to the previous frame; and then calculating the association degree between each face in the set D and each track in the set T;
S2: matching, in which the association matrix is computed, bipartite matching is performed using the Kuhn-Munkres algorithm, and the tracks in the track set T are evaluated by quality and divided into high-quality tracks and low-quality tracks;
s3: after the above process, the face tracking result on the current frame is obtained, and the face track on the whole video can be obtained by sequentially executing the above operations on all the subsequent frames in the video.
Preferably, in S1, the association degree calculation takes into account the feature distance between the face and the track, the offset between the center points of their positions, and the change in shape between their positions.
Preferably, in S1, when calculating the degree of association between the face of the current frame and the face in the trajectory, the position information of the face and the feature information of the face need to be considered, and the feature information of the face is extracted by the face re-recognition network.
Preferably, in S2, each face in D is first matched against the high-quality tracks in T; the faces in D that remain unmatched are then matched against the low-quality tracks in T.
Preferably, after the matching process ends, any faces in D that were not successfully matched are regarded as newly appeared faces, and each is added to the track set T as the starting point of a new face track.
Preferably, if a track in T is not successfully matched, it is not immediately considered ended. Instead, a threshold is set on the number of consecutive frames a track may remain lost: if the number of consecutive frames without a successful match exceeds the threshold, the track is considered ended; otherwise a face position is predicted for the track by Kalman filtering and taken as the track's result on the current frame. After the above process, the face tracking result on the current frame is obtained.
Compared with the prior art, the invention has the following beneficial effects: computing the matching degree from both face position information and face appearance feature information improves the accuracy of face tracking and reduces the influence of occlusion and illumination on the face as much as possible; a small face re-recognition network is provided for extracting face features, which saves computation time compared with a large face feature extraction network while still providing enough face information for matching; Hamming distance with optimized projection hashing replaces the traditional Euclidean distance computation, greatly reducing computation time; and the cascaded association matching scheme improves tracking precision and effectively handles faces that are lost and then reappear.
Drawings
Fig. 1 is an overall work flow diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a technical solution: a rapid face tracking method based on face detection comprises the following steps:
S1: a current frame image is input and face detection is performed on it to obtain the face set D of the current frame; before tracking, the track set T accumulated up to the previous frame is obtained; then the association degree between each face in the set D and each track in the set T is calculated, taking into account the feature distance between the face and the track, the offset between the center points of their positions, and the change in shape between their positions;
When calculating the association degree between a face of the current frame and the faces in a track, both the position information and the feature information of the face are considered. The position information of a face is a five-dimensional vector consisting of the center coordinates of the face box, the height of the face box, the width-to-height ratio of the face box, and the confidence of the face box; the feature information of the face is extracted by the face feature network described below;
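As a minimal illustration (not from the patent text), the five-dimensional position vector could be assembled from a detector's bounding box as follows; the (x, y, w, h) box format and the function name are assumptions:

    import numpy as np

    def face_position_vector(box_xywh, confidence):
        # Build the 5-D position vector (cx, cy, h, w/h, conf) described above.
        # box_xywh: top-left corner (x, y) plus width w and height h of the face box.
        x, y, w, h = box_xywh
        cx, cy = x + w / 2.0, y + h / 2.0  # center coordinates of the face box
        return np.array([cx, cy, h, w / h, confidence], dtype=np.float64)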
extracting the face features:
Because the number of distinct faces appearing across frames of a video is not large, the face features only need a certain degree of discriminability rather than fine detail, so the network that extracts them can be designed to be small;
First, a network is trained for face classification; such a network produces face features in the layer just before the output layer, so the face features can be read out from that layer. Because ResNet handles classification tasks well, a face re-recognition network TFNet (Tracking Feature Net) is designed based on ResNet, with the structure shown in Table 1:
TABLE 1. Face re-recognition network (TFNet) structure
Two convolutional layers; four ResNet layers; a fully connected layer with 128 nodes; a Softmax layer with 2622 nodes. (Original table image not reproduced; per-layer hyperparameters are not recoverable from the text.)
The first two layers of the network are convolutional layers, followed by four ResNet layers, then a fully connected layer with 128 nodes, and finally a Softmax layer with 2622 nodes that performs the face classification. The outputs of the 128-node fully connected layer are the face features used for subsequent face tracking;
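Since Table 1 survives only as an image, the exact layer hyperparameters are unknown; the PyTorch sketch below mirrors only the stated sequence (two convolutional layers, four ResNet layers, a 128-node fully connected layer, a 2622-node Softmax head), with all channel counts and strides chosen arbitrarily for illustration:

    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        # A basic residual block; the patent's exact block design is not specified.
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        def forward(self, x):
            return torch.relu(x + self.body(x))

    class TFNet(nn.Module):
        def __init__(self, num_ids=2622, feat_dim=128):
            super().__init__()
            self.stem = nn.Sequential(                 # the two leading convolutional layers
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
            self.res = nn.Sequential(*[ResBlock(64) for _ in range(4)])  # four ResNet layers
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Linear(64, feat_dim)          # 128-node layer: the tracking feature
            self.classifier = nn.Linear(feat_dim, num_ids)  # 2622-way head, training only

        def forward(self, x, return_feature=True):
            z = self.pool(self.res(self.stem(x))).flatten(1)
            feat = self.fc(z)
            return feat if return_feature else self.classifier(feat)

At training time the 2622-way head (with Softmax cross-entropy) classifies face identities; at tracking time only the 128-dimensional output of fc is used as the appearance feature.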
calculating the spatial position association degree:
Because the face position in the current frame and all face positions in a track are intrinsically related, the association between a current-frame face and a single face of a track should not be computed in isolation; rather, the association with all faces in the track is considered comprehensively. Information describing the distribution of all face positions in a track, namely the mean vector and covariance matrix of the track's face positions, is selected, and the Mahalanobis distance between the current-frame face position and the track's face positions is computed as the similarity measure:
d^{(1)}(i,j) = (d_j - y_i)^T S_i^{-1} (d_j - y_i)   (Formula 1)
where d_j denotes the position vector of the jth face in the current frame, y_i denotes the mean vector of the face positions of the ith track, and S_i denotes the covariance matrix of the ith track. A screening indicator for the association degree is then defined:
g_1(i,j) = 1(d^{(1)}(i,j) ≤ t_1)   (Formula 2)
This expression means that if the distance d^{(1)}(i,j) between the jth face of the current frame and the ith track exceeds the screening threshold t_1, the two are considered unrelated;
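A sketch of Formulas 1 and 2, assuming each track stores the mean vector and covariance matrix of its face position vectors (variable names are illustrative):

    import numpy as np

    def mahalanobis_sq(d_j, y_i, S_i):
        # Formula 1: squared Mahalanobis distance between the jth face's position
        # vector and the position distribution of the ith track.
        diff = d_j - y_i
        return float(diff @ np.linalg.solve(S_i, diff))

    def g1(d_j, y_i, S_i, t1):
        # Formula 2: spatial screening indicator; 1 means the pair survives the gate.
        return 1 if mahalanobis_sq(d_j, y_i, S_i) <= t1 else 0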
calculating the appearance feature association degree:
Because a face in a video may be disturbed by occlusion, illumination, and similar factors, using position information alone may fail to match a face to its track. Appearance feature information is therefore also considered, so that even if a face is disturbed in some frame, it can still be matched to its track as long as it appeared in an earlier frame. The association degree between the appearance features of a current-frame face and a track is calculated as:
d^{(2)}(i,j) = min{ ||r_j - r_k^{(i)}||_2 : r_k^{(i)} ∈ R_i }   (Formula 3)
where r_j is the appearance feature of the jth face in the current frame, r_k^{(i)} is the appearance feature of the kth face in the ith track, and R_i is the set of face features in the ith track. The Euclidean distance between the current-frame feature and each feature in the track is computed, and the minimum distance is taken as the appearance association. A screening indicator is likewise defined for this association:
g_2(i,j) = 1(d^{(2)}(i,j) ≤ t_2)   (Formula 4)
This expression means that if the appearance distance d^{(2)}(i,j) between the jth face in the current frame and the ith track exceeds the threshold t_2, the two are considered unrelated;
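A sketch of Formulas 3 and 4 under the same illustrative naming; R_i is a list of stored feature vectors for the ith track:

    import numpy as np

    def appearance_distance(r_j, R_i):
        # Formula 3: minimum Euclidean distance from the current-frame feature r_j
        # to any appearance feature stored in the ith track.
        return float(min(np.linalg.norm(r_j - r_k) for r_k in R_i))

    def g2(r_j, R_i, t2):
        # Formula 4: appearance screening indicator.
        return 1 if appearance_distance(r_j, R_i) <= t2 else 0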
When computing the appearance association of Formula 3, the Euclidean distances between the current-frame faces and the N stored faces of every track must all be computed, which is expensive; this computation therefore needs to be optimized;
Here the appearance association can instead be computed with Hamming distance over hash codes produced by optimized projection hashing. The OPH (Optimized Projection for Hashing) method hash-codes the features; it saves computation when calculating distances while retaining most of the feature information. The distance between two face appearance features is then computed as:
d^{(2)}(i,j) = min{ Hamming(OPH(r_j), OPH(r_k^{(i)})) : r_k^{(i)} ∈ R_i }   (Formula 5)
the method comprises the steps of firstly carrying out Hash coding on appearance characteristics of a jth face and a kth face in an ith track in a current frame, then calculating Hamming distances of the jth face and the ith face after coding to serve as distance measurement of two characteristics, wherein the optimization can greatly reduce the calculation time, and particularly when the number of faces to be calculated is large, the reduction of calculation amount caused by the optimization is more obvious;
calculating the comprehensive association matrix:
After the face spatial-position association matrix and the face appearance-feature association matrix are obtained, a comprehensive association matrix is defined to combine the two association degrees;
c(i,j) = λ·d^{(1)}(i,j) + (1 - λ)·d^{(2)}(i,j)   (Formula 6)
In the above formula λ is a weight, generally set to 0.8. The comprehensive screening expression is defined as:
g(i,j) = g_1(i,j) · g_2(i,j)   (Formula 7)
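Combining the two measures over all track and face pairs gives the matrices consumed by the matcher; a sketch using the per-pair functions from the earlier sketches (the threshold values t1 and t2 are placeholders, since the patent does not specify them):

    import numpy as np

    def build_association(tracks, faces, lam=0.8, t1=10.0, t2=0.5):
        # Formula 6: c = lam * d1 + (1 - lam) * d2; Formula 7: g = g1 * g2.
        # Each track is assumed to expose .mean, .cov and .features; each face .pos and .feat.
        c = np.zeros((len(tracks), len(faces)))
        g = np.zeros_like(c, dtype=int)
        for i, trk in enumerate(tracks):
            for j, f in enumerate(faces):
                d1 = mahalanobis_sq(f.pos, trk.mean, trk.cov)
                d2 = appearance_distance(f.feat, trk.features)
                c[i, j] = lam * d1 + (1.0 - lam) * d2
                g[i, j] = int(d1 <= t1) * int(d2 <= t2)
        return c, g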
S2: matching. The association matrix is computed and bipartite matching is performed using the Kuhn-Munkres algorithm. The tracks in the track set T are evaluated by quality and divided into high-quality and low-quality tracks; each face in D is first matched against the high-quality tracks in T, and the faces in D that remain unmatched are then matched against the low-quality tracks in T;
the cascade connection matching method comprises the following steps:
In traditional target tracking, once the association degrees are obtained, association matching is performed with the KM (Kuhn-Munkres) algorithm, an optimal matching method for bipartite graphs. For face tracking, however, faces in video frames are often affected by occlusion and illumination, so the KM method cannot be applied directly;
Drawing on the idea of the SORT algorithm, a cascaded association matching method is used to handle faces that are lost and later reappear during tracking;
Inputs: the face set D of the current frame, the face track set T up to the previous frame, and the maximum number of lost frames A allowed for a track. Outputs: the set of successfully matched faces and tracks, and the set of faces that failed to match;
Step 1: compute the association matrix C of faces and tracks according to Formula 6, and the screening matrix G according to Formula 7;
Step 2: let S be the set of successfully matched faces and tracks, initialized to the empty set; let U be the set of unmatched faces, initialized to D; let n = 0;
Step 3: if n > A, go to Step 4; otherwise:
B_n = {T_i | T_i ∈ T, T_i has gone n frames without matching a face};
input C, B_n and U into the KM algorithm to obtain the matching result {x(i,j)} between B_n and the faces;
update S using the screening matrix: S = S ∪ {(i,j) : g(i,j)·x(i,j) > 0};
delete from U the faces successfully matched in this round;
set n = n + 1 and return to the start of Step 3;
Step 4: output the set S of successfully matched faces and tracks and the set U of faces that failed to match;
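A sketch of the four steps above, with scipy's Hungarian solver standing in for the KM algorithm; the attribute time_since_update (frames since the track last matched) is illustrative bookkeeping, not the patent's naming:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def cascade_match(tracks, faces, c, g, A):
        # Cascaded association matching: tracks lost for fewer frames are matched first.
        S = []                           # Step 2: matched (track, face) index pairs
        U = list(range(len(faces)))      # Step 2: unmatched faces, initially all of D
        for n in range(A + 1):           # Step 3, repeated while n <= A
            B_n = [i for i, t in enumerate(tracks) if t.time_since_update == n]
            if not B_n or not U:
                continue
            rows, cols = linear_sum_assignment(c[np.ix_(B_n, U)])  # KM optimal matching
            newly_matched = set()
            for r, k in zip(rows, cols):
                i, j = B_n[r], U[k]
                if g[i, j] > 0:          # keep only pairs passing the screening matrix
                    S.append((i, j))
                    newly_matched.add(j)
            U = [j for j in U if j not in newly_matched]
        return S, U                      # Step 4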
For a face and track successfully matched above, the track's result in the current frame is the matched face. The set U of faces that failed to match is not simply discarded: each such face is treated as a newly appeared face and becomes the starting point of a new track, which is then tracked in subsequent frames. For each track that failed to match, a face position is predicted by Kalman filtering and taken as the track's result on the current frame; if the number of consecutive frames without a successful match exceeds the threshold, the track is considered ended;
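As a concrete illustration of the bookkeeping just described, a minimal constant-velocity Kalman track over the face-box center; the patent does not specify the state layout or noise parameters, so everything below is an assumption:

    import numpy as np

    class KalmanTrack:
        def __init__(self, pos):
            # State: (cx, cy, vx, vy); observation: the matched face's center (cx, cy).
            self.x = np.array([pos[0], pos[1], 0.0, 0.0])
            self.P = np.eye(4) * 10.0                       # state covariance
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0  # constant-velocity model
            self.H = np.eye(2, 4)                           # observe position only
            self.time_since_update = 0

        def predict(self):
            # Predicted face position, used as the track's result when no face matched.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + np.eye(4) * 1e-2
            self.time_since_update += 1
            return self.x[:2]

        def update(self, pos):
            # Standard Kalman correction with the matched face's center position.
            y = np.asarray(pos[:2]) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + np.eye(2)
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            self.time_since_update = 0

A track whose time_since_update exceeds the lost-frame threshold would then be dropped from T, and each face left in U would seed a new KalmanTrack.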
s3: after the process, a face tracking result on the current frame is obtained, and the face track on the whole video can be obtained by sequentially executing the operations on all the subsequent frames in the video.
The invention is directed to a rapid face tracking method based on face detection. Computing the matching degree jointly from face position information and face appearance feature information improves tracking accuracy and reduces the influence of occlusion and illumination on the face as much as possible. Compared with a large face feature extraction network, the proposed small network saves computation time while providing enough face information for matching. Replacing the traditional Euclidean distance computation with Hamming distance plus optimized projection hashing greatly reduces computation time. The cascaded association matching scheme improves tracking precision and effectively handles faces that are lost and then reappear.
Although the embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A rapid face tracking method based on face detection is characterized by comprising the following steps:
S1: inputting a current frame image and performing face detection on it to obtain the face set D of the current frame; before tracking, obtaining the track set T accumulated up to the previous frame; and then calculating the association degree between each face in the set D and each track in the set T;
S2: matching, namely computing the association matrix, performing bipartite matching using the Kuhn-Munkres algorithm, evaluating the tracks in the track set T by quality, and dividing them into high-quality tracks and low-quality tracks;
s3: after the above process, the face tracking result on the current frame is obtained, and the face track on the whole video can be obtained by sequentially executing the above operations on all the subsequent frames in the video.
2. The fast face tracking method based on face detection according to claim 1, characterized in that: in S1, the correlation calculation needs to take into account the characteristic distance between the two, the center point offset between the two positions, and the shape change between the two positions.
3. The fast face tracking method based on face detection according to claim 1, characterized in that: in S1, when calculating the degree of association between the face of the current frame and the face in the trajectory, the position information of the face and the feature information of the face need to be considered, and the feature information of the face is extracted by the face re-recognition network.
4. The fast face tracking method based on face detection according to claim 1, characterized in that: in S2, each face in D is first matched against the high-quality tracks in T; the faces in D that remain unmatched are then matched against the low-quality tracks in T.
5. The fast face tracking method based on face detection as claimed in claim 4, wherein: after the matching process ends, any faces in D that were not successfully matched are regarded as newly added faces, and each is added to the track set T as the starting point of a new face track.
6. The fast face tracking method based on face detection as claimed in claim 5, wherein: if a track in T is not successfully matched, it is not immediately considered ended; instead a threshold is set on the number of consecutive frames the track may remain lost. If the number of consecutive frames without a successful match exceeds the threshold, the track is considered ended; otherwise a face position is predicted for the track by Kalman filtering and taken as the track's result on the current frame. After the above process, the face tracking result on the current frame is obtained.
CN201910911024.5A 2019-09-25 2019-09-25 Rapid face tracking method based on face detection Pending CN110688940A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910911024.5A CN110688940A (en) 2019-09-25 2019-09-25 Rapid face tracking method based on face detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910911024.5A CN110688940A (en) 2019-09-25 2019-09-25 Rapid face tracking method based on face detection

Publications (1)

Publication Number Publication Date
CN110688940A (en) 2020-01-14

Family

ID=69110612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910911024.5A Pending CN110688940A (en) 2019-09-25 2019-09-25 Rapid face tracking method based on face detection

Country Status (1)

Country Link
CN (1) CN110688940A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971092A (en) * 2014-04-09 2014-08-06 中国船舶重工集团公司第七二六研究所 Facial trajectory tracking method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHAOQUN CHU ET AL.: "Optimized projection for hashing", Pattern Recognition Letters *
FENGWEI YU ET AL.: "POI: Multiple Object Tracking with High Performance Detection and Appearance Feature", Springer International Publishing Switzerland, 2016 *
NICOLAI WOJKE ET AL.: "Simple Online and Realtime Tracking with a Deep Association Metric", arXiv *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667512B (en) * 2020-05-28 2024-04-09 浙江树人学院(浙江树人大学) Multi-target vehicle track prediction method based on improved Kalman filtering
CN111667512A (en) * 2020-05-28 2020-09-15 浙江树人学院(浙江树人大学) Multi-target vehicle track prediction method based on improved Kalman filtering
CN111626232A (en) * 2020-05-29 2020-09-04 广州云从凯风科技有限公司 Disinfection method, system, equipment and medium based on biological recognition characteristics
CN111626232B (en) * 2020-05-29 2021-07-30 广州云从凯风科技有限公司 Disinfection method, system, equipment and medium based on biological recognition characteristics
CN111862624A (en) * 2020-07-29 2020-10-30 浙江大华技术股份有限公司 Vehicle matching method and device, storage medium and electronic device
CN111862624B (en) * 2020-07-29 2022-05-03 浙江大华技术股份有限公司 Vehicle matching method and device, storage medium and electronic device
CN112163568A (en) * 2020-10-28 2021-01-01 成都中科大旗软件股份有限公司 Scenic spot person searching system based on video detection
CN112070071A (en) * 2020-11-11 2020-12-11 腾讯科技(深圳)有限公司 Method and device for labeling objects in video, computer equipment and storage medium
CN112070071B (en) * 2020-11-11 2021-03-26 腾讯科技(深圳)有限公司 Method and device for labeling objects in video, computer equipment and storage medium
CN112883819A (en) * 2021-01-26 2021-06-01 恒睿(重庆)人工智能技术研究院有限公司 Multi-target tracking method, device, system and computer readable storage medium
CN112883819B (en) * 2021-01-26 2023-12-08 恒睿(重庆)人工智能技术研究院有限公司 Multi-target tracking method, device, system and computer readable storage medium
CN116245866B (en) * 2023-03-16 2023-09-08 深圳市巨龙创视科技有限公司 Mobile face tracking method and system
CN116245866A (en) * 2023-03-16 2023-06-09 深圳市巨龙创视科技有限公司 Mobile face tracking method and system

Similar Documents

Publication Publication Date Title
CN110688940A (en) Rapid face tracking method based on face detection
Gao et al. Note-rcnn: Noise tolerant ensemble rcnn for semi-supervised object detection
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN110163127A A kind of video object activity recognition method from coarse to fine
WO2020151166A1 (en) Multi-target tracking method and device, computer device and readable storage medium
CN110516556A (en) Multi-target tracking detection method, device and storage medium based on Darkflow-DeepSort
CN109671102B (en) Comprehensive target tracking method based on depth feature fusion convolutional neural network
CN107633226B (en) Human body motion tracking feature processing method
CN112288773A (en) Multi-scale human body tracking method and device based on Soft-NMS
CN108304808A (en) A kind of monitor video method for checking object based on space time information Yu depth network
CN110751674A (en) Multi-target tracking method and corresponding video analysis system
Zhai et al. Action coherence network for weakly-supervised temporal action localization
WO2007047461A9 (en) Bi-directional tracking using trajectory segment analysis
WO2022142417A1 (en) Target tracking method and apparatus, electronic device, and storage medium
CN105930790A (en) Human body behavior recognition method based on kernel sparse coding
CN111242985B (en) Video multi-pedestrian tracking method based on Markov model
CN111931571B (en) Video character target tracking method based on online enhanced detection and electronic equipment
An et al. Online RGB-D tracking via detection-learning-segmentation
Yan et al. A lightweight weakly supervised learning segmentation algorithm for imbalanced image based on rotation density peaks
CN114926859A (en) Pedestrian multi-target tracking method in dense scene combined with head tracking
CN116883457B (en) Light multi-target tracking method based on detection tracking joint network and mixed density network
Zhang Sports action recognition based on particle swarm optimization neural networks
Peng et al. Tracklet siamese network with constrained clustering for multiple object tracking
CN105956113B (en) Video data digging High Dimensional Clustering Analysis method based on particle group optimizing
CN112613472B (en) Pedestrian detection method and system based on deep search matching

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2020-01-14)