CN108734107B - Multi-target tracking method and system based on human face - Google Patents
Multi-target tracking method and system based on human face
- Publication number
- CN108734107B CN108734107B CN201810371205.9A CN201810371205A CN108734107B CN 108734107 B CN108734107 B CN 108734107B CN 201810371205 A CN201810371205 A CN 201810371205A CN 108734107 B CN108734107 B CN 108734107B
- Authority
- CN
- China
- Prior art keywords
- face
- tracker
- frame
- survival time
- human
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a face-based multi-target tracking method and system, in which a tracker revival mechanism, a face selection mechanism and a face buffer mechanism are introduced into existing face tracking technology. During video processing, faces are tracked after they are detected, and during tracking one or more faces of the best quality are selected to represent a given person. As a result, only a small number of best-quality face pictures are produced for each face in the video, which reduces the difficulty of subsequent video structuring.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face-based multi-target tracking method and system.
Background
With the rapid development of artificial intelligence, application scenarios built around face recognition technology are increasing continuously, such as face payment and identity verification against ID cards at airports and high-speed rail stations. In security systems, face-centered video structuring can monitor people in dense public places such as banks, airports and shopping malls, enabling automatic people-flow statistics, analysis based on face attributes, and automatic identification and tracking of specific people.
In actual monitoring, because of factors such as video quality, complex lighting and large face angles, faces captured in video may be blurred, turned at an excessive angle, or too dark or too bright, which directly affects the accuracy of face-attribute analysis and of identifying and tracking a specific person. On the other hand, if every face captured in real time is directly subjected to attribute analysis, face recognition and other processing without tracking and face selection, a large number of face pictures are generated, people-flow statistics becomes impossible, the computing load on the server increases greatly, processing efficiency drops, wrong results are easily produced, and the practicality of structured processing is reduced.
Disclosure of Invention
The invention provides a multi-target tracking method and system based on human faces, aiming at the technical problems in the prior art.
The technical scheme for solving the technical problems is as follows:
in one aspect, the invention provides a multi-target tracking method based on human faces, which comprises the following steps:
step 1, initializing a frame counter, acquiring a video frame, detecting faces, and creating a tracker for each detected face;
step 2, acquiring the next video frame, incrementing the frame counter by 1, judging whether a face is present at each tracker's window position, and if so, updating the corresponding tracker according to the face correlation between the previous frame and the current frame;
step 3, judging whether the frame counter has reached a preset value; if so, resetting the frame counter and executing step 4, otherwise jumping to step 2;
step 4, performing face detection again on the current video frame, and judging whether each detected face lies within an existing tracker window; if so, updating the corresponding tracker according to the face correlation between the previous frame and the current frame, otherwise creating a tracker for the newly detected face; then jumping to step 2.
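The four steps above can be sketched as a single loop. This is a hypothetical sketch, not the patent's implementation: `detect_faces` and `Tracker` are assumed stand-ins for the face detector and tracker described later, and the preset value of 5 is taken from the embodiment.

```python
# Illustrative sketch of steps 1-4; detect_faces() and Tracker are assumed
# stand-ins supplied by the caller, not APIs defined by the patent.
DETECT_INTERVAL = 5  # frame-counter preset value (the embodiment uses 5)

def track_video(frames, detect_faces, Tracker):
    trackers = []
    counter = 0
    first = next(frames)
    for face in detect_faces(first):          # step 1: detect, create trackers
        trackers.append(Tracker(face))
    for frame in frames:                      # step 2: per-frame update
        counter += 1
        for t in trackers:
            t.update(frame)                   # correlate previous/current frame
        if counter >= DETECT_INTERVAL:        # step 3: time for re-detection
            counter = 0
            for face in detect_faces(frame):  # step 4: match or create
                matched = next((t for t in trackers if t.contains(face)), None)
                if matched:
                    matched.correct(face)     # detection corrects the tracker
                else:
                    trackers.append(Tracker(face))
    return trackers
```

The same tracker set persists across detection cycles, so a face that stays in view keeps a single tracker rather than spawning a new one every detection pass.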
Further, when a tracker is created for a detected face, a buffer is also established for the tracker and its survival time is initialized; the buffer stores the quality of the face and its position in each video frame handled by the tracker during its survival period.
Further, for the survival time of the tracker:
at tracker creation, the survival time is set to 0;
each time a video frame is processed, the survival time is incremented by 1;
when a detected face is matched to the tracker, the new survival time equals the old survival time minus twice the frame-counter preset value;
when the tracker is tracking a face and the face quality reaches a preset threshold, the survival time is decreased by 1.
Further, between step 2 and step 3 the method also judges whether each tracker meets a deletion condition and, if so, deletes the tracker.
Further, the deletion condition includes that the survival time reaches an upper limit value.
Further, after a tracker is deleted, the face pictures buffered in its buffer are obtained, and one or more high-quality faces are screened out according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than the areas of the other faces.
On the other hand, the invention also provides a face-based multi-target tracking system, comprising:
The video frame acquisition module is used for acquiring video frames;
the frame counter is used for recording the number of the acquired video frames;
the face detection module is used for carrying out face detection on the acquired video frames;
the tracker creating module is used for creating a tracker for each human face detected by the human face detecting module;
and the tracker updating module is used for updating the corresponding tracker according to the face correlation of the previous frame and the current frame.
The system further comprises a buffer creating module, configured to create a buffer for each tracker when a tracker is created for a detected face, and initialize the survival time of the tracker, where the buffer is used to store the quality of the face and the position information of the face in each video frame corresponding to the tracker during the survival time.
Further, for the survival time of the tracker:
at tracker creation, the survival time is set to 0;
each time a video frame is processed, the survival time is incremented by 1;
when a detected face is matched to the tracker, the new survival time equals the old survival time minus twice the frame-counter preset value;
when the tracker is tracking a face and the face quality reaches a preset threshold, the survival time is decreased by 1.
The system further comprises a face screening module for obtaining the face pictures buffered in the corresponding buffer after tracking finishes and screening one or more high-quality faces according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than the areas of the other faces.
The beneficial effects of the invention are: during video processing, faces are tracked after they are detected, and during tracking one or more faces of the best quality are selected to represent a given person. As a result, only a small number of best-quality face pictures are produced for each face in the video, which reduces the difficulty of subsequent video structuring.
The invention introduces:
1. Revival mechanism
A quality index and an aging-time parameter are established for each tracker. When a tracker fails to track, it is not deleted immediately but is allowed to survive for a period of time; only when its survival time reaches the upper limit is the tracker deleted. This compensates for the poor performance of the original object-tracking algorithm when tracking fails, for example because of occlusion or overly fast face movement.
2. Face selection mechanism
To ensure that face attributes are captured and face recognition obtains better results, a series of face pictures is collected while a face is tracked; after tracking finishes, i.e. when the tracker is deleted, the several best-quality faces are selected according to the set filtering conditions.
3. Buffer mechanism
A tracker quality-check buffer mechanism is added, which provides better continuous-tracking performance.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of the system of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with examples, which are set forth to illustrate but not to limit the scope of the invention.
Face detection and tracking determines the position of a face in a video or image sequence and continuously tracks the face as it moves, ensuring that it remains the same person. The performance of face detection and tracking is of great significance for face image analysis, face recognition and video structuring.
Common face tracking algorithms include:
1. Perform real-time face detection on every frame to obtain face positions, and judge whether faces in consecutive frames belong to the same person from the relationship between the previous and current positions, yielding the face's motion trajectory.
2. Combine face detection with an object-tracking algorithm: face detection runs every few frames to obtain face positions, and the intervening frames are predicted by the object tracker. Detection is performed periodically, and its results continuously correct the tracker's predictions.
Of these two methods, the former consumes computing resources and runs slowly. The latter suffices for simple scenes (large, near-frontal faces without mutual occlusion), owing to the limitations of the tracking algorithm; in most real scenes, however, such as shopping malls, retail stores and ordinary streets, face angles are large, motion amplitude and speed vary widely, faces occlude one another, and face quality is uneven, so trajectory tracking easily fails, neighbouring faces interfere with each other, and false alarms occur. The invention further optimizes method 2 by adding parameters such as tracking quality to evaluate the tracking state and by adding a revival mechanism, thereby avoiding tracking failure under short-term occlusion.
The invention provides a multi-target tracking method based on human faces on one hand, which comprises the following steps as shown in figure 1:
Step 1, initialize a frame counter, acquire a video frame, detect faces, and create a tracker for each detected face. MTCNN is used as the face detector.
We choose a compound tracker: a native tracker wrapped with additional signal-processing functions. The selectable core algorithms include:
1. Kernelized Correlation Filter (KCF)
2. Median Flow (median optical flow)
3. Tracking-Learning-Detection (TLD)
Usually we choose KCF as the preferred algorithm. KCF is a discriminative tracking method: it trains a target detector during tracking, uses that detector to test whether the predicted position in the next frame contains the target, and then uses the new detection result to update the training set and hence the detector. When training the detector, the target region is taken as a positive sample and the surrounding region as negative samples, with regions closer to the target more likely to be positive.
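To illustrate the train-and-respond idea behind KCF (not the patent's own implementation), a toy single-channel, linear-kernel correlation filter can be written in a few lines; `train_filter` and `respond` are illustrative names, and the sketch omits the cosine window, padding and multi-channel features of a real KCF tracker.

```python
# Toy linear-kernel correlation filter in the spirit of KCF; 1-D signals only.
import numpy as np

def train_filter(x, y, lam=1e-2):
    # Ridge regression solved in the Fourier domain:
    # alpha_hat = Y / (X * conj(X) + lambda)
    X = np.fft.fft(x)
    return np.fft.fft(y) / (X * np.conj(X) + lam)

def respond(alpha, x_train, z):
    # Correlation response of a new patch z against the learned template;
    # the peak location is the predicted target shift.
    X = np.fft.fft(x_train)
    return np.real(np.fft.ifft(alpha * np.conj(X) * np.fft.fft(z)))
```

Training against a desired response `y` peaked at the target location makes `respond` return a map whose maximum sits where the target is found, which is how the tracker tests the predicted position in the next frame.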
When a tracker is created for a detected face, a buffer is established for the tracker and its survival time is initialized; the buffer stores the quality of the face and its position in each video frame handled by the tracker during its survival period.
For the survival time of the tracker:
at tracker creation, the survival time is set to 0;
each time a video frame is processed, the survival time is incremented by 1;
when a detected face is matched to the tracker, the new survival time equals the old survival time minus twice the frame-counter preset value;
when the tracker is tracking a face and the face quality reaches a preset threshold, the survival time is decreased by 1.
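The survival-time bookkeeping can be sketched as a small class. The class and method names are illustrative, the upper limit of 30 is an assumed value (the patent does not give one), and the flooring at 0 is an added assumption to keep the counter non-negative.

```python
# Minimal sketch of the survival-time rules; names and the flooring at 0
# are assumptions, not taken from the patent.
DETECT_INTERVAL = 5          # frame-counter preset value
SURVIVAL_UPPER_LIMIT = 30    # assumed upper limit before deletion

class Survival:
    def __init__(self):
        self.time = 0                        # set to 0 at tracker creation

    def on_frame(self):
        self.time += 1                       # +1 for every processed frame

    def on_detection_match(self):
        # a detected face matched this tracker: subtract twice the
        # frame-counter preset value (i.e. roll back two detection cycles)
        self.time = max(0, self.time - 2 * DETECT_INTERVAL)

    def on_good_quality(self):
        self.time = max(0, self.time - 1)    # quality above threshold: -1

    def expired(self):
        # deletion condition: survival time reaches the upper limit
        return self.time >= SURVIVAL_UPPER_LIMIT
```

Under these rules a tracker that keeps matching detections or tracking well stays near zero, while a failed tracker's counter climbs by 1 per frame until it expires, which is exactly the revival window described below.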
Step 2, acquire the next video frame, increment the frame counter by 1, and judge whether a face is present at each tracker's window position; if so, update the corresponding tracker according to the face correlation between the previous frame and the current frame.
When judging whether a face is present at a tracker's window position, the face is matched against the tracker as follows:
A. compute the centre position, width and height of the face picture;
B. compute the centre position, width and height of the tracker window;
C. if the centre of the face picture lies inside the tracker window and the centre of the tracker window lies inside the face rectangle, the face picture and the tracker window are considered matched.
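Steps A-C amount to a mutual centre-containment test. In this sketch boxes are `(x, y, w, h)` tuples, a representation assumed for illustration:

```python
# Mutual centre-containment match between a face box and a tracker window.
def centre(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def inside(point, box):
    x, y, w, h = box
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def match(face_box, tracker_box):
    # matched iff each box's centre lies inside the other box (steps A-C)
    return inside(centre(face_box), tracker_box) and \
           inside(centre(tracker_box), face_box)
```

Requiring containment in both directions rejects the case where a small box merely grazes the corner of a much larger one, which a one-sided test would accept.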
Step 3, judge whether each tracker meets the deletion condition and, if so, delete it. The deletion condition includes the survival time reaching its upper limit.
A native tracking algorithm easily loses its target, i.e. the tracker fails. Failed trackers must be deleted, otherwise they consume hardware resources. In our hybrid tracker we use a quality-judgment mechanism to address this problem.
Tracker quality evaluates the correlation between the object tracked in the current frame and the object tracked in the previous frame: the greater the correlation, the more likely they are the same object; the smaller the correlation, the more likely the tracker has lost the object. Typically, when the correlation falls below 7, the tracker is considered to have failed.
When the native tracker fails, the tracker is not deleted immediately; instead it is allowed to survive for a period of time.
During this time, if the tracker can be matched to a subsequently detected face, we attempt to revive it. If, after this period, the survival time reaches the upper limit and the tracker still cannot match any face, the tracker is permanently deleted.
Step 4, judge whether the frame counter has reached the preset value; if so, reset the frame counter and execute step 5, otherwise jump to step 2. The preset value of the frame counter defaults to 5, i.e. face detection is performed again every 5 acquired video frames.
Step 5, perform face detection again on the current video frame and judge whether each detected face lies within an existing tracker window; if so, update the corresponding tracker according to the face correlation between the previous frame and the current frame, otherwise create a tracker for the newly detected face; then jump to step 2.
After a tracker is deleted, the face pictures buffered in its buffer are obtained, and one or more high-quality faces are screened out according to the filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than the areas of the other faces.
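The post-deletion selection can be sketched as a filter-then-rank pass over the buffered records. The field names and every threshold value here are assumptions for illustration; the patent specifies the conditions but not concrete numbers.

```python
# Hedged sketch of the face selection step; thresholds and dict keys are
# assumed, not values from the patent.
def select_best_faces(buffered, k=1,
                      conf_thr=0.9, sharp_thr=0.5, lum_thr=40):
    """buffered: list of per-frame quality records kept by the tracker."""
    candidates = [
        f for f in buffered
        if f["confidence"] > conf_thr     # credibility score above threshold
        and f["frontal"]                  # judged to be a frontal face
        and f["sharpness"] > sharp_thr    # sharp (not blurred) enough
        and f["luminance"] > lum_thr      # bright enough
    ]
    # among the survivors, prefer the largest faces
    candidates.sort(key=lambda f: f["area"], reverse=True)
    return candidates[:k]
```

Running the absolute-threshold conditions first and the relative area comparison last matches the listed order: area is only compared among faces that already pass the quality gates.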
On the other hand, the invention also provides a face-based multi-target tracking system, as shown in FIG. 2, comprising:
the video frame acquisition module is used for acquiring video frames;
the frame counter is used for recording the number of the acquired video frames;
the face detection module is used for carrying out face detection on the acquired video frames;
the tracker creating module is used for creating a tracker for each human face detected by the human face detecting module;
and the tracker updating module is used for updating the corresponding tracker according to the face correlation of the previous frame and the current frame.
The system further comprises a buffer creating module, configured to create a buffer for each tracker when a tracker is created for a detected face, and initialize the survival time of the tracker, where the buffer is used to store the quality of the face and the position information of the face in each video frame corresponding to the tracker during the survival time.
Further, for the survival time of the tracker:
at tracker creation, the survival time is set to 0;
each time a video frame is processed, the survival time is incremented by 1;
when a detected face is matched to the tracker, the new survival time equals the old survival time minus twice the frame-counter preset value;
when the tracker is tracking a face and the face quality reaches a preset threshold, the survival time is decreased by 1.
The system further comprises a face screening module for obtaining the face pictures buffered in the corresponding buffer after tracking finishes and screening one or more high-quality faces according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than the areas of the other faces.
The beneficial effects of the invention are: during video processing, faces are tracked after they are detected, and during tracking one or more faces of the best quality are selected to represent a given person. As a result, only a small number of best-quality face pictures are produced for each face in the video, which reduces the difficulty of subsequent video structuring.
The invention introduces:
1. Revival mechanism
A quality index and an aging-time parameter are established for each tracker. When a tracker fails to track, it is not deleted immediately but is allowed to survive for a period of time; only when its survival time reaches the upper limit is the tracker deleted. This compensates for the poor performance of the original object-tracking algorithm when tracking fails, for example because of occlusion or overly fast face movement.
2. Face selection mechanism
To ensure that face attributes are captured and face recognition obtains better results, a series of face pictures is collected while a face is tracked; after tracking finishes, i.e. when the tracker is deleted, the several best-quality faces are selected according to the set filtering conditions.
3. Buffer mechanism
A tracker quality-check buffer mechanism is added, which provides better continuous-tracking performance.
The above description covers only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (6)
1. A face-based multi-target tracking method, characterized by comprising the following steps:
step 1, initializing a frame counter, acquiring a video frame, detecting faces, and creating a tracker for each detected face;
step 2, acquiring the next video frame, incrementing the frame counter by 1, judging whether a face is present at each tracker's window position, and if so, updating the corresponding tracker according to the face correlation between the previous frame and the current frame;
step 3, judging whether the frame counter has reached a preset value; if so, resetting the frame counter and executing step 4, otherwise jumping to step 2;
step 4, performing face detection again on the current video frame, and judging whether each detected face lies within an existing tracker window; if so, updating the corresponding tracker according to the face correlation between the previous frame and the current frame, otherwise creating a tracker for the newly detected face; then jumping to step 2;
wherein, when a tracker is created for a detected face, a buffer is established for the tracker and its survival time is initialized; the buffer stores the quality of the face and its position in each video frame handled by the tracker during its survival period;
for the survival time of the tracker:
at tracker creation, the survival time is set to 0;
each time a video frame is processed, the survival time is incremented by 1;
when a detected face is matched to the tracker, the new survival time equals the old survival time minus twice the frame-counter preset value;
when the tracker is tracking a face and the face quality reaches a preset threshold, the survival time is decreased by 1.
2. The face-based multi-target tracking method according to claim 1, characterized by further comprising, between step 2 and step 3, judging whether the tracker meets a deletion condition and, if so, deleting the tracker.
3. The multi-target tracking method based on human faces as claimed in claim 2, wherein the deletion condition includes that the survival time reaches an upper limit value.
4. The face-based multi-target tracking method according to claim 2, characterized in that after the tracker is deleted, the face pictures buffered in its buffer are obtained and one or more high-quality faces are screened out according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than the areas of the other faces.
5. A face-based multi-target tracking system, characterized by comprising:
The video frame acquisition module is used for acquiring video frames;
the frame counter is used for recording the number of the acquired video frames;
the face detection module is used for carrying out face detection on the acquired video frames;
the tracker creating module is used for creating a tracker for each human face detected by the human face detecting module;
the tracker updating module is used for updating the corresponding tracker according to the face correlation of the previous frame and the current frame;
the buffer creating module is used for creating a buffer for each tracker when the tracker is created for the detected face, and initializing the survival time of the tracker, wherein the buffer is used for storing the quality of the face and the position information of the face in each video frame corresponding to the tracker in the survival period;
length of survival for the tracker:
at tracker creation time, the length of survival time is set to 0;
adding 1 to the survival time length when processing one frame of video frame;
when a face is detected and a corresponding tracker is matched, the new survival time of the tracker is equal to the old survival time-2 frame counter preset value;
when a tracker tracks a face, the survival time of the tracker is reduced by 1 when the face quality reaches a preset threshold.
6. The system according to claim 5, characterized by further comprising a face screening module for obtaining the face pictures buffered in the corresponding buffer after tracking finishes and screening one or more high-quality faces according to filtering conditions; the filtering conditions include:
the credibility score of the face is greater than a credibility threshold;
the face is judged to be a frontal face;
the sharpness of the face is greater than a sharpness threshold;
the luminance of the face is greater than a luminance threshold;
the area of the face is greater than the areas of the other faces.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810371205.9A CN108734107B (en) | 2018-04-24 | 2018-04-24 | Multi-target tracking method and system based on human face |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810371205.9A CN108734107B (en) | 2018-04-24 | 2018-04-24 | Multi-target tracking method and system based on human face |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108734107A CN108734107A (en) | 2018-11-02 |
CN108734107B true CN108734107B (en) | 2021-11-05 |
Family
ID=63939826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810371205.9A Active CN108734107B (en) | 2018-04-24 | 2018-04-24 | Multi-target tracking method and system based on human face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108734107B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472247B (en) * | 2018-11-16 | 2021-11-30 | 西安电子科技大学 | Face recognition method based on deep learning non-fit type |
CN109543604A (en) * | 2018-11-21 | 2019-03-29 | 泰康保险集团股份有限公司 | Processing method, device, medium and the electronic equipment of video |
CN109886951A (en) * | 2019-02-22 | 2019-06-14 | 北京旷视科技有限公司 | Method for processing video frequency, device and electronic equipment |
CN110046548A (en) * | 2019-03-08 | 2019-07-23 | 深圳神目信息技术有限公司 | Tracking, device, computer equipment and the readable storage medium storing program for executing of face |
CN110414447B (en) | 2019-07-31 | 2022-04-15 | 京东方科技集团股份有限公司 | Pedestrian tracking method, device and equipment |
CN110874583A (en) * | 2019-11-19 | 2020-03-10 | 北京精准沟通传媒科技股份有限公司 | Passenger flow statistics method and device, storage medium and electronic equipment |
CN111178218B (en) * | 2019-12-23 | 2023-07-04 | 北京中广上洋科技股份有限公司 | Multi-feature joint video tracking method and system based on face recognition |
CN111274886B (en) * | 2020-01-13 | 2023-09-19 | 天地伟业技术有限公司 | Deep learning-based pedestrian red light running illegal behavior analysis method and system |
CN112541434B (en) * | 2020-12-14 | 2022-04-12 | 无锡锡商银行股份有限公司 | Face recognition method based on central point tracking model |
CN113283305B (en) * | 2021-04-29 | 2024-03-26 | 百度在线网络技术(北京)有限公司 | Face recognition method, device, electronic equipment and computer readable storage medium |
CN114241586B (en) * | 2022-02-21 | 2022-05-27 | 飞狐信息技术(天津)有限公司 | Face detection method and device, storage medium and electronic equipment |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101231703A (en) * | 2008-02-28 | 2008-07-30 | 上海交通大学 | Method for tracing a plurality of human faces base on correlate vector machine to improve learning |
CN101996310A (en) * | 2009-08-12 | 2011-03-30 | Tcl数码科技(深圳)有限责任公司 | Face detection and tracking method based on embedded system |
CN102496009A (en) * | 2011-12-09 | 2012-06-13 | 北京汉邦高科数字技术股份有限公司 | Multi-face tracking method for intelligent bank video monitoring |
CN102682453A (en) * | 2012-04-24 | 2012-09-19 | 河海大学 | Moving vehicle tracking method based on multi-feature fusion |
CN102857690A (en) * | 2011-06-29 | 2013-01-02 | 奥林巴斯映像株式会社 | Tracking apparatus, tracking method, shooting device and shooting method |
CN103476147A (en) * | 2013-08-27 | 2013-12-25 | 浙江工业大学 | Wireless sensor network target tracking method for energy conservation |
CN104318217A (en) * | 2014-10-28 | 2015-01-28 | 吴建忠 | Face recognition information analysis method and system based on distributed cloud computing |
CN106599836A (en) * | 2016-12-13 | 2017-04-26 | 北京智慧眼科技股份有限公司 | Multi-face tracking method and tracking system |
CN106971401A (en) * | 2017-03-30 | 2017-07-21 | 联想(北京)有限公司 | Multiple target tracking apparatus and method |
CN107085703A (en) * | 2017-03-07 | 2017-08-22 | 中山大学 | Vehicle passenger counting method fusing face detection and tracking |
CN107273810A (en) * | 2017-05-22 | 2017-10-20 | 武汉神目信息技术有限公司 | Method for automatically learning and delimiting a face detection region of interest |
CN107590452A (en) * | 2017-09-04 | 2018-01-16 | 武汉神目信息技术有限公司 | Person identification method and device based on gait and face fusion |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9940505B2 (en) * | 2014-10-23 | 2018-04-10 | Alcohol Countermeasure Systems (International) Inc. | Method for driver face detection in videos |
- 2018-04-24: CN application CN201810371205.9A filed (granted as CN108734107B); legal status: Active
Non-Patent Citations (4)
Title |
---|
Robust multi-clue face tracking system; Yujia Cao et al.; Global Congress on Intelligent Systems; 2009-08-21; 513-517 *
Research and implementation of a multi-face tracking algorithm; Zhou Dehua et al.; Video Engineering (《电视技术》); 2005-05-17; 88-90 *
Fast and stable face detection in image sequences based on the MS-KCF model; Ye Yuanzheng et al.; Journal of Computer Applications (《计算机应用》); 2018-04-13; Vol. 38, No. 8; abstract, Section 1, Figs. 1, 5 and 7 *
Face tracking and recognition system for video surveillance; Sang Haifeng et al.; Computer Engineering and Applications (《计算机工程与应用》); 2013-01-16; Vol. 50, No. 12; 175-179 *
Also Published As
Publication number | Publication date |
---|---|
CN108734107A (en) | 2018-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734107B (en) | Multi-target tracking method and system based on human face | |
CN108629299B (en) | Long-time multi-target tracking method and system combining face matching | |
CN109309811B (en) | High-altitude parabolic detection system and method based on computer vision | |
US8218818B2 (en) | Foreground object tracking | |
US8218819B2 (en) | Foreground object detection in a video surveillance system | |
WO2019083738A9 (en) | Methods and systems for applying complex object detection in a video analytics system | |
US20060056702A1 (en) | Image processing apparatus and image processing method | |
Ma et al. | Detecting moving objects by spatio-temporal entropy |
CN109063667B (en) | Scene-based video identification mode optimization and pushing method | |
US9053355B2 (en) | System and method for face tracking | |
CN113691733A (en) | Video jitter detection method and device, electronic equipment and storage medium | |
CN110598570A (en) | Pedestrian abnormal behavior detection method and system, storage medium and computer equipment | |
Haque et al. | Perception-inspired background subtraction | |
CN113011399B (en) | Video abnormal event detection method and system based on generation cooperative discrimination network | |
CN104077571A (en) | Method for detecting abnormal behavior of throng by adopting single-class serialization model | |
Kroneman et al. | Accurate pedestrian localization in overhead depth images via Height-Augmented HOG | |
Xu et al. | Feature extraction algorithm of basketball trajectory based on the background difference method | |
CN112184771A (en) | Community personnel trajectory tracking method and device | |
Zhang et al. | What makes for good multiple object trackers? | |
CN113836980A (en) | Face recognition method, electronic device and storage medium | |
Ranganarayana et al. | Modified ant colony optimization for human recognition in videos of low resolution | |
Liu et al. | Integrated multiscale appearance features and motion information prediction network for anomaly detection | |
CN114677651B (en) | Passenger flow statistical method based on low-image-quality low-frame-rate video and related device | |
Hu et al. | A new method of moving object detection and shadow removing | |
Huang et al. | Learning moving cast shadows for foreground detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||