CN110969646A - Face tracking method adaptive to high frame rate - Google Patents

Face tracking method adaptive to high frame rate

Info

Publication number
CN110969646A
CN110969646A
Authority
CN
China
Prior art keywords
face
thread
frame rate
calculating
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911224810.4A
Other languages
Chinese (zh)
Inventor
廖士钞
杨霞
郭文生
高扬
向蓓蓓
卢秀台
张冯博
古涛铭
李南铮
钱智成
瞿元
黄一
潘文睿
熊宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201911224810.4A priority Critical patent/CN110969646A/en
Publication of CN110969646A publication Critical patent/CN110969646A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Abstract

The invention discloses a face tracking method adaptive to a high frame rate, applied in the field of image recognition. It addresses the problem that, because existing terminal devices lack computing power, the face tracking speed cannot stay synchronized with the frame rate of the camera device: the time-consuming face recognition and feature extraction are separated from face region prediction and run asynchronously, with a maintained motion vector used to predict the face region position quickly, so that tracking keeps pace with the video frame rate.

Description

Face tracking method adaptive to high frame rate
Technical Field
The invention relates to the field of computer vision and image processing, in particular to a face tracking method suitable for a high frame rate.
Background
Face tracking technology can track and record facial activity in continuous video, yielding important information such as how frequently people come and go and how crowds flow. It thus provides valuable references for tasks such as modern security, optimal resource allocation, business intelligence gathering, and intelligent management, and has high research significance and application value. Under different environmental characteristics and business requirements, face tracking provides rich functionality. In large shopping malls, supermarkets, and similar venues, for example, it can measure how long the same customer stays in different areas and what they consume, or how much customer traffic different goods attract. This reveals the consumption characteristics of certain customer groups and the degree of their demand for different goods, so that stock quantities and product placement can be adjusted sensibly, unsold inventory reduced, and the shopping environment made more customer-friendly. In addition, in public areas, the behavioral characteristics of specific crowds can be analyzed through face tracking, so that crimes and other malicious behavior can be avoided and prevented in advance.
Face tracking analyzes the movement of a specific face through a continuous video stream. It comprises two parts, tracking and recognition: the face region must be identified quickly and its features recorded, and this process is repeated to maintain continuous tracking and recover the motion of the target face in the video. With the rapid development of artificial intelligence, new technical approaches keep emerging in this field. Existing face tracking schemes fall roughly into two categories. The first uses traditional machine learning and pattern recognition; it recognizes faces quickly and depends little on computing power, but its accuracy is poor and it cannot effectively distinguish the faces it detects. The second trains a neural network through deep learning to perform face recognition and tracks faces from the recognition results; its tracking accuracy is very high and it distinguishes different faces effectively, but its real-time performance depends heavily on computing power, so keeping face tracking synchronized with the video capture frame rate usually requires very expensive computing resources. Collecting video information cheaply likewise demands an efficient and inexpensive analysis scheme, so a face tracking approach that achieves accurate tracking without consuming expensive computing resources is of important significance to the field.
Disclosure of Invention
To solve the above technical problem, the present invention provides a face tracking method adaptive to a high frame rate, which ensures that face tracking stays synchronized with the frame rate even when the computing power of the terminal device is limited.
The technical scheme adopted by the invention is as follows: the time-consuming processes of face recognition and feature extraction are separated from the process of face region prediction so that they are completed asynchronously. The motion vector is calculated and maintained only by the time-consuming processes, while the face region position is rapidly predicted from that motion vector, which keeps the prediction interval of the face region position synchronized with the video refresh frame rate. The steps are as follows:
First, initialize the face tracking system. Two frame buffer queues, frameQueue and outFrameQueue, and two face region information tables, faceInfoList and faceInfoListNew, are created; the face recognition module and the face feature extraction module are loaded; and an image drawing thread, showThread, is started. Image frames captured from the video stream are placed into frameQueue at a rate consistent with the video frame rate, and showThread takes frame images out of outFrameQueue and draws them, also at the video frame rate.
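For illustration, this initialization maps naturally onto Python's thread-safe queues. The sketch below is a minimal rendering in which the queue sizes, the camera index, the stop sentinel, and the helper name show_thread_fn are assumptions rather than details fixed by the patent:

```python
import queue
import threading
import cv2

# Shared structures created at initialization
frameQueue = queue.Queue(maxsize=64)      # raw frames from the video stream
outFrameQueue = queue.Queue(maxsize=64)   # frames with face regions drawn on
faceInfoList = []                         # persistent face-region records
faceInfoListNew = []                      # per-cycle results of the accurate path

def show_thread_fn():
    """showThread: pop annotated frames and draw them at the video frame rate."""
    while True:
        frame = outFrameQueue.get()
        if frame is None:                 # sentinel used here to stop the thread
            break
        cv2.imshow("tracking", frame)     # display call kept simple for illustration
        cv2.waitKey(1)

showThread = threading.Thread(target=show_thread_fn, daemon=True)
showThread.start()

# Capture loop: frames enter frameQueue paced by the camera's own frame rate.
cap = cv2.VideoCapture(0)                 # camera index 0 is an arbitrary choice
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frameQueue.put(frame)
```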
Next, perform detection initialization. The first frame of the video stream is taken as the initial detection frame and recorded as frame0. The face recognition module obtains the five-point coordinates (the two eyes, the nose, and the two lip corners) of every face in frame0; from these, the rectangular coordinates of each face region and the face orientation are calculated, the motion vector dictVec is initialized, and the information is stored into faceInfoList. This is a synchronous, blocking step; the system responds to other processing only after completing it.
The calculation of the face recognition module is divided into two steps. First, LBP features are calculated for the entire video frame image, the gray-level change is calculated from those LBP features, regions whose change is not drastic are discarded, and the remaining image regions are retained. Second, the remaining image regions are sent into an MTCNN face recognition neural network to obtain the face coordinates.
Then perform feature extraction initialization, completed by the feature extraction module in the following specific steps: traverse faceInfoList, obtain each face region image, calculate the LBP histogram vector of the image and store it as lowFaceFeature, then extract a feature vector through the neural network and store it as highFaceFeature. Finally, establish a label for each face region and store all information into faceInfoList. This is also a synchronous, blocking step; the system responds to other processing only after completing it.
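A minimal sketch of the lowFaceFeature computation for one face crop, assuming scikit-image's local_binary_pattern as a stand-in for the patent's LBP operator and a 256-bin normalized histogram layout (the patent fixes neither choice):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def low_face_feature(face_gray: np.ndarray) -> np.ndarray:
    """Compute the LBP-histogram vector (lowFaceFeature) of a grayscale face crop."""
    # 8 neighbors at radius 1 approximates the classic 3x3 LBP operator
    lbp = local_binary_pattern(face_gray, 8, 1, method="default")
    hist, _ = np.histogram(lbp.ravel(), bins=256, range=(0, 256))
    hist = hist.astype(np.float64)
    return hist / (hist.sum() + 1e-9)     # normalize so cosine comparison is scale-free
```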
The face tracking process then formally starts. The system creates one computing thread and two sub-thread groups. One thread group repeatedly performs the face recognition calculation, the repeated steps being: take a frame image out of the frame buffer queue, calculate the face coordinates through the face recognition module, and update the information into faceInfoListNew.
The other thread group repeatedly performs feature identification and matching, the repeated steps being: traverse faceInfoListNew, obtain each face region image, calculate lowFaceFeature through the feature extraction module, then compare it by cosine similarity with the lowFaceFeature entries in faceInfoList. If the similarity is high, the corresponding region information in faceInfoList is updated; otherwise the feature vector highFaceFeature is extracted through the neural network and a cosine similarity comparison is performed again with that vector, and if both comparisons give low similarity the information is added to faceInfoList as a new entry. After the comparison is completed, entries in faceInfoList whose information was not updated during the cycle are deleted.
The computing thread acquires the coordinate information of each face region from faceInfoList at two successive times. If the coordinates were updated by a sub-thread group, the motion vector dictVec is corrected from the displacement between the two sets of coordinates; if not, dictVec is kept unchanged. The position the face region should move to is then calculated through dictVec, the tracking result is recorded, and the image is drawn and placed into the frame buffer queue outFrameQueue.
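A sketch of one per-frame step of the computing thread under these rules; the FaceInfo field layout and the simplification that the correction uses the raw displacement between the two most recent observations are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class FaceInfo:
    box: tuple                  # (x, y, w, h) of the face region
    prev_box: tuple             # region coordinates from the previous step
    dictVec: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    updated: bool = False       # set when a sub-thread group refreshed the box

def predict_step(info: FaceInfo) -> None:
    """Correct dictVec if new coordinates arrived, then predict the next position."""
    if info.updated:
        # correct dictVec from the displacement between the two observations
        info.dictVec[0] = float(info.box[0] - info.prev_box[0])
        info.dictVec[1] = float(info.box[1] - info.prev_box[1])
        info.updated = False
    # predict: advance the region by the motion vector's x and y components
    x, y, w, h = info.box
    info.prev_box = info.box
    info.box = (x + info.dictVec[0], y + info.dictVec[1], w, h)
```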
The computing thread processes at the same speed as the frame rate and executes in parallel with the two sub-thread groups, without waiting for their processing results. The face recognition thread group and the feature extraction thread group run alternately in parallel: data is processed in parallel, but within each cycle face recognition always comes first and feature extraction afterwards.
The beneficial effects of the invention are as follows: by predicting the face region position with a motion vector, the method keeps the image processing speed consistent with the frame rate even when the computing power cannot support full per-frame recognition, while the asynchronous accurate calculation corrects the motion vector's error, ensuring the accuracy of the overall tracking result.
Drawings
FIG. 1 is an overall flow chart provided by an embodiment of the present invention;
FIG. 2 is a block diagram of an overall architecture of a tracking method according to an embodiment of the present invention;
FIG. 3 is an overall timing diagram of face tracking according to an embodiment of the present invention;
FIG. 4 is a flow chart of a face recognition module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a coordinate system provided by an embodiment of the present invention;
FIG. 6 is a flow diagram of a feature extraction module provided by an embodiment of the present invention;
FIG. 7 is a flow diagram of a computing module provided by an embodiment of the present invention.
Detailed Description
To help those skilled in the art understand the technical content of the invention, the invention is further explained below with reference to the accompanying drawings.
The face tracking method adaptive to a high frame rate is composed of four core modules: a face recognition module, a feature extraction module, a computing module, and a drawing module. The modules execute in parallel on different threads; data processing and information interaction are completed through two frame buffer queues, frameQueue and outFrameQueue, and two face region information lists, faceInfoList and faceInfoListNew. The overall flowchart is shown in fig. 1, and the architecture diagram in fig. 2.
The frameQueue queue stores the original frame images from the video stream, with the current frame at the head of the queue; outFrameQueue stores the frame images on which face region information has been drawn, likewise with the current frame at the head. The access speeds of both queues are consistent with the frame rate of the video stream. faceInfoList stores the face region information from the previous moment and is maintained throughout the face tracking process; face region prediction is based on it. faceInfoListNew stores the accurate face region information obtained through the time-consuming calculation; it is used only to update the data in faceInfoList and is reset after each accurate-calculation cycle. The modules and their processing sequence are described in detail below.
1. Overall running sequence
Module initialization is performed first; the initialization process is described in detail in the Disclosure of Invention and is not repeated here. After initialization is completed, the system divides into a main thread, a computing thread, a face recognition sub-thread group, and a feature extraction sub-thread group. The number of threads in each sub-thread group is determined by the face density and the terminal computing power of the specific operating environment.
The computing thread takes images out of frameQueue at the frame rate and, without waiting for the calculation results of the face recognition and feature extraction sub-threads, computes the face region information directly from faceInfoList and the motion vector dictVec. The acquisition and drawing of face region information can therefore match the video frame rate, while the time-consuming face recognition and feature extraction process merely corrects faceInfoList and dictVec each time one running cycle completes, so real-time performance is achieved without strong computing capacity. The overall running timing diagram is shown in fig. 3.
2. Face recognition module
The face recognition module adopts an improved fast face recognition method. The detailed flow is as follows: the image is first converted to grayscale and then processed with a 3 × 3 LBP operator to obtain the LBP image (the LBP operator is a standard method in the industry and is not described again here). The gray-value change frequency of each 150 × 150 pixel area of the LBP image is then calculated: if the change is drastic, the pixel coordinates of the area are recorded; if it is mild, the area image is ignored. The retained areas are then cropped from the original color image according to the recorded LBP-image area coordinates and sent to the MTCNN face recognition network. If recognition succeeds, the output coordinates of the five points (the two eyes, the nose, and the left and right lip corners) are recorded and the motion vector dictVec is calculated. Because region screening is performed in advance, the subsequent MTCNN calculation is greatly accelerated. The whole processing flow is shown in fig. 4.
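The two-stage flow can be sketched as follows; the variance threshold used to decide whether a tile's gray-level change counts as "drastic", and the detector interface (detect_faces, as exposed by the mtcnn PyPI package), are stand-ins the patent does not prescribe:

```python
import cv2
from skimage.feature import local_binary_pattern

def screen_and_detect(frame_bgr, detector, var_threshold=400.0, tile=150):
    """LBP-screen 150x150 tiles, then run face detection only on the busy tiles."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, 8, 1, method="default")
    results = []
    h, w = gray.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = lbp[y:y + tile, x:x + tile]
            if patch.var() < var_threshold:           # change too mild: discard the area
                continue
            crop = frame_bgr[y:y + tile, x:x + tile]  # crop from the original color image
            rgb = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)
            for det in detector.detect_faces(rgb):    # each result carries a box and 5 keypoints
                results.append((x, y, det))           # keep the tile offset for frame coordinates
    return results
```

With the mtcnn package, detector = MTCNN() would satisfy this interface; any detector that returns boxes plus the five facial keypoints fits the same role.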
During initialization, the motion vector and the five-point coordinates are stored into faceInfoList; when running in the sub-thread group, the information is stored into faceInfoListNew.
3. Motion vector calculation
The motion vector is used to predict the motion direction and rate of the face region and to calculate the face region's position at the current moment. It is a three-dimensional space vector whose origin is the upper-left corner of the image: the positive x axis points right from the origin, the positive y axis points down, and the positive z axis points backward; the coordinate system is shown in fig. 5. The z axis expresses scaling, with a value range of -20 to 20: negative values mean enlargement and positive values mean reduction. Adding the x and y components of the motion vector to the face region coordinates gives the coordinates of the face region at the current moment after horizontal displacement.
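A sketch of applying one motion-vector step to a face rectangle under this coordinate convention; the per-unit scaling factor tied to the z component is an assumed constant, since the patent fixes only the sign convention and the -20 to 20 range:

```python
def apply_dict_vec(box, dictVec, scale_per_z=0.01):
    """Translate a face box by (x, y) and rescale it about its center by z."""
    x, y, w, h = box
    dx, dy, dz = dictVec
    scale = 1.0 - dz * scale_per_z        # dz in [-20, 20]; negative dz enlarges
    cx, cy = x + dx + w / 2.0, y + dy + h / 2.0
    nw, nh = w * scale, h * scale
    return (cx - nw / 2.0, cy - nh / 2.0, nw, nh)
```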
To initialize the motion vector dictVec, with the nose-tip coordinate obtained by face recognition as the reference, the line connecting the left eye and the left lip corner is recorded as L1, and the line connecting the right eye and the right lip corner as L2; the perpendicular distance from the nose tip to L1 is recorded as H1, and that to L2 as H2. If the ratio of H1 to H2 is less than 1:2, dictVec is initialized to the left with an amplitude of 2 pixels; if the ratio of H1 to H2 is greater than 2:1, dictVec is initialized to the right with an amplitude of 2 pixels; otherwise dictVec is initialized forward with an amplitude of -0.2.
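The initialization rule can be sketched directly from the landmark geometry; the landmark dictionary keys follow the mtcnn package's naming and are an assumption, as is reading both distances from the nose tip:

```python
def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    area2 = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))  # 2x triangle area
    return area2 / (((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5 + 1e-9)

def init_dict_vec(lm):
    """Initialize dictVec from the five landmarks using the H1:H2 ratio rule."""
    h1 = point_line_distance(lm["nose"], lm["left_eye"], lm["mouth_left"])    # nose to L1
    h2 = point_line_distance(lm["nose"], lm["right_eye"], lm["mouth_right"])  # nose to L2
    if 2 * h1 < h2:              # H1:H2 < 1:2 -> initialize leftward, 2 px
        return [-2.0, 0.0, 0.0]
    if h1 > 2 * h2:              # H1:H2 > 2:1 -> initialize rightward, 2 px
        return [2.0, 0.0, 0.0]
    return [0.0, 0.0, -0.2]      # roughly frontal: forward with amplitude -0.2
```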
In the tracking process, the value of dictVec is corrected by calculating the displacement between the five-point coordinates of the face at the previous moment and the five-point coordinates recognized at the current moment.
4. Feature extraction module
Feature extraction is mainly used to determine which mark a face region belongs to. The feature types are lowFaceFeature and highFaceFeature. lowFaceFeature is a vector formed from LBP histograms, calculated from the images corresponding to the face region information in faceInfoList and faceInfoListNew; the histogram calculation is consistent with the general method in the industry. highFaceFeature is a feature vector extracted by a neural network, a 27-layer ResNet trained with an A-Softmax loss function, whose output feature vector has 1024 dimensions.
The similarity of the features in faceInfoList and faceInfoListNew must be compared in order to delete or update the face marks. Similarity between features is measured by cosine similarity: if it is greater than 80%, the features belong to the same face mark; if it is less than 30%, they do not. To accelerate the calculation, lowFaceFeature is used for a pre-comparison: if either relationship is already satisfied, no further comparison is performed; otherwise highFaceFeature is extracted through the neural network for further comparison. After the similarity comparison, the information in faceInfoList is updated or added, and face marks in faceInfoList that were not updated within one recognition cycle are deleted. The flow chart is shown in fig. 6.
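A sketch of the two-stage match: the cheap LBP-histogram comparison decides the clear cases, and the neural-network feature is extracted only when the result falls between the two thresholds. The record layout and the extract_high callback are illustrative placeholders:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity of two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def same_face(new_face, known_face, extract_high):
    """Two-stage comparison: lowFaceFeature first, highFaceFeature only if inconclusive."""
    s = cosine(new_face["lowFaceFeature"], known_face["lowFaceFeature"])
    if s > 0.80:
        return True              # fast path: confidently the same face mark
    if s < 0.30:
        return False             # fast path: confidently a different mark
    # inconclusive: extract and compare the 1024-dimensional network features
    if new_face.get("highFaceFeature") is None:
        new_face["highFaceFeature"] = extract_high(new_face["image"])
    return cosine(new_face["highFaceFeature"], known_face["highFaceFeature"]) > 0.80
```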
The initialization process calculates and stores the lowFaceFeature and highFaceFeature of each face region; when running in the sub-thread group, the calculation proceeds according to the process above.
5. Computing module and drawing module
The computing module runs in the computing thread at a processing speed consistent with the frame rate. The computing thread acquires the coordinate information of each face region from faceInfoList at two successive times; if the coordinates were updated by a sub-thread group, the motion vector dictVec is corrected from the displacement between the two sets of coordinates, and if not, dictVec is kept unchanged. The position the face region should move to is then calculated through dictVec, the tracking result is recorded, and the image is drawn and placed into the frame buffer queue outFrameQueue. The flow chart is shown in fig. 7. The drawing module runs in the main thread and is responsible for taking images out of outFrameQueue and drawing them to the display interface.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, which is not limited to the specifically recited embodiments and examples. Various modifications and alterations will be apparent to those skilled in the art; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the scope of the claims of the present invention.

Claims (7)

1. A face tracking method adaptive to a high frame rate, characterized by comprising the following steps:
S1, performing fast face recognition;
S2, describing the face region information and performing marking and motion prediction;
S3, constructing and maintaining a face region information table, and using the table to mark the face position and face label in real time, the marking frequency being synchronized with the frame rate.
2. The face tracking method adaptive to a high frame rate as claimed in claim 1, wherein step S1 specifically comprises:
S11, calculating LBP features for the current frame image of the video;
S12, calculating the gray-level change of the LBP features, discarding the areas whose change is not drastic, and retaining the remaining image;
S13, processing the image areas retained after step S12 with an MTCNN face recognition network to obtain the coordinates of the five points of the face: the two eyes, the nose, and the two lip corners.
3. The face tracking method adaptive to a high frame rate as claimed in claim 1, wherein step S2 specifically comprises:
S21, establishing a face region information structure object faceInfo;
S22, establishing a motion vector dictVec: taking the nose coordinate of the face as the reference, calculate the proportion between the distances from the nose to the two eye-to-lip-corner lines; if the left-side proportion is much smaller than the right-side one, initialize dictVec to the left; if the right-side proportion is much smaller than the left-side one, initialize dictVec to the right; otherwise initialize dictVec forward; then store dictVec into faceInfo;
S23, calculating the LBP histogram vector of each face, saving it as lowFaceFeature, and storing lowFaceFeature into faceInfo.
4. The face tracking method adaptive to a high frame rate as claimed in claim 1, wherein step S3 specifically comprises:
S31, establishing a face region information table faceInfoList;
S32, establishing a frame buffer queue and retaining the image frames captured from the camera;
S33, starting a computing thread, a face recognition thread, and a feature extraction thread;
S34, the computing thread taking images out of the frame buffer queue at the same frequency as the frame rate, correcting the motion vector dictVec, updating the face region position through dictVec, and drawing the display image;
S35, the face recognition thread cyclically taking the current frame out of the frame buffer queue, performing step S1 recited in claim 1, and updating the five-point face coordinates obtained in step S13 into faceInfoList;
S36, the feature extraction thread calculating the LBP histogram vector lowFaceFeature of each face region, extracting the face feature highFaceFeature with a CNN neural network, updating the face marks with lowFaceFeature and highFaceFeature, and removing duplicated data.
5. The face tracking method adaptive to a high frame rate as claimed in claim 4, wherein the processing frequency of the face recognition thread in step S35 is not synchronized with the refresh frame rate; each pass of the processing flow handles one frame of image, processing only the current frame.
6. The face tracking method adaptive to a high frame rate as claimed in claim 4, wherein step S36 specifically comprises:
A1, taking images out of the frame buffer queue synchronously with the frame rate, calculating the LBP histogram vector lowFaceFeature of each face region, and calculating the cosine similarity between every two regions using lowFaceFeature; if the similarity is greater than 80%, updating the information of one region; if the similarity is less than 80% and greater than 30%, performing step A2;
A2, extracting the face feature highFaceFeature with a ResNet neural network and calculating the cosine similarity of highFaceFeature between every two regions; if the similarity is greater than 80%, updating the information of one region, and otherwise performing no processing; this step is performed asynchronously and its calculation frequency is not synchronized with the frame rate.
7. The face tracking method adaptive to a high frame rate as claimed in claim 4, wherein the face recognition thread and the feature extraction thread run alternately in parallel, that is, they process data in parallel, but within each cycle face recognition is performed first and feature extraction afterwards.
CN201911224810.4A 2019-12-04 2019-12-04 Face tracking method adaptive to high frame rate Pending CN110969646A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911224810.4A CN110969646A (en) 2019-12-04 2019-12-04 Face tracking method adaptive to high frame rate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911224810.4A CN110969646A (en) 2019-12-04 2019-12-04 Face tracking method adaptive to high frame rate

Publications (1)

Publication Number Publication Date
CN110969646A true CN110969646A (en) 2020-04-07

Family

ID=70032883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911224810.4A Pending CN110969646A (en) 2019-12-04 2019-12-04 Face tracking method adaptive to high frame rate

Country Status (1)

Country Link
CN (1) CN110969646A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190130167A1 (en) * 2017-10-28 2019-05-02 Altumview Systems Inc. Enhanced face-detection and face-tracking for resource-limited embedded vision systems
CN108205666A (en) * 2018-01-21 2018-06-26 山东理工大学 A kind of face identification method based on depth converging network
CN110069983A (en) * 2019-03-08 2019-07-30 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on display medium
CN110232307A (en) * 2019-04-04 2019-09-13 中国石油大学(华东) A kind of multi-frame joint face recognition algorithms based on unmanned plane
CN110263691A (en) * 2019-06-12 2019-09-20 合肥中科奔巴科技有限公司 Head movement detection method based on android system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIFENG SHEN et al., "Person Tracking and Frontal Face Capture with UAV", 2018 IEEE 18th International Conference on Communication Technology (ICCT) *
任梓涵 et al., "Real-time video face recognition based on visual tracking" (基于视觉跟踪的实时视频人脸识别), Journal of Xiamen University (Natural Science Edition) *
董胜, "Design and implementation of a video-stream face recognition system based on the correlation of face region features" (基于人脸区域特征相关性的视频流人脸识别系统设计与实现), China Master's Theses Full-Text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116257139A (en) * 2023-02-27 2023-06-13 荣耀终端有限公司 Eye movement tracking method and electronic equipment
CN116257139B (en) * 2023-02-27 2023-12-22 荣耀终端有限公司 Eye movement tracking method and electronic equipment
CN116245866A (en) * 2023-03-16 2023-06-09 深圳市巨龙创视科技有限公司 Mobile face tracking method and system
CN116245866B (en) * 2023-03-16 2023-09-08 深圳市巨龙创视科技有限公司 Mobile face tracking method and system

Similar Documents

Publication Title
WO2020167581A1 (en) Method and apparatus for processing video stream
CN109919977B (en) Video motion person tracking and identity recognition method based on time characteristics
Zhou et al. AAM based face tracking with temporal matching and face segmentation
US8620026B2 (en) Video-based detection of multiple object types under varying poses
US9953215B2 (en) Method and system of temporal segmentation for movement analysis
CN102270346B (en) Method for extracting target object from interactive video
Li et al. Saliency model-based face segmentation and tracking in head-and-shoulder video sequences
Kumar et al. Learning-based approach to real time tracking and analysis of faces
CN108280844B (en) Video target positioning method based on area candidate frame tracking
Chetverikov et al. Dynamic texture as foreground and background
CN101470809A (en) Moving object detection method based on expansion mixed gauss model
Li et al. A data-driven approach for facial expression retargeting in video
CN110969646A (en) Face tracking method adaptive to high frame rate
CN114170570A (en) Pedestrian detection method and system suitable for crowded scene
Lin et al. Temporally coherent 3D point cloud video segmentation in generic scenes
Pavlov et al. Application for video analysis based on machine learning and computer vision algorithms
CN108764177A (en) A kind of moving target detecting method based on low-rank decomposition and expression combination learning
CN112507835A (en) Method and system for analyzing multi-target object behaviors based on deep learning technology
Duan et al. An approach to dynamic hand gesture modeling and real-time extraction
CN110929632A (en) Complex scene-oriented vehicle target detection method and device
Luo et al. Alignment and tracking of facial features with component-based active appearance models and optical flow
Karbasi et al. Real-time hand detection by depth images: A survey
Wu et al. Partially occluded head posture estimation for 2D images using pyramid HoG features
Xu et al. Saliency model based head pose estimation by sparse optical flow
Bhuvaneswari et al. TRACKING MANUALLY SELECTED OBJECT IN VIDEOS USING COLOR HISTOGRAM MATCHING.

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 20200407)