CN108629283A - Face tracking method, device, equipment and storage medium - Google Patents
- Publication number
- CN108629283A (application CN201810283645.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- posture information
- tracked
- tracker
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Geometry (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The disclosure provides a face tracking method, device, equipment and storage medium. The method includes: in a current image frame, if two or more tracked faces meet a preset occlusion condition, obtaining target face pose information of a tracked face that is not occluded; comparing the target face pose information with sample face pose information pre-stored in trackers, and determining the tracker corresponding to the target face pose information; and assigning the non-occluded tracked face to the determined tracker for tracking. Face tracking accuracy can be improved using the embodiments of the disclosure.
Description
Technical field
This application relates to the technical field of target tracking, and in particular to a face tracking method, device, equipment and storage medium.
Background technology
Face tracking is a highly important module in industries such as video processing, security, and machine intelligence, providing key information such as target localization and target prediction to other technical modules. During face tracking, two tracked faces may occlude each other, and such mutual occlusion can cause tracking errors and greatly reduce tracking accuracy. It is therefore extremely important to track correctly in this situation.
Currently, an anti-occlusion face tracking method can be implemented with a Kalman filter and the mean-shift algorithm, which solves the problem of a tracked face being easily disturbed by surrounding objects of similar color. However, because different faces are themselves highly similar objects, tracking errors may still occur when the object occluding a face is another face.
Invention content
To overcome the problems in the related art, the disclosure provides a face tracking method, device, equipment and storage medium.
According to a first aspect of the embodiments of the disclosure, a face tracking method is provided, the method including:
in a current image frame, if two or more tracked faces meet a preset occlusion condition, obtaining target face pose information of a tracked face that is not occluded;
comparing the target face pose information with sample face pose information pre-stored in trackers, and determining the tracker corresponding to the target face pose information; and
assigning the non-occluded tracked face to the determined tracker for tracking.
In an optional implementation, the preset occlusion condition includes:
at least two tracked faces overlapping; or
the distance between at least two tracked faces being less than or equal to a preset distance threshold.
In an optional implementation, the tracked faces meeting the preset occlusion condition include an occluded tracked face and a non-occluded tracked face, each tracked face being provided with a corresponding tracker, and the method further includes:
if the number of occluded tracked faces is one, assigning the occluded tracked face to the unassigned tracker for tracking.
In an optional implementation, comparing the target face pose information with the sample face pose information pre-stored in the trackers and determining the tracker corresponding to the target face pose information includes:
determining the similarity between the target face pose information and the sample face pose information pre-stored in each tracker; and
determining the tracker corresponding to the sample face pose information with the highest similarity to the target face pose information as the tracker corresponding to the target face pose information.
In an optional implementation, the face pose information includes the up-down rotation angle, the left-right rotation angle and the in-plane rotation angle of a tracked face; the similarity is determined based on the Manhattan distance between the target face pose information and the sample face pose information pre-stored in each tracker.
In an optional implementation, the face pose information includes the angle information of a tracked face and predefined face key-point information characterizing the face pose, the angle information including the up-down rotation angle, the left-right rotation angle and the in-plane rotation angle.
Determining the similarity between the target face pose information and the sample face pose information pre-stored in each tracker includes:
determining the Manhattan distance between a target angle vector and a sample angle vector, where the target angle vector consists of the angle information in the target face pose information and the sample angle vector consists of the angle information in a tracker's sample face pose information;
determining the cosine distance between a target key-point vector and a sample key-point vector, where the target key-point vector consists of the predefined face key-point information in the target face pose information and the sample key-point vector consists of the predefined face key-point information in the tracker's sample face pose information; and
computing a weighted sum of the Manhattan distance and the cosine distance to obtain a pose distance, where the pose distance indicates the similarity between the target face pose information and the sample face pose information: the smaller the pose distance, the greater the similarity.
In an optional implementation, the predefined face key-point information includes two or more of the following:
the ratio of the horizontal distance between the two eye centers to the distance between the two eyes;
the ratio of the nose-to-left-eye distance to the nose-to-right-eye distance;
the ratio of the distance from the nose to the midpoint of the two eyes to the distance from the nose to the midpoint of the two mouth corners.
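The angle/key-point pose distance described above can be sketched as follows. This is a minimal illustration: the cosine distance is taken as 1 minus cosine similarity, the weights are equal (the patent leaves both unspecified), and the function and landmark names are hypothetical.

```python
from math import sqrt

def posture_distance(target_angles, sample_angles,
                     target_keypoints, sample_keypoints,
                     w_angle=0.5, w_keypoint=0.5):
    """Weighted pose distance: smaller means more similar."""
    # Manhattan (L1) distance between the angle vectors
    # (up-down, left-right, in-plane rotation angles)
    manhattan = sum(abs(t - s) for t, s in zip(target_angles, sample_angles))
    # Cosine distance (1 - cosine similarity) between key-point ratio vectors
    dot = sum(t * s for t, s in zip(target_keypoints, sample_keypoints))
    nt = sqrt(sum(t * t for t in target_keypoints))
    ns = sqrt(sum(s * s for s in sample_keypoints))
    cosine_dist = 1.0 - dot / (nt * ns)
    # Weighted sum; the weights here are illustrative
    return w_angle * manhattan + w_keypoint * cosine_dist

def keypoint_ratios(left_eye, right_eye, nose, left_mouth, right_mouth):
    """The three ratio features listed above, from 2-D landmark coordinates."""
    def dist(a, b):
        return sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
    eye_dist = dist(left_eye, right_eye)
    horiz = abs(left_eye[0] - right_eye[0])
    eye_center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    mouth_center = ((left_mouth[0] + right_mouth[0]) / 2, (left_mouth[1] + right_mouth[1]) / 2)
    return [
        horiz / eye_dist,                              # horizontal eye distance / eye spacing
        dist(nose, left_eye) / dist(nose, right_eye),  # nose-left-eye / nose-right-eye
        dist(nose, eye_center) / dist(nose, mouth_center),
    ]
```

A frontal face gives ratios near 1; as the head turns, the ratios drift, which is what makes them usable as a pose signature.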
According to a second aspect of the embodiments of the disclosure, a face tracking device is provided, the device including:
an information obtaining module configured to, in a current image frame, obtain target face pose information of a tracked face that is not occluded when two or more tracked faces meet a preset occlusion condition;
a tracker determining module configured to compare the target face pose information with sample face pose information pre-stored in trackers and determine the tracker corresponding to the target face pose information; and
a face tracking module configured to assign the non-occluded tracked face to the determined tracker for tracking.
In an optional implementation, the preset occlusion condition includes:
at least two tracked faces overlapping; or
the distance between at least two tracked faces being less than or equal to a preset distance threshold.
In an optional implementation, the tracked faces meeting the preset occlusion condition include an occluded tracked face and a non-occluded tracked face, each tracked face being provided with a corresponding tracker, and the face tracking module is further configured to:
if the number of occluded tracked faces is one, assign the occluded tracked face to the unassigned tracker for tracking.
In an optional implementation, the tracker determining module includes:
a similarity determining sub-module configured to determine the similarity between the target face pose information and the sample face pose information pre-stored in each tracker; and
a tracker determining sub-module configured to determine the tracker corresponding to the sample face pose information with the highest similarity to the target face pose information as the tracker corresponding to the target face pose information.
In an optional implementation, the face pose information includes the up-down rotation angle, the left-right rotation angle and the in-plane rotation angle of a tracked face; the similarity is determined based on the Manhattan distance between the target face pose information and the sample face pose information pre-stored in each tracker.
In an optional implementation, the face pose information includes the angle information of a tracked face and predefined face key-point information characterizing the face pose, the angle information including the up-down rotation angle, the left-right rotation angle and the in-plane rotation angle.
The similarity determining sub-module is specifically configured to:
determine the Manhattan distance between a target angle vector and a sample angle vector, where the target angle vector consists of the angle information in the target face pose information and the sample angle vector consists of the angle information in a tracker's sample face pose information;
determine the cosine distance between a target key-point vector and a sample key-point vector, where the target key-point vector consists of the predefined face key-point information in the target face pose information and the sample key-point vector consists of the predefined face key-point information in the tracker's sample face pose information; and
compute a weighted sum of the Manhattan distance and the cosine distance to obtain a pose distance, where the pose distance indicates the similarity between the target face pose information and the sample face pose information: the smaller the pose distance, the greater the similarity.
In an optional implementation, the predefined face key-point information includes two or more of the following:
the ratio of the horizontal distance between the two eye centers to the distance between the two eyes;
the ratio of the nose-to-left-eye distance to the nose-to-right-eye distance;
the ratio of the distance from the nose to the midpoint of the two eyes to the distance from the nose to the midpoint of the two mouth corners.
According to a third aspect of the embodiments of the disclosure, an electronic device is provided, including:
a processor; and
a memory for storing processor-executable instructions;
where the processor is configured to:
in a current image frame, if two or more tracked faces meet a preset occlusion condition, obtain target face pose information of a tracked face that is not occluded;
compare the target face pose information with sample face pose information pre-stored in trackers, and determine the tracker corresponding to the target face pose information; and
assign the non-occluded tracked face to the determined tracker for tracking.
According to a fourth aspect of the embodiments of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored, where the steps of any of the above methods are implemented when the program is executed by a processor.
The technical solutions provided by the embodiments of the disclosure can include the following beneficial effects:
With the embodiments of the disclosure, in a current image frame, when two or more tracked faces meet the preset occlusion condition, the target face pose information of a tracked face that is not occluded is obtained and compared with the sample face pose information pre-stored in the trackers; the tracker corresponding to the target face pose information is determined, and the non-occluded tracked face is assigned to the determined tracker for tracking. This ensures that a non-occluded tracked face can be assigned to the correct tracker.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the disclosure.
Description of the drawings
The drawings herein are incorporated into and form part of this specification, show embodiments consistent with the disclosure, and together with the specification serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an application scenario of a face tracking method according to an exemplary embodiment of the disclosure.
Fig. 2 is a flowchart of a face tracking method according to an exemplary embodiment of the disclosure.
Fig. 3 is a flowchart of another face tracking method according to an exemplary embodiment of the disclosure.
Fig. 4 is a block diagram of a face tracking device according to an exemplary embodiment of the disclosure.
Fig. 5 is a block diagram of another face tracking device according to an exemplary embodiment of the disclosure.
Fig. 6 is a block diagram of an apparatus for face tracking according to an exemplary embodiment.
Specific implementation mode
Exemplary embodiments will be described in detail here, with examples illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims.
The terms used in the disclosure are for the purpose of describing specific embodiments only and are not intended to limit the disclosure. The singular forms "a", "said" and "the" used in the disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the disclosure to describe various information, this information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the disclosure, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon".
During face tracking, two tracked faces may occlude each other, and such mutual occlusion can cause tracking errors and greatly reduce tracking accuracy. It is therefore extremely important to track correctly in this situation.
The face tracking method provided by the embodiments of the disclosure can be applied to scenarios in which at least two tracked faces overlap, where one or more tracked faces occlude one or more other tracked faces. Two overlapping tracked faces are taken as an example. Fig. 1 is a schematic diagram of an application scenario of a face tracking method according to an exemplary embodiment of the disclosure: two tracked faces overlap during motion, i.e. one tracked face is occluded by another. As shown in Fig. 1, tracked face A and tracked face B are initially separated by some distance; later, the motion trajectories of the two tracked faces overlap or nearly overlap, so that tracked face A occludes tracked face B. Tracked face A corresponds to one tracker, and tracked face B also corresponds to one tracker. Because the two tracked faces overlap, and current tracking algorithms — while able to solve the problem of a tracked face being disturbed by surrounding objects of similar color — cannot track correctly when two very similar objects (faces) occlude each other, mis-tracking may occur after the two tracked faces separate: the tracker that initially tracked face A may start tracking face B, which reduces the accuracy of face tracking.
In view of this, the embodiments of the disclosure provide a face tracking method: in a current image frame, when two or more tracked faces meet a preset occlusion condition, the target face pose information of a tracked face that is not occluded is obtained and compared with the sample face pose information pre-stored in the trackers; the tracker corresponding to the target face pose information is determined, and the non-occluded tracked face is assigned to the determined tracker for tracking, so that a non-occluded tracked face can be assigned to the correct tracker. The embodiments of the disclosure are described below with reference to the drawings.
Fig. 2 is a flowchart of a face tracking method according to an exemplary embodiment of the disclosure, including the following steps:
In step 201, in a current image frame, if two or more tracked faces meet a preset occlusion condition, obtain the target face pose information of a tracked face that is not occluded.
In step 202, compare the target face pose information with the sample face pose information pre-stored in the trackers, and determine the tracker corresponding to the target face pose information.
In step 203, assign the non-occluded tracked face to the determined tracker for tracking.
The method of the embodiments of the disclosure can be executed by an electronic device with computing capability; for example, the electronic device may be a smartphone, a tablet computer, a PDA (Personal Digital Assistant), a computer, a video surveillance server, etc.
The embodiments of the disclosure use trackers to track faces. A tracker (T) may consist of two parts: the filter (F) required for face tracking and the pose information (P) of the face, i.e. T = (F, P). Using the face pose information helps eliminate tracking errors when faces occlude each other.
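The T = (F, P) pairing can be sketched as a small data structure; the field names below are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Tracker:
    """T = (F, P): a correlation filter plus the pose of the face it tracks."""
    filter_state: Optional[object]  # the correlation filter F (e.g. MOSSE state)
    sample_pose: List[float]        # P: stored sample pose (angles + key-point ratios)
    assigned: bool = False          # whether a visible face is currently matched to it
```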
In an optional implementation, the filter may be a correlation filter — a new tracking algorithm obtained by improving on the existing MOSSE (Minimum Output Sum of Squared Error) target tracking algorithm. The MOSSE algorithm is described below.
In the initial frame, i.e. the first frame, sample data (F) is generated from the face region of the image, and a response map (G) is generated from the center position of the face; the correlation filter (H) is generated by combining the sample data and the response map, specifically: H1 = G1/F1, where the subscript 1 denotes the first frame. In this step, the sample data can be generated from a manually identified result. The sample data represents the image of the region where the face is located. The response map can be determined by combining the sample data with a Gaussian function; a value in the response map indicates the likelihood that the corresponding position is the face.
In the second frame, the sample data of the current frame is generated from the sample data of the initial frame, and the correlation filter of the first frame is convolved with the sample data of the second frame to obtain the response map of the second frame. When generating the second frame's sample data from the first frame's, the position of the first frame's sample data can be enlarged to determine a position range, and the part of the second frame's image within that range is used as the sample data. The position of the maximum response is found in the response map; this position is the position of the face center in the second frame. The correlation filter is then updated from the sample data and the response map of the second frame: H2 = G2/F2, where 2 denotes the second frame. The updated correlation filter can be combined with the filter of the first frame to form the filter after the second-frame update, which is used to determine the response map in the third frame.
In the i-th frame, the sample data of the i-th frame is determined from the recognition result of the (i-1)-th frame; this sample data is convolved with the filter updated after the (i-1)-th frame to obtain the response map of the i-th frame. The position of the maximum response found in the response map is the position of the face center in the i-th frame, and the correlation filter is updated from the sample data and the response map of the i-th frame: Hi = Gi/Fi, where i denotes the frame index. The updated correlation filter can be used to determine the response map in the (i+1)-th frame. Note that when determining the i-th frame's sample data from the (i-1)-th frame's recognition result, the position of the recognition result in the (i-1)-th frame can be enlarged to determine a position range, and the part of the i-th frame's image within that range is used as the i-th frame's sample data.
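The per-frame recipe above (Hi = Gi/Fi, then convolve and blend with the previous filter) can be sketched in the Fourier domain, where the division and correlation become element-wise. This is a minimal single-channel illustration, not the patent's improved algorithm; the learning rate, the regularization eps, and the Gaussian sigma are illustrative choices.

```python
import numpy as np

def gaussian_response(shape, center, sigma=2.0):
    """Response map G: a Gaussian peak at the face center position."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

class MosseFilter:
    """Simplified MOSSE-style correlation filter, kept in the Fourier domain.

    The patent's Hi = Gi/Fi becomes an element-wise division of spectra; eps
    regularizes the division, and lr blends the new filter with the old one,
    matching the "combine with the previous frame's filter" step."""

    def __init__(self, patch, center, lr=0.125, eps=1e-5):
        self.lr, self.eps = lr, eps
        self.H = self._single_frame_filter(patch, center)

    def _single_frame_filter(self, patch, center):
        F = np.fft.fft2(patch)
        G = np.fft.fft2(gaussian_response(patch.shape, center))
        return G * np.conj(F) / (F * np.conj(F) + self.eps)

    def respond(self, patch):
        """Correlate the filter with a new sample patch; the maximum of the
        returned map marks the estimated face center."""
        return np.real(np.fft.ifft2(self.H * np.fft.fft2(patch)))

    def update(self, patch, center):
        """Blend the current frame's Hi = Gi/Fi into the running filter."""
        H_new = self._single_frame_filter(patch, center)
        self.H = self.lr * H_new + (1 - self.lr) * self.H
```

A practical implementation would add preprocessing (log transform, cosine window) and random affine training samples, which the original MOSSE paper uses for robustness.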
An application scenario of the embodiments of the disclosure is one where two or more tracked faces exist in the image (assume at least tracked face A and tracked face B), each tracked face corresponding to its own tracker. In the current image frame, if two or more tracked faces meet the preset occlusion condition, the target face pose information of a tracked face that is not occluded is obtained.
Here, the preset occlusion condition is a preset condition for triggering the acquisition of face pose information. Tracked faces that meet the preset occlusion condition have a mutual occlusion relationship.
In one embodiment, the preset occlusion condition may be that at least two tracked faces overlap. The overlap may be complete or partial. From the camera's viewpoint, a tracked face close to the camera can occlude a tracked face farther from the camera; a tracked face may face the camera exactly or approximately. Whether tracked faces overlap can be determined based on whether a response map exists for a tracked face, or whether the maximum value in the response map meets a requirement.
For example, a first response map can be determined from the filter in the tracker corresponding to tracked face A and the sample data of the current frame, and a second response map from the filter in the tracker corresponding to tracked face B and the sample data of the current frame. When the first or second response map does not exist, or when the first or second response map contains no maximum value that meets the requirement, tracked face A and tracked face B are deemed to overlap. Note that the current-frame sample data used to determine the first response map often differs from that used to determine the second, because the current frame's sample data is determined from the previous frame's recognition result, and the two tracked faces generally have different recognition results in the previous frame.
Here, a tracked face that is not occluded may be a face for which a response map exists, or a face for which a maximum response can be determined in the response map.
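The response-map criterion above — a face counts as occluded when its tracker no longer produces a sufficiently strong peak — can be sketched as a simple check. The threshold is an assumption, since the patent does not fix numerically what "meets the requirement" means.

```python
def faces_overlap(resp_a, resp_b, min_peak=0.3):
    """Return True if two tracked faces are deemed to overlap: either
    tracker's response map is missing or its maximum falls below min_peak.
    Response maps are plain nested lists here; min_peak is illustrative."""
    peak_a = max(max(row) for row in resp_a) if resp_a else 0.0
    peak_b = max(max(row) for row in resp_b) if resp_b else 0.0
    return peak_a < min_peak or peak_b < min_peak
```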
In this embodiment, when at least two tracked faces overlap, the target face pose information of the non-occluded tracked face is obtained, and based on the face pose information the non-occluded tracked face can be assigned to the correct tracker, improving tracking accuracy.
In another embodiment, the preset occlusion condition may be that the distance between at least two tracked faces is less than or equal to a preset distance threshold, where the threshold may characterize the tracked objects as overlapping or as close to overlapping. If the preset distance threshold is set to characterize overlap, the determination of tracked-face overlap can be realized. If it is set to characterize closeness to overlap, both of the following can be realized: obtaining the target face pose information of the non-occluded tracked face when the tracked objects overlap, using the face pose to assist tracking; and obtaining it when two tracked objects are about to overlap or have just separated from an overlap, again using the face pose to assist tracking.
The tracked faces meeting the preset occlusion condition may include occluded tracked faces and non-occluded tracked faces. For ease of description, a non-occluded tracked face may be called a first-class tracked face, and an occluded tracked face a second-class tracked face.
In the application scenario of the embodiments of the disclosure, one or more tracked faces may occlude one or more other tracked faces. Therefore, the number of non-occluded tracked faces is at least 1; when there are multiple non-occluded tracked faces, they can be distinguished by their face pose information. The number of occluded tracked faces may be one or more; when the preset distance threshold characterizes closeness to overlap, the number of occluded tracked faces may also be 0 during some period.
Each tracked face is correspondingly provided with corresponding tracker, in one embodiment, if the tracked people being blocked
The quantity of face is for the moment, the tracked face being blocked to be distributed to unassigned tracker into line trace and is handled.It is found that by
Its tracker is had determined that in the tracked face not being blocked, and the quantity for the tracked face being blocked is one, then it can be straight
Connect the tracker being determined as a remaining tracker for tracking the tracked face being blocked.
Taking two tracked faces as an example: of the two trackers, the tracker of the unoccluded tracked face is determined based on face pose information, and the other tracked face is then assigned to the other tracker for tracking. It can be seen that when the number of occluded tracked faces is one, the tracker to which the occluded tracked face belongs can be quickly determined, and the occluded tracked face is assigned to the unassigned tracker for tracking.
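The assignment-by-elimination step for a single occluded face can be sketched as follows; the id-based data shapes (a list of tracker ids, a set of already-assigned ids) are illustrative assumptions.

```python
def assign_occluded_face(tracker_ids, assigned_tracker_ids, occluded_faces):
    """When exactly one tracked face is occluded, assign it to the
    single tracker left over after the unoccluded faces have been
    matched by pose. Returns a (face, tracker_id) pair, or None when
    elimination does not apply."""
    if len(occluded_faces) != 1:
        return None  # elimination only works for a single occluded face
    remaining = [tid for tid in tracker_ids if tid not in assigned_tracker_ids]
    if len(remaining) != 1:
        return None
    return (occluded_faces[0], remaining[0])
```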
It can be understood that when there are multiple occluded tracked faces, an occluded tracked face may, as the image frames are updated, become an unoccluded tracked face; the tracker of the now-unoccluded tracked face is then determined based on its face pose information.
Regarding face pose information: face pose information is information that embodies the pose of a face, and the sample face pose information has a correspondence with the trackers. Therefore, the target face pose information can be compared with the sample face pose information pre-stored in the trackers to determine the sample face pose information corresponding to the target face pose information, and thereby obtain the tracker corresponding to the target face pose information.
The sample face pose information pre-stored in a tracker can be the face pose information of the tracked face tracked by that tracker. By comparing the target face pose information with the sample face pose information pre-stored in the trackers, the tracker to which the tracked face corresponding to the target face pose information belongs can be determined, and the face is then tracked with the correct tracker.

In one example, the sample face pose information in a tracker can be obtained when the corresponding tracked face is first tracked.
In another example, the face pose information of each tracked face can be obtained when the tracked faces are likely to overlap, and stored as sample face pose information in the corresponding tracker. The acquisition step is thus executed only when needed, saving the storage and computing resources of the device. Specifically, in the current image frame, if the distance between two tracked faces is less than a specific distance threshold, the face pose information of each tracked face is obtained and, according to the correspondence between tracked faces and trackers, stored as sample face pose information in the corresponding tracker.

For example, suppose the tracked faces meeting the preset occlusion condition include tracked face A and tracked face B, where tracked face A corresponds to tracker A and tracked face B corresponds to tracker B. After the face pose information of tracked face A is obtained, it is stored in tracker A as sample face pose information; after the face pose information of tracked face B is obtained, it is stored in tracker B as sample face pose information.
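The lazy storage of sample face pose information can be sketched as follows; the `Tracker` record, its field names, and the dict-keyed correspondence between faces and trackers are illustrative assumptions.

```python
class Tracker:
    """Minimal tracker record holding a pre-stored sample face pose."""
    def __init__(self, tracker_id):
        self.tracker_id = tracker_id
        self.sample_pose = None  # filled lazily, only when faces approach

def store_sample_poses(face_poses, trackers, dist, dist_threshold):
    """When the distance between two tracked faces drops below the
    threshold, capture each face's current pose into its own tracker
    as the sample face pose information."""
    if dist < dist_threshold:
        for face_id, pose in face_poses.items():
            trackers[face_id].sample_pose = pose
```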
In an optional implementation, the similarity between the target face pose information and the sample face pose information pre-stored in each tracker is determined, and the tracker corresponding to the sample face pose information with the highest similarity to the target face pose information is determined as the tracker corresponding to the target face pose information.

It can be seen that, through the similarity of face pose information, this embodiment can quickly determine the sample face pose information similar to the target face pose information, and thereby obtain the tracker to which the tracked face belongs.
When faces occlude each other, two face trackers may track to the same face location. In this case, feature point localization is performed on the tracked object to obtain its pose information (Pt). Exploiting the continuity of face pose information, Pt is compared with the pose information in the two trackers, and the tracked face is classified to the correct tracker.
In one example, face pose information may include the up-down flip angle, the left-right flip angle, and the in-plane rotation angle of the tracked face; the similarity is determined based on the Manhattan distance between the target face pose information and the sample face pose information pre-stored in each tracker.

Here, face pose information may include the up-down flip angle (pitch), the left-right flip angle (yaw), and the in-plane rotation angle (roll) of the tracked face. Correspondingly, the target face pose information may include the pitch, yaw, and roll angles of the unoccluded tracked face, and the sample face pose information may include the pitch, yaw, and roll angles of the tracked face that the tracker is expected to track.
In one example, the POSIT (Pose from Orthography and Scaling with Iterations) algorithm can be used, based on the positions of the facial feature points, to estimate the pose of the face and obtain an angle vector (pitch, yaw, roll). A target angle vector is obtained from the pitch, yaw, and roll angles in the target face pose information, and a sample angle vector is obtained from the pitch, yaw, and roll angles in the sample face pose information; different sample face pose information yields different sample angle vectors. The Manhattan distance between the target angle vector and each sample angle vector can then be computed and taken as the posture distance, which indicates the similarity between the target face pose information and the sample face pose information: the smaller the posture distance, the higher the similarity. The tracker corresponding to the sample face pose information with the smallest posture distance can therefore be determined as the tracker of the unoccluded tracked face.
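The Manhattan-distance matching over angle vectors can be sketched as follows; the dictionary mapping tracker ids to sample angle vectors is an illustrative assumption.

```python
def manhattan_distance(target_angles, sample_angles):
    """Posture distance between two (pitch, yaw, roll) angle vectors:
    the sum of absolute component differences (Manhattan / L1 distance)."""
    return sum(abs(t - s) for t, s in zip(target_angles, sample_angles))

def match_tracker_by_angles(target_angles, sample_angles_by_tracker):
    """Return the tracker id whose pre-stored sample angle vector has
    the smallest Manhattan distance to the target angle vector,
    i.e. the highest similarity."""
    return min(sample_angles_by_tracker,
               key=lambda tid: manhattan_distance(
                   target_angles, sample_angles_by_tracker[tid]))
```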
In another optional implementation, face pose information includes the angle information of the tracked face and predefined face key point information for characterizing face pose, the angle information including the up-down flip angle, the left-right flip angle, and the in-plane rotation angle.

Determining the similarity between the target face pose information and the sample face pose information pre-stored in each tracker then includes:

determining the Manhattan distance between a target angle vector and a sample angle vector, where the target angle vector is formed by the angle information in the target face pose information, and the sample angle vector is formed by the angle information in the sample face pose information in a tracker;

determining the included-angle cosine distance between a target key point vector and a sample key point vector, where the target key point vector is formed by the predefined face key point information in the target face pose information, and the sample key point vector is formed by the predefined face key point information in the sample face pose information in a tracker;

performing a weighted summation of the Manhattan distance and the included-angle cosine distance to obtain a posture distance, which indicates the similarity between the target face pose information and the sample face pose information: the smaller the posture distance, the higher the similarity.
Here, face pose information includes the angle information of the tracked face and the predefined face key point information for characterizing face pose. Correspondingly, the target face pose information may include the angle information of the unoccluded tracked face (such as the pitch, yaw, and roll angles) and the predefined face key point information for characterizing its pose; the sample face pose information may include the angle information (such as the pitch, yaw, and roll angles) of the tracked face that the tracker is expected to track and the predefined face key point information for characterizing its pose.
To extract the pose information of a face, a feature point localization algorithm such as ESR (Explicit Shape Regression) may be used to obtain the positions of the facial feature points in the image, and the face pose information is then extracted from these feature points. The predefined face key point information is information formed from key points that is affected by the face pose and thus characterizes it. In one achievable manner, the predefined face key point information includes two or more of the following:

the ratio of the horizontal distance between the two eye centers to the interocular distance;

the ratio of the distance from the nose tip to the left eye to the distance from the nose tip to the right eye;

the ratio of the distance from the nose tip to the midpoint of the two eyes to the distance from the nose tip to the midpoint of the two mouth corners.
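These three ratios can be computed from five face key points as sketched below; the five-point layout (two eye centers, nose tip, two mouth corners, each as an (x, y) coordinate) is an illustrative assumption.

```python
import math

def euclid(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def keypoint_ratios(left_eye, right_eye, nose, left_mouth, right_mouth):
    """Pose-sensitive ratios (r1, r2, r3) built from five face key points."""
    # r1: horizontal distance between eye centers / interocular distance
    r1 = abs(left_eye[0] - right_eye[0]) / euclid(left_eye, right_eye)
    # r2: nose-to-left-eye distance / nose-to-right-eye distance
    r2 = euclid(nose, left_eye) / euclid(nose, right_eye)
    # r3: nose to eyes' midpoint / nose to mouth corners' midpoint
    eye_mid = ((left_eye[0] + right_eye[0]) / 2,
               (left_eye[1] + right_eye[1]) / 2)
    mouth_mid = ((left_mouth[0] + right_mouth[0]) / 2,
                 (left_mouth[1] + right_mouth[1]) / 2)
    r3 = euclid(nose, eye_mid) / euclid(nose, mouth_mid)
    return (r1, r2, r3)
```

For a frontal, symmetric face all three ratios are close to 1; turning or tilting the head shifts them, which is what makes them usable as a pose signature.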
It can be understood that the predefined face key point information may also be other information characterizing face pose; ratio information obtained from key point positions is especially useful, as it clearly reflects changes in face pose and thereby improves the accuracy of the tracker. Other predefined face key point information is not enumerated here.
Specifically, the pose information P includes the following:

P = (P2D, P3D)

The key point vector P2D includes (r1, r2, r3):

r1: the ratio of the horizontal distance between the two eye centers to the interocular distance;

r2: the ratio of the distance from the nose tip to the left eye to the distance from the nose tip to the right eye;

r3: the ratio of the distance from the nose tip to the midpoint of the two eyes to the distance from the nose tip to the midpoint of the two mouth corners.

The angle vector P3D includes (pitch, yaw, roll), where the POSIT algorithm is used to estimate the pose of the face (pitch, yaw, roll) from the positions of the facial feature points:

pitch: the up-down flip angle;

yaw: the left-right flip angle;

roll: the in-plane rotation angle.
Correspondingly, the posture distance between the target face pose information Pt and each group of pre-stored sample face pose information (assuming two trackers, and thus two groups of sample face pose information Px and Py) is computed as:

Dp = w*D2D + (1-w)*D3D

where D2D denotes the included-angle cosine distance between the P2D components of the target and sample face pose information, and D3D denotes the Manhattan distance between their P3D components. The posture distance between the target face pose information and each group of sample face pose information is thereby obtained, and the face is assigned to the tracker with the smaller posture distance.
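The weighted posture distance Dp = w*D2D + (1-w)*D3D can be sketched as follows. A pose is modeled here as a (P2D, P3D) pair of tuples, and the default weight w = 0.5 is an assumption, since the disclosure does not fix a value for w.

```python
import math

def cosine_distance(u, v):
    """Included-angle cosine distance: 1 - cos(theta) between vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def posture_distance(pt, sample, w=0.5):
    """Dp = w*D2D + (1-w)*D3D, where D2D is the cosine distance
    between key point vectors (P2D) and D3D is the Manhattan distance
    between angle vectors (P3D). Smaller Dp means higher similarity."""
    (p2d_t, p3d_t) = pt
    (p2d_s, p3d_s) = sample
    d2d = cosine_distance(p2d_t, p2d_s)
    d3d = sum(abs(a - b) for a, b in zip(p3d_t, p3d_s))
    return w * d2d + (1 - w) * d3d
```

The face is then given to the tracker whose sample pose yields the smaller Dp.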
As seen from the above embodiment, characterizing face pose with predefined face key point information that is influenced by the face pose makes the computed posture distance better reflect the similarity of the pose information, and an accurate tracker is thereby obtained.

After the tracker corresponding to the target face pose information is determined, the unoccluded tracked face can be assigned to the determined tracker for tracking.
After the unoccluded tracked face is assigned to the determined tracker, the filter of that tracker can be updated to track the face in the next frame; the filter in the tracker corresponding to the occluded tracked face may be left un-updated. The tracker further includes a filter, and the method further includes:

determining, according to the position of the unoccluded tracked face in the frame preceding the current image frame, a region image of the unoccluded tracked face in the current image frame;

convolving the region image with the filter in the tracker corresponding to the unoccluded tracked face in the previous frame, to obtain a response map of the unoccluded tracked face in the current image frame;

determining the position corresponding to the maximum response value in the response map as the position of the unoccluded tracked face in the current image frame;

determining, according to the response map and the region image, the filter in the tracker corresponding to the unoccluded tracked face in the current image frame.

The filter in the current image frame is used to determine the response map of the unoccluded tracked face in the next frame. It should be noted that the filter in the tracker corresponding to the unoccluded tracked face in the previous frame refers to the filter as updated in that previous frame.
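The response-map localization step can be sketched in the spatial domain as follows. This shows only the correlate-and-take-maximum idea; a practical correlation-filter tracker would learn and update the filter (e.g. MOSSE- or KCF-style, computed in the frequency domain), which the disclosure does not fix, and the nested-list image representation is an illustrative assumption.

```python
def correlate(region, filt):
    """Dense cross-correlation ('valid' mode) of a 2-D region image
    with a filter, both given as lists of lists: the response map."""
    rh, rw = len(region), len(region[0])
    fh, fw = len(filt), len(filt[0])
    resp = []
    for y in range(rh - fh + 1):
        row = []
        for x in range(rw - fw + 1):
            s = sum(region[y + i][x + j] * filt[i][j]
                    for i in range(fh) for j in range(fw))
            row.append(s)
        resp.append(row)
    return resp

def argmax2d(resp):
    """Location (y, x) of the maximum response value: the estimated
    face position in the current frame."""
    best = max((v, y, x) for y, row in enumerate(resp)
               for x, v in enumerate(row))
    return best[1], best[2]
```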
Further, when two overlapping tracked faces separate but the distance between them is still less than the preset distance threshold, the method continues to determine which tracker each tracked face belongs to. For example, when two overlapping tracked faces separate and the distance between them is less than the preset distance threshold, the target face pose information of the unoccluded tracked face is obtained; the target face pose information is compared with the sample face pose information pre-stored in the trackers to determine the tracker corresponding to the target face pose information; and the unoccluded tracked face is assigned to the determined tracker for tracking.
In the face tracking method provided by the embodiments of the present disclosure, when two or more tracked faces meet the preset occlusion condition, face pose information assists the face tracking algorithm, avoiding tracking errors caused by faces occluding one another and improving the accuracy of tracking.
The technical features in the above embodiments can be combined arbitrarily, as long as there is no conflict or contradiction between them. For brevity, not every combination is described, but any combination of the technical features of the above embodiments also falls within the scope of this disclosure. One such combination is illustrated below.
As shown in Fig. 3, Fig. 3 is a flowchart of another face tracking method of the disclosure according to an exemplary embodiment. The method includes:

In step 301, in the current image frame, if two or more tracked faces meet the preset occlusion condition, the target face pose information of the unoccluded tracked face is obtained.

In step 302, the similarity between the target face pose information and the sample face pose information pre-stored in each tracker is determined, and the tracker corresponding to the sample face pose information with the highest similarity to the target face pose information is determined as the tracker corresponding to the target face pose information.

In step 303, the unoccluded tracked face is assigned to the determined tracker for tracking.

In step 304, if the number of occluded tracked faces is one, the occluded tracked face is assigned to the unassigned tracker for tracking.

Parts of Fig. 3 that are identical to Fig. 2 are not repeated here.
As seen from the above embodiment, when two or more tracked faces meet the preset occlusion condition, face pose information assists the face tracking algorithm, so that the tracker to which the unoccluded tracked face belongs can be determined. When the number of occluded tracked faces is one, the tracker to which the occluded tracked face belongs can also be quickly determined, and the occluded tracked face is assigned to the unassigned tracker for tracking, improving the accuracy of tracking.
Corresponding to the foregoing embodiments of the face tracking method, the disclosure also provides embodiments of a face tracking apparatus, a device to which the apparatus is applied, and a storage medium.
As shown in Fig. 4, Fig. 4 is a block diagram of a face tracking apparatus of the disclosure according to an exemplary embodiment. The apparatus includes:

an information obtaining module 410, configured to, in the current image frame, if two or more tracked faces meet the preset occlusion condition, obtain the target face pose information of the unoccluded tracked face;

a tracker determining module 420, configured to compare the target face pose information with the sample face pose information pre-stored in the trackers, and determine the tracker corresponding to the target face pose information;

a face tracking module 430, configured to assign the unoccluded tracked face to the determined tracker for tracking.
In an optional implementation, the preset occlusion condition includes:

at least two tracked faces overlapping; or

the distance between at least two tracked faces being less than or equal to the preset distance threshold.
In an optional implementation, the tracked faces meeting the preset occlusion condition include occluded tracked faces and unoccluded tracked faces, each tracked face is provided with a corresponding tracker, and the face tracking module 430 is further configured to:

if the number of occluded tracked faces is one, assign the occluded tracked face to the unassigned tracker for tracking.
As shown in Fig. 5, Fig. 5 is a block diagram of another face tracking apparatus of the disclosure according to an exemplary embodiment. On the basis of the embodiment shown in Fig. 4, the tracker determining module 420 includes:

a similarity determining submodule 421, configured to determine the similarity between the target face pose information and the sample face pose information pre-stored in each tracker;

a tracker determining submodule 422, configured to determine the tracker corresponding to the sample face pose information with the highest similarity to the target face pose information as the tracker corresponding to the target face pose information.
In an optional implementation, face pose information includes the up-down flip angle, the left-right flip angle, and the in-plane rotation angle of the tracked face; the similarity is determined based on the Manhattan distance between the target face pose information and the sample face pose information pre-stored in each tracker.
In an optional implementation, face pose information includes the angle information of the tracked face and predefined face key point information for characterizing face pose, the angle information including the up-down flip angle, the left-right flip angle, and the in-plane rotation angle; the similarity determining submodule 421 is specifically configured to:

determine the Manhattan distance between a target angle vector and a sample angle vector, the target angle vector being formed by the angle information in the target face pose information, and the sample angle vector being formed by the angle information in the sample face pose information in a tracker;

determine the included-angle cosine distance between a target key point vector and a sample key point vector, the target key point vector being formed by the predefined face key point information in the target face pose information, and the sample key point vector being formed by the predefined face key point information in the sample face pose information in a tracker;

perform a weighted summation of the Manhattan distance and the included-angle cosine distance to obtain a posture distance, the posture distance indicating the similarity between the target face pose information and the sample face pose information: the smaller the posture distance, the higher the similarity.
In an optional implementation, the predefined face key point information includes two or more of the following:

the ratio of the horizontal distance between the two eye centers to the interocular distance;

the ratio of the distance from the nose tip to the left eye to the distance from the nose tip to the right eye;

the ratio of the distance from the nose tip to the midpoint of the two eyes to the distance from the nose tip to the midpoint of the two mouth corners.
Correspondingly, the disclosure also provides an electronic device, including a processor and a memory for storing processor-executable instructions, wherein the processor is configured to:

in the current image frame, if two or more tracked faces meet the preset occlusion condition, obtain the target face pose information of the unoccluded tracked face;

compare the target face pose information with the sample face pose information pre-stored in the trackers, and determine the tracker corresponding to the target face pose information;

assign the unoccluded tracked face to the determined tracker for tracking.
Correspondingly, the disclosure also provides a computer-readable storage medium on which a computer program is stored, the program implementing the steps of any of the above methods when executed by a processor.
The disclosure may take the form of a computer program product implemented on one or more storage media containing program code (including but not limited to disk storage, CD-ROM, optical storage, and the like). Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be accomplished by any method or technique. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
For the specific details of the implementation process and effects of the functions of the modules in the above apparatus, see the implementation process of the corresponding steps in the above method; they are not repeated here.

Since the apparatus embodiments substantially correspond to the method embodiments, relevant parts may refer to the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solution, which those of ordinary skill in the art can understand and implement without creative effort.
As shown in Fig. 6, Fig. 6 is a block diagram of an apparatus for face tracking according to an exemplary embodiment. For example, the apparatus 600 may be provided as a computer device. Referring to Fig. 6, the apparatus 600 includes a processing component 622, which further includes one or more processors, and memory resources represented by a memory 632 for storing instructions executable by the processing component 622, such as application programs. The application programs stored in the memory 632 may include one or more modules, each corresponding to a set of instructions. The processing component 622 is configured to execute the instructions so as to perform the above face tracking method.

The apparatus 600 may also include a power supply component 626 configured to perform power management of the apparatus 600, a wired or wireless network interface 650 configured to connect the apparatus 600 to a network, and an input/output (I/O) interface 658. The apparatus 600 can operate based on an operating system stored in the memory 632.
When the instructions in the memory 632 are executed by the processing component 622, the apparatus 600 can perform a face tracking method including:

in the current image frame, if two or more tracked faces meet the preset occlusion condition, obtaining the target face pose information of the unoccluded tracked face;

comparing the target face pose information with the sample face pose information pre-stored in the trackers, and determining the tracker corresponding to the target face pose information;

assigning the unoccluded tracked face to the determined tracker for tracking.
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practicing the invention disclosed herein. The disclosure is intended to cover any variations, uses, or adaptive changes of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and embodiments are to be regarded as illustrative only, with the true scope and spirit of the disclosure indicated by the following claims.

It should be understood that the disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

The foregoing are merely preferred embodiments of the disclosure and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the disclosure shall be included within the scope of protection of the disclosure.
Claims (16)
1. A face tracking method, characterized in that the method comprises:

in a current image frame, if two or more tracked faces meet a preset occlusion condition, obtaining target face pose information of an unoccluded tracked face;

comparing the target face pose information with sample face pose information pre-stored in trackers, and determining a tracker corresponding to the target face pose information;

assigning the unoccluded tracked face to the determined tracker for tracking.
2. The method according to claim 1, characterized in that the preset occlusion condition comprises:

at least two tracked faces overlapping; or

a distance between at least two tracked faces being less than or equal to a preset distance threshold.
3. The method according to claim 1, characterized in that the tracked faces meeting the preset occlusion condition comprise an occluded tracked face and an unoccluded tracked face, each tracked face being provided with a corresponding tracker, and the method further comprises:

if the number of occluded tracked faces is one, assigning the occluded tracked face to an unassigned tracker for tracking.
4. The method according to any one of claims 1 to 3, characterized in that comparing the target face pose information with the sample face pose information pre-stored in the trackers and determining the tracker corresponding to the target face pose information comprises:

determining a similarity between the target face pose information and the sample face pose information pre-stored in each tracker;

determining the tracker corresponding to the sample face pose information with the highest similarity to the target face pose information as the tracker corresponding to the target face pose information.
5. The method according to claim 4, characterized in that face pose information comprises an up-down flip angle, a left-right flip angle, and an in-plane rotation angle of a tracked face; and the similarity is determined based on a Manhattan distance between the target face pose information and the sample face pose information pre-stored in each tracker.
6. The method according to claim 4, characterized in that face pose information comprises angle information of a tracked face and predefined face key point information for characterizing face pose, the angle information comprising an up-down flip angle, a left-right flip angle, and an in-plane rotation angle; and determining the similarity between the target face pose information and the sample face pose information pre-stored in each tracker comprises:

determining a Manhattan distance between a target angle vector and a sample angle vector, the target angle vector being formed by the angle information in the target face pose information, and the sample angle vector being formed by the angle information in the sample face pose information in a tracker;

determining an included-angle cosine distance between a target key point vector and a sample key point vector, the target key point vector being formed by the predefined face key point information in the target face pose information, and the sample key point vector being formed by the predefined face key point information in the sample face pose information in a tracker;

performing a weighted summation of the Manhattan distance and the included-angle cosine distance to obtain a posture distance, the posture distance indicating the similarity between the target face pose information and the sample face pose information, wherein the smaller the posture distance, the higher the similarity.
7. The method according to claim 6, characterized in that the predefined face key point information comprises two or more of the following:

a ratio of a horizontal distance between two eye centers to an interocular distance;

a ratio of a distance from a nose tip to a left eye to a distance from the nose tip to a right eye;

a ratio of a distance from the nose tip to a midpoint of the two eyes to a distance from the nose tip to a midpoint of two mouth corners.
8. A face tracking device, wherein the device includes:
an information obtaining module configured to, in a current image frame, when two or more tracked faces meet a preset occlusion condition, obtain target face pose information of a tracked face that is not occluded;
a tracker determining module configured to compare the target face pose information with sample face pose information prestored in trackers, and determine the tracker corresponding to the target face pose information; and
a face tracking module configured to assign the tracked face that is not occluded to the determined tracker for tracking.
9. The device according to claim 8, wherein the preset occlusion condition includes:
at least two tracked faces overlapping; or
the distance between at least two tracked faces being less than or equal to a preset distance threshold.
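The occlusion condition in this claim can be sketched as a simple geometric test on face bounding boxes. The (x, y, w, h) box representation and the use of center-to-center distance for "distance between faces" are assumptions; the claim does not specify how the distance is measured.

```python
def meets_occlusion_condition(box_a, box_b, distance_threshold=50.0):
    """box = (x, y, w, h). True if the two face boxes overlap, or the
    distance between their centers is within the preset threshold."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b

    # Axis-aligned rectangle overlap test
    if ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah:
        return True

    # Otherwise compare center-to-center distance against the threshold
    ca = (ax + aw / 2, ay + ah / 2)
    cb = (bx + bw / 2, by + bh / 2)
    dist = ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5
    return dist <= distance_threshold
```

Either branch returning true triggers the occlusion handling described in the claims: pose information is captured for the non-occluded face before the tracks can be confused.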
10. The device according to claim 8, wherein the tracked faces meeting the preset occlusion condition include an occluded tracked face and a non-occluded tracked face, each tracked face being provided with a corresponding tracker; and the face tracking module is further configured to:
if the number of occluded tracked faces is one, assign the occluded tracked face to the tracker that has not been assigned, for tracking.
11. The device according to any one of claims 8 to 10, wherein the tracker determining module includes:
a similarity determining submodule configured to determine the similarity between the target face pose information and the sample face pose information prestored in each tracker; and
a tracker determining submodule configured to determine the tracker corresponding to the sample face pose information with the highest similarity to the target face pose information as the tracker corresponding to the target face pose information.
12. The device according to claim 11, wherein the face pose information includes the up-down rotation angle, the left-right rotation angle, and the in-plane rotation angle of the tracked face; and the similarity is determined based on the Manhattan distance between the target face pose information and the sample face pose information prestored in each tracker.
13. The device according to claim 11, wherein the face pose information includes angle information of the tracked face and predefined face key point information for characterizing the face pose, the angle information including an up-down rotation angle, a left-right rotation angle, and an in-plane rotation angle; and
the similarity determining submodule is specifically configured to:
determine a Manhattan distance between a target angle vector and a sample angle vector, the target angle vector being composed of the angle information in the target face pose information, and the sample angle vector being composed of the angle information in the sample face pose information in the tracker;
determine a cosine distance between a target key point vector and a sample key point vector, the target key point vector being composed of the predefined face key point information in the target face pose information, and the sample key point vector being composed of the predefined face key point information in the sample face pose information in the tracker; and
compute a weighted sum of the Manhattan distance and the cosine distance to obtain a pose distance, the pose distance indicating the similarity between the target face pose information and the sample face pose information, a smaller pose distance indicating a greater similarity.
14. The device according to claim 13, wherein the predefined face key point information includes two or more of the following:
the ratio of the horizontal distance between the centers of the two eyes to the distance between the two eyes;
the ratio of the distance from the nose to the left eye to the distance from the nose to the right eye;
the ratio of the distance from the nose to the midpoint of the two eyes to the distance from the nose to the midpoint of the two mouth corners.
15. An electronic device, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
in a current image frame, when two or more tracked faces meet a preset occlusion condition, obtain target face pose information of a tracked face that is not occluded;
compare the target face pose information with sample face pose information prestored in trackers, and determine the tracker corresponding to the target face pose information; and
assign the tracked face that is not occluded to the determined tracker for tracking.
16. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810283645.9A CN108629283B (en) | 2018-04-02 | 2018-04-02 | Face tracking method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108629283A true CN108629283A (en) | 2018-10-09 |
CN108629283B CN108629283B (en) | 2022-04-08 |
Family
ID=63696631
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810283645.9A Active CN108629283B (en) | 2018-04-02 | 2018-04-02 | Face tracking method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108629283B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102496009A (en) * | 2011-12-09 | 2012-06-13 | 北京汉邦高科数字技术股份有限公司 | Multi-face tracking method for intelligent bank video monitoring |
CN102637251A (en) * | 2012-03-20 | 2012-08-15 | 华中科技大学 | Face recognition method based on reference features |
CN105913028A (en) * | 2016-04-13 | 2016-08-31 | 华南师范大学 | Face tracking method and face tracking device based on face++ platform |
CN106682591A (en) * | 2016-12-08 | 2017-05-17 | 广州视源电子科技股份有限公司 | Face recognition method and device |
CN106778585A (en) * | 2016-12-08 | 2017-05-31 | 腾讯科技(上海)有限公司 | A kind of face key point-tracking method and device |
CN107341460A (en) * | 2017-06-26 | 2017-11-10 | 北京小米移动软件有限公司 | Face tracking method and device |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110046548A (en) * | 2019-03-08 | 2019-07-23 | 深圳神目信息技术有限公司 | Face tracking method, device, computer equipment and readable storage medium |
CN112131915A (en) * | 2019-06-25 | 2020-12-25 | 杭州海康威视数字技术股份有限公司 | Face attendance system, camera and code stream equipment |
CN112131915B (en) * | 2019-06-25 | 2023-03-24 | 杭州海康威视数字技术股份有限公司 | Face attendance system, camera and code stream equipment |
CN110705478A (en) * | 2019-09-30 | 2020-01-17 | 腾讯科技(深圳)有限公司 | Face tracking method, device, equipment and storage medium |
CN111291655A (en) * | 2020-01-21 | 2020-06-16 | 杭州微洱网络科技有限公司 | Head pose matching method for 2d image measured in E-commerce image |
CN111291655B (en) * | 2020-01-21 | 2023-06-06 | 杭州微洱网络科技有限公司 | Head posture matching method for measuring 2d image in electronic commerce image |
CN113642368B (en) * | 2020-05-11 | 2023-08-18 | 杭州海康威视数字技术股份有限公司 | Face pose determining method, device, equipment and storage medium |
CN113642368A (en) * | 2020-05-11 | 2021-11-12 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for determining human face posture and storage medium |
CN112330714A (en) * | 2020-09-29 | 2021-02-05 | 深圳大学 | Pedestrian tracking method and device, electronic equipment and storage medium |
CN112330714B (en) * | 2020-09-29 | 2024-01-09 | 深圳大学 | Pedestrian tracking method and device, electronic equipment and storage medium |
CN112417198A (en) * | 2020-12-07 | 2021-02-26 | 武汉柏禾智科技有限公司 | Face image retrieval method |
CN113031464A (en) * | 2021-03-22 | 2021-06-25 | 北京市商汤科技开发有限公司 | Device control method, device, electronic device and storage medium |
WO2023088074A1 (en) * | 2021-11-18 | 2023-05-25 | 北京眼神智能科技有限公司 | Face tracking method and apparatus, and storage medium and device |
CN116820251A (en) * | 2023-08-28 | 2023-09-29 | 中数元宇数字科技(上海)有限公司 | Gesture track interaction method, intelligent glasses and storage medium |
CN116820251B (en) * | 2023-08-28 | 2023-11-07 | 中数元宇数字科技(上海)有限公司 | Gesture track interaction method, intelligent glasses and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108629283B (en) | 2022-04-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108629283A (en) | Face tracking method, device, equipment and storage medium | |
US11170210B2 (en) | Gesture identification, control, and neural network training methods and apparatuses, and electronic devices | |
US10380788B2 (en) | Fast and precise object alignment and 3D shape reconstruction from a single 2D image | |
US10083343B2 (en) | Method and apparatus for facial recognition | |
US9978119B2 (en) | Method for automatic facial impression transformation, recording medium and device for performing the method | |
US20140185924A1 (en) | Face Alignment by Explicit Shape Regression | |
US9299161B2 (en) | Method and device for head tracking and computer-readable recording medium | |
CN109614910B (en) | Face recognition method and device | |
US20190325564A1 (en) | Image blurring methods and apparatuses, storage media, and electronic devices | |
WO2015017539A1 (en) | Rolling sequential bundle adjustment | |
US20190130536A1 (en) | Image blurring methods and apparatuses, storage media, and electronic devices | |
CN107633208B (en) | Electronic device, the method for face tracking and storage medium | |
CN105868677A (en) | Live human face detection method and device | |
CN110533694A (en) | Image processing method, device, terminal and storage medium | |
US20180197308A1 (en) | Information processing apparatus and method of controlling the same | |
CN106504265B (en) | Estimation optimization method, equipment and system | |
US10791321B2 (en) | Constructing a user's face model using particle filters | |
JP6283124B2 (en) | Image characteristic estimation method and device | |
CN110651274A (en) | Movable platform control method and device and movable platform | |
CN107341460A (en) | Face tracking method and device | |
JP2022526468A (en) | Systems and methods for adaptively constructing a 3D face model based on two or more inputs of a 2D face image | |
WO2024104144A1 (en) | Image synthesis method and apparatus, storage medium, and electrical device | |
Gu et al. | Vtst: Efficient visual tracking with a stereoscopic transformer | |
Tan et al. | A combined generalized and subject-specific 3d head pose estimation | |
KR102695527B1 (en) | Method and apparatus for object tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||