CN110009662A - Face tracking method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Face tracking method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
CN110009662A
Authority
CN
China
Prior art keywords
face
information
detection box
tracking trajectory
existing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910262510.9A
Other languages
Chinese (zh)
Other versions
CN110009662B (en)
Inventor
杨弋
周舒畅
张一山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Beijing Maigewei Technology Co Ltd
Original Assignee
Beijing Maigewei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Maigewei Technology Co Ltd filed Critical Beijing Maigewei Technology Co Ltd
Priority to CN201910262510.9A priority Critical patent/CN110009662B/en
Publication of CN110009662A publication Critical patent/CN110009662A/en
Application granted granted Critical
Publication of CN110009662B publication Critical patent/CN110009662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application provides a face tracking method and apparatus, an electronic device, and a computer-readable storage medium, and relates to the technical field of image processing. The method comprises: processing at least one frame image in a video stream to obtain detection box information of at least one face; determining, based on the detection box information, attribute information corresponding to the at least one face; and tracking the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face. The embodiment of the present application reduces the probability that the tracking trajectories of multiple faces are swapped when multiple faces are tracked, improves the accuracy of face tracking, and thereby improves user experience.

Description

Face tracking method and apparatus, electronic device, and computer-readable storage medium
Technical field
The present application relates to the technical field of image processing, and in particular to a face tracking method and apparatus, an electronic device, and a computer-readable storage medium.
Background art
With the development of information technology, target-object tracking technology has developed as well. Tracking a target object means following its trajectory across the frame images of a video stream.
On many smart cameras and face capture machines, a target object is tracked by following its detection box in every frame image; for example, a face is tracked based on the face detection box in every frame image so as to obtain its tracking trajectory. However, existing face tracking technology is only applicable to fairly simple scenes, for example scenes in which each frame image contains only one target face or only one face to be tracked. In a somewhat more complex scene, for example the scene shown in Fig. 1a in which the true trajectories of two faces cross in the video, existing face tracking technology is likely to produce tracking trajectories in which the trajectories of the faces are incorrectly swapped, as shown in Fig. 1b, so the accuracy of face tracking is low and user experience suffers.
Therefore, how to track faces more accurately has become a critical issue.
Summary of the invention
The present application provides a face tracking method and apparatus, an electronic device, and a computer-readable storage medium, which are used to solve the technical problem that the accuracy of tracking a target object is low and user experience is poor.
In a first aspect, a face tracking method is provided, the method comprising:
processing at least one frame image in a video stream to obtain detection box information of at least one face;
determining, based on the detection box information, attribute information corresponding to the at least one face;
tracking the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face.
In one possible implementation, tracking the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face comprises:
matching the detection box information of the at least one face and the attribute information of the at least one face against existing tracking trajectory information;
updating the existing tracking trajectory information based on the matching result, so as to track the at least one face.
In one possible implementation, matching the detection box information of the at least one face and the attribute information of the at least one face against the existing tracking trajectory information, and updating the existing tracking trajectory information based on the matching result, comprises:
computing a similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face;
updating the existing tracking trajectory information according to the similarity matrix.
In one possible implementation, updating the existing tracking trajectory information according to the similarity matrix comprises:
determining the elements of the similarity matrix that are not greater than a preset threshold;
determining, from the elements of the similarity matrix that are not greater than the preset threshold and by means of a bipartite-graph optimal matching algorithm, a set of matching edges, where each matching edge in the set represents one matched pair consisting of a tracking trajectory and the detection box information and attribute information of a face;
updating the existing tracking trajectory information according to the set of matching edges.
In one possible implementation, updating the existing tracking trajectory information includes at least one of the following:
if the face information corresponding to any frame image in the video stream does not contain an existing tracking trajectory, deleting from the existing tracking trajectory information the trajectory that is not contained in the face information corresponding to that frame image;
if the existing tracking trajectory information does not contain the face information corresponding to any frame image in the video stream, adding the face information corresponding to that frame image to the existing tracking trajectory information;
where the face information includes the detection box information of a face and the attribute information corresponding to the face.
In one possible implementation, the attribute information corresponding to any face includes at least one of the following:
age information; gender information.
In one possible implementation, computing the similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face comprises:
computing any element of the similarity matrix according to a specific formula;
determining the similarity matrix from the computed elements;
where the specific formula is:
A_ij = (T_i1 - f_j1)² × a + (T_i2 - f_j2)² × b + (T_i3 - f_j3)² × c + (T_i4 - f_j4) × d, where T_i1 is the age information of the face of any one of the existing tracking trajectories and f_j1 is the age information corresponding to a face detection box; T_i2 is the probability that the face of that tracking trajectory is male or female and f_j2 is the probability that the face of the detection box is male or female; T_i3 - f_j3 denotes the Euclidean distance between the center of that tracking trajectory and the center of the face detection box; and T_i4 - f_j4 is the feature distance between that tracking trajectory and the face detection box.
In one possible implementation, determining, based on the detection box information, the attribute information corresponding to the at least one face comprises:
outputting, based on the detection box information and by means of a trained network model, an attribute feature vector corresponding to the at least one face.
In a second aspect, a face tracking apparatus is provided, the apparatus comprising:
a processing module configured to process at least one frame image in a video stream to obtain detection box information of at least one face;
a determining module configured to determine, based on the detection box information, attribute information corresponding to the at least one face;
a tracking module configured to track the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face.
In one possible implementation, the tracking module includes a matching unit and an updating unit, where:
the matching unit is configured to match the detection box information and the attribute information of the at least one face against existing tracking trajectory information;
the updating unit is configured to update the existing tracking trajectory information based on the matching result of the matching unit, so as to track the at least one face.
In one possible implementation, the matching unit is specifically configured to compute a similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face;
the updating unit is specifically configured to update the existing tracking trajectory information according to the similarity matrix.
In one possible implementation, the updating unit is specifically configured to determine the elements of the similarity matrix that are not greater than a preset threshold;
the updating unit is specifically further configured to determine, from the elements of the similarity matrix that are not greater than the preset threshold and by means of a bipartite-graph optimal matching algorithm, a set of matching edges, where each matching edge in the set represents one matched pair consisting of a tracking trajectory and the detection box information and attribute information of a face;
the updating unit is specifically further configured to update the existing tracking trajectory information according to the set of matching edges.
In one possible implementation, the updating unit is specifically configured to: when the face information corresponding to any frame image in the video stream does not contain an existing tracking trajectory, delete from the existing tracking trajectory information the trajectory that is not contained in the face information corresponding to that frame image; and/or
when the existing tracking trajectory information does not contain the face information corresponding to any frame image in the video stream, add the face information corresponding to that frame image to the existing tracking trajectory information;
where the face information includes the detection box information of a face and the attribute information corresponding to the face.
In one possible implementation, the attribute information corresponding to any face includes at least one of the following:
age information; gender information.
In one possible implementation, the matching unit is specifically configured to compute any element of the similarity matrix according to a specific formula;
the matching unit is specifically further configured to determine the similarity matrix from the computed elements;
where the specific formula is:
A_ij = (T_i1 - f_j1)² × a + (T_i2 - f_j2)² × b + (T_i3 - f_j3)² × c + (T_i4 - f_j4) × d, where T_i1 is the age information of the face of any one of the existing tracking trajectories and f_j1 is the age information corresponding to a face detection box; T_i2 is the probability that the face of that tracking trajectory is male or female and f_j2 is the probability that the face of the detection box is male or female; T_i3 - f_j3 denotes the Euclidean distance between the center of that tracking trajectory and the center of the face detection box; and T_i4 - f_j4 is the feature distance between that tracking trajectory and the face detection box.
In one possible implementation, the determining module is specifically configured to output, based on the detection box information and by means of a trained network model, an attribute feature vector corresponding to the at least one face.
In a third aspect, an electronic device is provided, the electronic device comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the operations corresponding to the face tracking method shown in the first aspect or any possible implementation of the first aspect of the present application.
In a fourth aspect, a computer-readable storage medium is provided; the storage medium stores at least one instruction, at least one program segment, a code set or an instruction set, and the at least one instruction, the at least one program segment, the code set or the instruction set is loaded and executed by a processor to implement the face tracking method shown in the first aspect or any possible implementation of the first aspect.
The technical solution provided by the present application has the following beneficial effects:
the present application provides a face tracking method and apparatus, an electronic device, and a computer-readable storage medium. Compared with the prior art, the present application processes at least one frame image in a video stream to obtain the detection box information of at least one face, then determines, based on the detection box information, the attribute information corresponding to the at least one face, and then tracks the at least one face based on the detection box information of the at least one face and the corresponding attribute information. In other words, when tracking the at least one face, the present application relies not only on the detection box information in the frame image but also on the attribute information of the face in each detection box, which reduces the probability that the tracking trajectories of multiple faces are swapped when multiple faces are tracked, improves the accuracy of face tracking, and thereby improves user experience.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments of the present application are briefly introduced below.
Fig. 1a is a schematic diagram of two target objects whose true trajectories cross in a video;
Fig. 1b is a schematic diagram of the tracking trajectories of two target objects being incorrectly swapped;
Fig. 1c is a schematic flowchart of a face tracking method provided by an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a face tracking apparatus provided by an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device for face tracking provided by an embodiment of the present application;
Fig. 4 is an example of the form of a detection box in an embodiment of the present application;
Fig. 5 is a schematic flowchart of face tracking in a certain application scenario.
Detailed description of the embodiments
Embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numbers throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are only intended to explain the present application, and are not to be construed as limiting the claims.
Those skilled in the art will understand that, unless expressly stated otherwise, the singular forms "a", "an", "the" and "said" used herein may also include plural forms. It should be further understood that the wording "comprising" used in the description of the present application means that the stated features, integers, steps, operations, elements and/or components are present, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may also be present. In addition, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The wording "and/or" as used herein includes all or any units and all combinations of one or more of the associated listed items.
To make the purposes, technical solutions and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the drawings.
The technical solution of the present application and how it solves the above technical problem are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and identical or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application are described below with reference to the drawings.
An embodiment of the present application provides a face tracking method, as shown in Fig. 1c. The method comprises:
Step S101: processing at least one frame image in a video stream to obtain detection box information of at least one face.
For the embodiment of the present application, a detection box may be identification information that indicates the region of a target object in an image, and it may be drawn in an arbitrary shape, for example a rectangle or a square. For example, as shown in Fig. 4, the square box in the image is a detection box used to indicate the face region of a gorilla in that image.
For the embodiment of the present application, step S101 may specifically include: passing at least one frame image of the video stream through a preset model (a detection network) to obtain the detection box information of at least one face.
For the embodiment of the present application, the detection network may use any one of the following network structures (a sketch of this detection step is given after the list):
single-shot multibox detector (SSD) network; SSD with ResNet18 (Residual Neural Network); SSD with ResNet50; SSD with ResNet100; SSD with ShuffleNetV2; region convolutional neural network (RCNN); RCNN with ResNet18; RCNN with ResNet50; RCNN with ResNet100; RCNN with ShuffleNetV2; Faster RCNN; Faster RCNN with ResNet18; Faster RCNN with ResNet50; Faster RCNN with ResNet100; Faster RCNN with ShuffleNetV2; YOLO-v1; YOLO-v1 with ResNet18; YOLO-v1 with ResNet50; YOLO-v1 with ResNet100; YOLO-v1 with ShuffleNetV2; YOLO-v2; YOLO-v2 with ResNet18; YOLO-v2 with ResNet50; YOLO-v2 with ResNet100; YOLO-v2 with ShuffleNetV2; YOLO-v3; YOLO-v3 with ResNet18; YOLO-v3 with ResNet50; YOLO-v3 with ResNet100; YOLO-v3 with ShuffleNetV2.
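As an informal illustration of step S101, the following Python sketch iterates over a video stream and collects face detection boxes per frame. The `face_detector` argument is a hypothetical callable standing in for whichever of the detection networks listed above is actually used; only the OpenCV video-reading calls are real library APIs.

```python
import cv2  # OpenCV is used here only to read frames from the video stream


def detect_faces_in_stream(video_path, face_detector):
    """Yield (frame, boxes) pairs for each frame image of a video stream.

    `face_detector` is a hypothetical callable wrapping one of the detection
    networks listed above (for example SSD with ShuffleNetV2); it is assumed
    to return a list of boxes, each as (x, y, width, height, score).
    """
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of the stream
        boxes = face_detector(frame)
        yield frame, boxes
    cap.release()
```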
In another possible implementation of the embodiment of the present application, the detection box information includes at least one of: the position of the detection box in the frame image, and the size of the detection box.
Step S102: determining, based on the detection box information, attribute information corresponding to the at least one face.
In another possible implementation of the embodiment of the present application, step S102 may specifically include: outputting, based on the detection box information and by means of a trained network model, an attribute feature vector corresponding to the at least one face.
For the embodiment of the present application, the face in any frame image is identified based on the detection box information, and the identified face image information is passed through the trained network model to determine the attribute feature vector corresponding to the at least one face.
For the embodiment of the present application, the attribute information corresponding to any face is a value predicted by a preset model (i.e. the trained network model mentioned above). In the embodiment of the present application, if an attribute takes discrete values (for example, gender), the information predicted by the preset model is the probability that the attribute belongs to a certain class; if an attribute takes continuous values (for example, age), the preset model outputs the specific value of the attribute.
For example, for the age attribute, the preset model outputs an age value in years, for example 5 years old; for the gender attribute, the preset model may output a probability of 0.8 that the face is male and a probability of 0.2 that it is female.
Since any face may correspond to multiple attributes, the attribute information corresponding to any face includes an attribute feature vector corresponding to that face.
For the embodiment of the present application, determining the attribute information of the at least one face in any frame image by means of a preset model and based on the detection box information improves the accuracy of the determined attribute information and also improves the efficiency of determining it, and therefore improves both the accuracy and the efficiency of face tracking.
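A minimal sketch of step S102, assuming a hypothetical trained `attribute_net` that returns an age value and a female probability for a face crop, and a hypothetical `feature_net` that returns a face recognition feature vector; neither stands for the patent's concrete network model. The returned dictionary is the per-face description used by the matching sketches further below.

```python
import numpy as np


def face_descriptor(frame, box, attribute_net, feature_net):
    """Crop a detected face and build its per-frame description.

    `box` is (x, y, width, height); `attribute_net` is assumed to return
    (age_in_years, probability_of_female) for the crop, and `feature_net`
    is assumed to return a high-dimensional face feature vector.
    """
    x, y, w, h = [int(v) for v in box[:4]]
    crop = frame[y:y + h, x:x + w]
    age, p_female = attribute_net(crop)
    return {
        'age': float(age),                     # continuous attribute
        'p_female': float(p_female),           # discrete attribute as a probability
        'center': (x + w / 2.0, y + h / 2.0),  # center of the detection box
        'feature': np.asarray(feature_net(crop), dtype=np.float32),
    }
```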
Step S103: tracking the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face.
The embodiment of the present application provides a face tracking method. Compared with the prior art, the present application processes at least one frame image in a video stream to obtain the detection box information of at least one face, then determines, based on the detection box information, the attribute information corresponding to the at least one face, and then tracks the at least one face based on the detection box information and the corresponding attribute information. In other words, when tracking the at least one face, the embodiment of the present application relies not only on the detection box information in the frame image but also on the attribute information of the face in each detection box, which reduces the probability that the tracking trajectories of multiple faces are swapped when multiple faces are tracked, improves the accuracy of face tracking, and thereby improves user experience.
In another possible implementation of the embodiment of the present application, step S103 may specifically include step S1031 (not shown) and step S1032 (not shown), where:
Step S1031: matching the detection box information and the attribute information of the at least one face against existing tracking trajectory information.
Step S1032: updating the existing tracking trajectory information based on the matching result, so as to track the at least one face.
In another possible implementation of the embodiment of the present application, the attribute information corresponding to any face includes at least one of the following:
age information; gender information; skin color information; hair color information; iris color information; accessory information.
In another possible implementation of the embodiment of the present application, step S1031 may specifically include: computing a similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face; and step S1032 may specifically include: updating the existing tracking trajectory information according to the similarity matrix.
In another possible implementation of the embodiment of the present application, computing the similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face comprises: computing any element of the similarity matrix according to a specific formula, and determining the similarity matrix from the computed elements;
where the specific formula is:
A_ij = (T_i1 - f_j1)² × a + (T_i2 - f_j2)² × b + (T_i3 - f_j3)² × c + (T_i4 - f_j4) × d, where T_i1 is the age information of the face of any one of the existing tracking trajectories and f_j1 is the age information corresponding to a face detection box; T_i2 is the probability that the face of that tracking trajectory is male or female and f_j2 is the probability that the face of the detection box is male or female; T_i3 - f_j3 denotes the Euclidean distance between the center of that tracking trajectory and the center of the face detection box; and T_i4 - f_j4 is the feature distance between that tracking trajectory and the face detection box.
For example, suppose there are n original trajectories (the existing trajectory corresponding to each face), denoted T_1, T_2, ..., T_n, and the new frame contains m faces, denoted f_1, f_2, ..., f_m.
The element in row i, column j of the computed similarity matrix A is then: the squared difference between the age attributes of trajectory T_i and face f_j multiplied by coefficient a, plus the squared difference between the predicted female probabilities in the gender attributes of T_i and f_j multiplied by coefficient b, plus the squared Euclidean distance between the face centers of T_i and f_j multiplied by coefficient c, plus the feature distance between T_i and f_j multiplied by coefficient d.
Here a, b, c and d are constants chosen in advance, and the exact way the face feature distance is computed is usually determined by the specific face recognition algorithm: the features of each face are usually represented as a high-dimensional vector, and the feature distance is expressed as the squared Euclidean distance between two such vectors or as the cosine of the angle between them in the vector space.
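The element-wise formula above can be written out directly; the sketch below does so, assuming each trajectory and face is described by the dictionary produced in the earlier attribute sketch. The default coefficients a, b, c, d and the use of a squared Euclidean feature distance are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np


def similarity_matrix(tracks, faces, a=1.0, b=1.0, c=1.0, d=1.0):
    """Build the matrix A with A[i, j] as defined by the formula above.

    Each entry of `tracks` and `faces` is a dict with keys 'age',
    'p_female', 'center' and 'feature' (see face_descriptor above).
    """
    A = np.zeros((len(tracks), len(faces)), dtype=np.float64)
    for i, t in enumerate(tracks):
        for j, f in enumerate(faces):
            age_term = (t['age'] - f['age']) ** 2 * a
            gender_term = (t['p_female'] - f['p_female']) ** 2 * b
            center_dist = np.linalg.norm(np.asarray(t['center']) - np.asarray(f['center']))
            center_term = center_dist ** 2 * c
            # Feature distance taken here as the squared Euclidean distance between
            # face feature vectors, one of the options mentioned above.
            feature_term = np.linalg.norm(t['feature'] - f['feature']) ** 2 * d
            A[i, j] = age_term + gender_term + center_term + feature_term
    return A
```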
In another possible implementation of the embodiment of the present application, updating the existing tracking trajectory information according to the similarity matrix comprises: determining the elements of the similarity matrix that are not greater than a preset threshold; determining, from those elements and by means of a bipartite-graph optimal matching algorithm, a set of matching edges, where each matching edge in the set represents one matched pair consisting of a tracking trajectory and the detection box information and attribute information of a face; and updating the existing tracking trajectory information according to the set of matching edges.
Specifically, a previously chosen threshold t is used: for every element A_ij of the similarity matrix that is greater than t, trajectory T_i and face f_j are considered definitely unmatchable; over all possibly matched (trajectory, face) pairs (here "possibly matched" has the same meaning as in the previous clause: A_ij ≤ t), a bipartite-graph optimal matching algorithm is used to obtain an optimal matching scheme.
The output of the bipartite matching algorithm is a set of matching edges; each matching edge represents one matched pair of a trajectory and a face, and it is guaranteed that any face is matched to at most one trajectory and any trajectory is matched to at most one face.
For the embodiment of the present application, a bipartite graph is a special model in graph theory: if G = (V, E) is an undirected graph whose vertex set V can be partitioned into two disjoint subsets (A, B) such that the two vertices i and j associated with every edge (i, j) of the graph belong to the two different vertex sets (i in A, j in B), then the graph G is called a bipartite graph.
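One common way to realize the bipartite-graph optimal matching described above is the Hungarian algorithm; the sketch below uses scipy.optimize.linear_sum_assignment on the similarity matrix and then discards any assignment whose cost exceeds the threshold t. The patent does not prescribe this particular solver, so treat it as one possible choice.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_tracks_to_faces(A, t):
    """Return matched (track_index, face_index) pairs plus the leftovers.

    `A` is the similarity (cost) matrix; `t` is the preset threshold, and
    pairs with A[i, j] > t are treated as definitely unmatchable.
    """
    # Make forbidden pairs prohibitively expensive so the solver avoids them.
    cost = np.where(A > t, 1e9, A)
    rows, cols = linear_sum_assignment(cost)

    matches = [(i, j) for i, j in zip(rows, cols) if A[i, j] <= t]
    matched_tracks = {i for i, _ in matches}
    matched_faces = {j for _, j in matches}
    unmatched_tracks = [i for i in range(A.shape[0]) if i not in matched_tracks]
    unmatched_faces = [j for j in range(A.shape[1]) if j not in matched_faces]
    return matches, unmatched_tracks, unmatched_faces
```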
In another possible implementation of the embodiment of the present application, updating the existing tracking trajectory information includes at least one of step Sa (not shown) and step Sb (not shown), where:
Step Sa: if the face information corresponding to any frame image in the video stream does not contain an existing tracking trajectory, the trajectory that is not contained in the face information corresponding to that frame image is deleted from the existing tracking trajectory information.
Step Sb: if the existing tracking trajectory information does not contain the face information corresponding to any frame image in the video stream, the face information corresponding to that frame image is added to the existing tracking trajectory information.
Here the face information includes the detection box information of a face and the attribute information corresponding to the face.
For the embodiment of the present application, a trajectory that is matched to no face is considered to correspond to a face that has left the picture, so such trajectories are deleted from the original tracking trajectory information; a face that is matched to no trajectory is considered to be a face newly appearing in this frame, so its face information is added to the existing tracking trajectories.
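The delete/add rule of steps Sa and Sb can then be applied to the matching result. The sketch below keeps each trajectory as the dictionary of its most recent observation and omits trajectory IDs and history, so it is a simplification rather than the patent's full update procedure.

```python
def update_tracks(tracks, faces, matches, unmatched_tracks, unmatched_faces):
    """Apply the delete/add rule described above and return the updated tracks."""
    # A matched trajectory takes over the detection box information and
    # attribute information of the face it was matched to in this frame.
    for i, j in matches:
        tracks[i].update(faces[j])

    # Step Sa: trajectories matched to no face are treated as faces that
    # have left the picture and are deleted.
    lost = set(unmatched_tracks)
    kept = [t for i, t in enumerate(tracks) if i not in lost]

    # Step Sb: faces matched to no trajectory are treated as newly appearing
    # faces, and their face information starts new trajectories.
    kept.extend(faces[j] for j in unmatched_faces)
    return kept
```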
The face tracking method has been described in detail above. The method is now summarized through an application scenario, as shown in Fig. 5:
any frame image in the video stream is preprocessed; the preprocessed frame image is passed through the detection network, which outputs the face detection box information in that frame image; the attribute information corresponding to each face is obtained through a face attribute network based on the face detection box information; and the face detection box information of the frame image and the attribute information corresponding to each face are then passed through a face tracking module to obtain the face tracking information.
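Putting the previous sketches together gives a per-frame loop that mirrors the Fig. 5 pipeline: detect, extract attributes, match against existing trajectories, then update them. The model arguments and the threshold value t=10.0 are placeholders for illustration only.

```python
def run_face_tracking(video_path, face_detector, attribute_net, feature_net, t=10.0):
    """Track faces frame by frame following the Fig. 5 pipeline sketched above."""
    tracks = []
    for frame, boxes in detect_faces_in_stream(video_path, face_detector):
        faces = [face_descriptor(frame, box, attribute_net, feature_net)
                 for box in boxes]
        if not tracks:
            tracks = faces   # every face in the first frame starts a trajectory
            continue
        if not faces:
            tracks = []      # no detections: all existing trajectories are unmatched
            continue
        A = similarity_matrix(tracks, faces)
        matches, lost, new = match_tracks_to_faces(A, t)
        tracks = update_tracks(tracks, faces, matches, lost, new)
    return tracks            # state of the trajectories after the last frame
```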
The above embodiment describes the face tracking method from the perspective of the method flow; the face tracking apparatus is described below from the perspective of virtual modules with reference to the drawings, as follows:
An embodiment of the present application provides a face tracking apparatus. As shown in Fig. 2, the face tracking apparatus 20 may include a processing module 21, a determining module 22 and a tracking module 23, where:
the processing module 21 is configured to process at least one frame image in a video stream to obtain detection box information of at least one face;
the determining module 22 is configured to determine, based on the detection box information, attribute information corresponding to the at least one face;
the tracking module 23 is configured to track the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face.
In another possible implementation of the embodiment of the present application, the tracking module 23 includes a matching unit and an updating unit, where:
the matching unit is configured to match the detection box information and the attribute information of the at least one face against existing tracking trajectory information;
the updating unit is configured to update the existing tracking trajectory information based on the matching result of the matching unit, so as to track the at least one face.
In another possible implementation of the embodiment of the present application, the matching unit is specifically configured to compute a similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face;
the updating unit is specifically configured to update the existing tracking trajectory information according to the similarity matrix.
In another possible implementation of the embodiment of the present application, the updating unit is specifically configured to determine the elements of the similarity matrix that are not greater than a preset threshold;
the updating unit is specifically further configured to determine, from the elements of the similarity matrix that are not greater than the preset threshold and by means of a bipartite-graph optimal matching algorithm, a set of matching edges, where each matching edge in the set represents one matched pair consisting of a tracking trajectory and the detection box information and attribute information of a face;
the updating unit is specifically further configured to update the existing tracking trajectory information according to the set of matching edges.
In another possible implementation of the embodiment of the present application, the updating unit is specifically configured to: when the face information corresponding to any frame image in the video stream does not contain an existing tracking trajectory, delete from the existing tracking trajectory information the trajectory that is not contained in the face information corresponding to that frame image; and/or, when the existing tracking trajectory information does not contain the face information corresponding to any frame image in the video stream, add the face information corresponding to that frame image to the existing tracking trajectory information;
where the face information includes the detection box information of a face and the attribute information corresponding to the face.
In another possible implementation of the embodiment of the present application, the attribute information corresponding to any face includes at least one of age information and gender information.
In another possible implementation of the embodiment of the present application, the matching unit is specifically configured to compute any element of the similarity matrix according to a specific formula;
the matching unit is specifically further configured to determine the similarity matrix from the computed elements;
where the specific formula is:
A_ij = (T_i1 - f_j1)² × a + (T_i2 - f_j2)² × b + (T_i3 - f_j3)² × c + (T_i4 - f_j4) × d, where T_i1 is the age information of the face of any one of the existing tracking trajectories and f_j1 is the age information corresponding to a face detection box; T_i2 is the probability that the face of that tracking trajectory is male or female and f_j2 is the probability that the face of the detection box is male or female; T_i3 - f_j3 denotes the Euclidean distance between the center of that tracking trajectory and the center of the face detection box; and T_i4 - f_j4 is the feature distance between that tracking trajectory and the face detection box.
In another possible implementation of the embodiment of the present application, the determining module 22 is specifically configured to output, based on the detection box information and by means of a trained network model, an attribute feature vector corresponding to the at least one face.
An embodiment of the present application provides a face tracking apparatus. Compared with the prior art, the embodiment of the present application processes at least one frame image in a video stream to obtain the detection box information of at least one face, then determines, based on the detection box information, the attribute information corresponding to the at least one face, and then tracks the at least one face based on the detection box information and the corresponding attribute information. In other words, when tracking the at least one face, the embodiment of the present application relies not only on the detection box information in the frame image but also on the attribute information of the face in each detection box, which reduces the probability that the tracking trajectories of multiple faces are swapped when multiple faces are tracked, improves the accuracy of face tracking, and thereby improves user experience.
The face tracking apparatus of this embodiment can execute the face tracking method provided by the above method embodiment; the implementation principles are similar and are not repeated here.
The above embodiments describe the face tracking method from the perspective of the method flow and the face tracking apparatus from the perspective of virtual modules; an electronic device that executes the face tracking method is described below from the perspective of a physical apparatus with reference to the drawings, as follows:
An embodiment of the present application provides an electronic device. As shown in Fig. 3, the electronic device 3000 includes a processor 3001 and a memory 3003, where the processor 3001 is connected to the memory 3003, for example via a bus 3002. Optionally, the electronic device 3000 may also include a transceiver 3004. It should be noted that in practical applications the transceiver 3004 is not limited to one, and the structure of the electronic device 3000 does not constitute a limitation on the embodiment of the present application.
The processor 3001 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules and circuits described in connection with the present disclosure. The processor 3001 may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 3002 may include a path that transfers information between the above components. The bus 3002 may be a PCI bus, an EISA bus, or the like. The bus 3002 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in Fig. 3, but this does not mean that there is only one bus or only one type of bus.
The memory 3003 may be a ROM or another type of static storage device capable of storing static information and instructions, a RAM or another type of dynamic storage device capable of storing information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 3003 is used to store the application program code for executing the solution of the present application, and execution is controlled by the processor 3001. The processor 3001 is configured to execute the application program code stored in the memory 3003 to implement the content shown in any of the foregoing method embodiments.
An embodiment of the present application provides an electronic device. The electronic device in the embodiment of the present application includes a memory and a processor; the memory stores at least one program which, when executed by the processor, achieves the following compared with the prior art: the embodiment of the present application processes at least one frame image in a video stream to obtain the detection box information of at least one face, then determines, based on the detection box information, the attribute information corresponding to the at least one face, and then tracks the at least one face based on the detection box information and the corresponding attribute information. In other words, when tracking the at least one face, the embodiment of the present application relies not only on the detection box information in the frame image but also on the attribute information of the face in each detection box, which reduces the probability that the tracking trajectories of multiple faces are swapped when multiple faces are tracked, improves the accuracy of face tracking, and thereby improves user experience.
The electronic device of this embodiment can execute the face tracking method provided by the above method embodiment; the implementation principles are similar and are not repeated here.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program runs on a computer, the computer is enabled to execute the corresponding content in the foregoing method embodiments. Compared with the prior art, the embodiment of the present application processes at least one frame image in a video stream to obtain the detection box information of at least one face, then determines, based on the detection box information, the attribute information corresponding to the at least one face, and then tracks the at least one face based on the detection box information and the corresponding attribute information. In other words, when tracking the at least one face, the embodiment of the present application relies not only on the detection box information in the frame image but also on the attribute information of the face in each detection box, which reduces the probability that the tracking trajectories of multiple faces are swapped when multiple faces are tracked, improves the accuracy of face tracking, and thereby improves user experience.
The computer-readable storage medium of this embodiment is suitable for the face tracking method provided by the above method embodiment; the implementation principles are similar and are not repeated here.
It should be understood that although the steps in the flowcharts of the drawings are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the drawings may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
The above are only some embodiments of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A face tracking method, comprising:
processing at least one frame image in a video stream to obtain detection box information of at least one face;
determining, based on the detection box information, attribute information corresponding to the at least one face;
tracking the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face.
2. The method according to claim 1, wherein tracking the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face comprises:
matching the detection box information of the at least one face and the attribute information of the at least one face against existing tracking trajectory information;
updating the existing tracking trajectory information based on a matching result, so as to track the at least one face.
3. The method according to claim 2, wherein matching the detection box information of the at least one face and the attribute information of the at least one face against the existing tracking trajectory information, and updating the existing tracking trajectory information based on the matching result, comprises:
computing a similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face;
updating the existing tracking trajectory information according to the similarity matrix.
4. The method according to claim 3, wherein updating the existing tracking trajectory information according to the similarity matrix comprises:
determining elements of the similarity matrix that are not greater than a preset threshold;
determining, from the elements of the similarity matrix that are not greater than the preset threshold and by means of a bipartite-graph optimal matching algorithm, a set of matching edges, wherein each matching edge in the set represents one matched pair consisting of a tracking trajectory and the detection box information and attribute information of a face;
updating the existing tracking trajectory information according to the set of matching edges.
5. The method according to any one of claims 2 to 4, wherein updating the existing tracking trajectory information includes at least one of the following:
if face information corresponding to any frame image in the video stream does not contain an existing tracking trajectory, deleting from the existing tracking trajectory information the tracking trajectory that is not contained in the face information corresponding to that frame image;
if the existing tracking trajectory information does not contain face information corresponding to any frame image in the video stream, adding the face information corresponding to that frame image to the existing tracking trajectory information;
wherein the face information includes detection box information of a face and attribute information corresponding to the face.
6. The method according to any one of claims 1 to 5, wherein the attribute information corresponding to any face includes at least one of the following:
age information; gender information.
7. The method according to claim 3 or 4, wherein computing the similarity matrix from the existing tracking trajectory information and the detection box information and attribute information of the at least one face comprises:
computing any element of the similarity matrix according to a specific formula;
determining the similarity matrix from the computed elements;
wherein the specific formula is:
A_ij = (T_i1 - f_j1)² × a + (T_i2 - f_j2)² × b + (T_i3 - f_j3)² × c + (T_i4 - f_j4) × d, wherein T_i1 is age information of the face of any one of the existing tracking trajectories and f_j1 is age information corresponding to a face detection box; T_i2 is probability information that the face of that tracking trajectory is male or female and f_j2 is probability information that the face of the detection box is male or female; T_i3 - f_j3 denotes a Euclidean distance between a center of that tracking trajectory and a center of the face detection box; and T_i4 - f_j4 is a feature distance between that tracking trajectory and the face detection box.
8. The method according to any one of claims 1 to 7, wherein determining, based on the detection box information, the attribute information corresponding to the at least one face comprises:
outputting, based on the detection box information and by means of a trained network model, an attribute feature vector corresponding to the at least one face.
9. A face tracking apparatus, comprising:
a processing module configured to process at least one frame image in a video stream to obtain detection box information of at least one face;
a determining module configured to determine, based on the detection box information, attribute information corresponding to the at least one face;
a tracking module configured to track the at least one face based on the detection box information of the at least one face and the attribute information corresponding to the at least one face.
10. An electronic device, comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the face tracking method according to any one of claims 1 to 8.
11. A computer-readable storage medium, wherein the storage medium stores at least one instruction, at least one program segment, a code set or an instruction set, and the at least one instruction, the at least one program segment, the code set or the instruction set is loaded and executed by a processor to implement the face tracking method according to any one of claims 1 to 8.
CN201910262510.9A 2019-04-02 2019-04-02 Face tracking method and device, electronic equipment and computer readable storage medium Active CN110009662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910262510.9A CN110009662B (en) 2019-04-02 2019-04-02 Face tracking method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910262510.9A CN110009662B (en) 2019-04-02 2019-04-02 Face tracking method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110009662A (en) 2019-07-12
CN110009662B (en) 2021-09-17

Family

ID=67169613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910262510.9A Active CN110009662B (en) 2019-04-02 2019-04-02 Face tracking method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110009662B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427905A (en) * 2019-08-08 2019-11-08 北京百度网讯科技有限公司 Pedestrian tracting method, device and terminal
CN111178217A (en) * 2019-12-23 2020-05-19 上海眼控科技股份有限公司 Method and equipment for detecting face image
CN111862624A (en) * 2020-07-29 2020-10-30 浙江大华技术股份有限公司 Vehicle matching method and device, storage medium and electronic device
CN113034548A (en) * 2021-04-25 2021-06-25 安徽科大擎天科技有限公司 Multi-target tracking method and system suitable for embedded terminal

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video
CN103632126A (en) * 2012-08-20 2014-03-12 华为技术有限公司 Human face tracking method and device
US20140191948A1 (en) * 2013-01-04 2014-07-10 Samsung Electronics Co., Ltd. Apparatus and method for providing control service using head tracking technology in electronic device
CN105488478A (en) * 2015-12-02 2016-04-13 深圳市商汤科技有限公司 Face recognition system and method
CN107316322A (en) * 2017-06-27 2017-11-03 上海智臻智能网络科技股份有限公司 Video tracing method and device and object identifying method and device
CN107851192A (en) * 2015-05-13 2018-03-27 北京市商汤科技开发有限公司 For detecting the apparatus and method of face part and face
CN108230352A (en) * 2017-01-24 2018-06-29 北京市商汤科技开发有限公司 Detection method, device and the electronic equipment of target object
CN108491832A (en) * 2018-05-21 2018-09-04 广西师范大学 A kind of embedded human face identification follow-up mechanism and method
CN108932456A (en) * 2017-05-23 2018-12-04 北京旷视科技有限公司 Face identification method, device and system and storage medium
US20190050629A1 (en) * 2017-08-14 2019-02-14 Amazon Technologies, Inc. Selective identity recognition utilizing object tracking
CN109522843A (en) * 2018-11-16 2019-03-26 北京市商汤科技开发有限公司 A kind of multi-object tracking method and device, equipment and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306290A (en) * 2011-10-14 2012-01-04 刘伟华 Face tracking recognition technique based on video
CN103632126A (en) * 2012-08-20 2014-03-12 华为技术有限公司 Human face tracking method and device
US20140191948A1 (en) * 2013-01-04 2014-07-10 Samsung Electronics Co., Ltd. Apparatus and method for providing control service using head tracking technology in electronic device
CN107851192A (en) * 2015-05-13 2018-03-27 北京市商汤科技开发有限公司 For detecting the apparatus and method of face part and face
CN105488478A (en) * 2015-12-02 2016-04-13 深圳市商汤科技有限公司 Face recognition system and method
CN108230352A (en) * 2017-01-24 2018-06-29 北京市商汤科技开发有限公司 Detection method, device and the electronic equipment of target object
CN108932456A (en) * 2017-05-23 2018-12-04 北京旷视科技有限公司 Face identification method, device and system and storage medium
CN107316322A (en) * 2017-06-27 2017-11-03 上海智臻智能网络科技股份有限公司 Video tracing method and device and object identifying method and device
US20190050629A1 (en) * 2017-08-14 2019-02-14 Amazon Technologies, Inc. Selective identity recognition utilizing object tracking
CN108491832A (en) * 2018-05-21 2018-09-04 广西师范大学 A kind of embedded human face identification follow-up mechanism and method
CN109522843A (en) * 2018-11-16 2019-03-26 北京市商汤科技开发有限公司 A kind of multi-object tracking method and device, equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HAO JI et al.: "Multiple faces tracking based on joint kernel density estimation and robust feature descriptors", 2009 IEEE International Conference on Network Infrastructure and Digital Content *
RANGANATHA S et al.: "Color Based New Algorithm for Detection and Single/Multiple Person Face Tracking in Different Background Video Sequence", I.J. Information Technology and Computer Science *
杨家琳 (Yang Jialin): "基于多目标跟踪的非头肩人脸跟踪研究" [Research on non-head-shoulder face tracking based on multi-object tracking], China Master's Theses Full-text Database, Information Science and Technology Series *
王蓉等 (Wang Rong et al.): "基于OpenCV的人脸检测与跟踪方法实现" [Implementation of a face detection and tracking method based on OpenCV], Science Technology and Engineering *
赵曦敏 (Zhao Ximin): "视频中的人脸检测与跟踪算法研究" [Research on face detection and tracking algorithms in video], China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427905A (en) * 2019-08-08 2019-11-08 北京百度网讯科技有限公司 Pedestrian tracting method, device and terminal
CN111178217A (en) * 2019-12-23 2020-05-19 上海眼控科技股份有限公司 Method and equipment for detecting face image
CN111862624A (en) * 2020-07-29 2020-10-30 浙江大华技术股份有限公司 Vehicle matching method and device, storage medium and electronic device
CN111862624B (en) * 2020-07-29 2022-05-03 浙江大华技术股份有限公司 Vehicle matching method and device, storage medium and electronic device
CN113034548A (en) * 2021-04-25 2021-06-25 安徽科大擎天科技有限公司 Multi-target tracking method and system suitable for embedded terminal

Also Published As

Publication number Publication date
CN110009662B (en) 2021-09-17

Similar Documents

Publication Publication Date Title
Zou et al. Object detection in 20 years: A survey
Maninis et al. Video object segmentation without temporal information
CN110009662A (en) Method, apparatus, electronic equipment and the computer readable storage medium of face tracking
He et al. Connected component model for multi-object tracking
Li et al. Instance-level salient object segmentation
Zang et al. Attention-based temporal weighted convolutional neural network for action recognition
CN110853033B (en) Video detection method and device based on inter-frame similarity
Li et al. Depthwise nonlocal module for fast salient object detection using a single thread
CN111783713B (en) Weak supervision time sequence behavior positioning method and device based on relation prototype network
CN111052128B (en) Descriptor learning method for detecting and locating objects in video
Nguyen et al. Few-shot object counting and detection
Guo et al. Fully convolutional network for multiscale temporal action proposals
Liu et al. Integrating part-object relationship and contrast for camouflaged object detection
Ma et al. CapsuleRRT: Relationships-aware regression tracking via capsules
Masi et al. Towards learning structure via consensus for face segmentation and parsing
Vo et al. Active learning strategies for weakly-supervised object detection
CN113392864A (en) Model generation method, video screening method, related device and storage medium
CN111027555A (en) License plate recognition method and device and electronic equipment
CN111241326B (en) Image visual relationship indication positioning method based on attention pyramid graph network
CN112347965A (en) Video relation detection method and system based on space-time diagram
CN116958267A (en) Pose processing method and device, electronic equipment and storage medium
Mucha et al. Depth and thermal images in face detection-a detailed comparison between image modalities
CN116363565A (en) Target track determining method and device, electronic equipment and storage medium
Dai et al. OAMatcher: An overlapping areas-based network with label credibility for robust and accurate feature matching
Kavitha et al. Novel Fuzzy Entropy Based Leaky Shufflenet Content Based Video Retrival System

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant