CN103310204B - Face tracking method based on incremental principal component analysis with mutual matching of features and model - Google Patents

Face tracking method based on incremental principal component analysis with mutual matching of features and model

Info

Publication number
CN103310204B
CN103310204B (application CN201310267907.XA)
Authority
CN
China
Prior art keywords
face
model
point
facial image
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310267907.XA
Other languages
Chinese (zh)
Other versions
CN103310204A (en)
Inventor
吴怀宇
潘春洪
陈艳琴
赵两可
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201310267907.XA priority Critical patent/CN103310204B/en
Publication of CN103310204A publication Critical patent/CN103310204A/en
Application granted granted Critical
Publication of CN103310204B publication Critical patent/CN103310204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a face tracking method based on online incremental principal component analysis in which features and a model are matched against each other. The method comprises the following steps: performing offline modeling on a number of face images to obtain a Constrained Local Model (CLM) model A; performing key point detection on each frame of the face video to be tracked, the set of all key points together with their robust descriptors forming a key point model B; performing key point matching on each frame of the face video to be tracked based on the key point model B to obtain an initial face pose parameter group for each frame of face image; performing CLM face tracking on the face video to be tracked using model A; tracking again based on the initial face pose parameter group and the result of the first tracking; and updating model A and repeating the above steps to obtain the final face tracking result. The invention solves the problem of lost tracking that occurs in CLM face tracking when adjacent frames of the target image change greatly, thereby improving tracking precision.

Description

Face tracking method based on incremental principal component analysis with mutual matching of features and model
Technical field
The present invention relates to the technical field of computer graphics and image processing, and in particular to a highly robust face tracking method based on online incremental principal component analysis in which features and a model are matched against each other.
Background technology
In recent years, computer vision technology has made significant progress, and image recognition and tracking have become popular research directions in the computer field. Robust real-time face tracking is a core problem in fields such as intelligent video surveillance, vision-based human-computer interaction and robot navigation. The technology is applied in many areas such as video conferencing, criminal investigation, access control, financial payment and medical applications. The face is a non-rigid object; during motion, changes in its size and shape all affect the tracking result, so real-time face tracking remains a challenge in the computer vision field.
At present, face tracking techniques can be broadly divided into three classes: tracking based on feature matching, tracking based on region matching, and tracking based on model matching.
Tracking based on feature matching: this method tracks a moving target in an image sequence and includes two processes, feature extraction and feature matching. In feature extraction, suitable tracking features are selected and then extracted in the next frame of the sequence; in feature matching, the features extracted in the current frame are compared with those of the previous frame, or with a feature template of the target object, and the comparison result determines whether the object corresponds, thereby completing the tracking. However, feature points may become invisible due to occlusion or illumination changes, which causes tracking to fail; this is the drawback of tracking based on feature matching.
Tracking based on region matching: this method uses common characteristic information of the connected region of the target object in the image as the tracking detection value. Multiple kinds of region information can be used in successive images. Tracking based on region matching cannot adjust the tracking result according to the global shape of the target, so during long continuous tracking, accumulated errors easily cause the track to be lost.
Tracking based on model matching: this method represents the target object to be tracked by building a model, and then tracks this model in the image sequence to achieve the tracking purpose. There are currently two main kinds of deformable models. One is the free-form deformable model: as long as some simple regularization constraints (such as continuity, smoothness, etc.) are satisfied, it can be used to track target objects of arbitrary shape; this kind of method is also commonly called the active contour model. The other is the parametric deformable model, which uses a parametric equation, or an original shape together with a deformation formula, to describe the shape of the target object.
Therefore, current mainstream face tracking techniques still cannot trace the face accurately while guaranteeing robustness.
Summary of the invention
In order to solve the problems of the prior art, an object of the present invention is to provide a highly robust face tracking technique.
To achieve this purpose, the present invention proposes a highly robust face tracking method based on online incremental principal component analysis in which features and a model are matched against each other. The method combines tracking based on feature matching (key point matching) with tracking based on the Constrained Local Model (CLM), and introduces online incremental principal component learning, so that the CLM model A and the key point model B are matched against each other and updated in real time. In this way both detection precision and robustness are well guaranteed, and the face tracking problem under large viewing angles can be solved.
The highly robust face tracking method based on online incremental principal component analysis, in which features and a model are matched against each other, comprises the following steps:
Step S1, performing offline modeling on a number of face images to obtain a Constrained Local Model (CLM) model A that includes a shape model s and a texture model w^T;
Step S2, inputting a face video to be tracked, performing key point detection on each frame of face image of the face video to be tracked, and taking the set of all obtained key points together with the robust descriptors of these key points as a key point model B;
Step S3, based on the key point model B obtained in step S2, performing key point matching on each frame of face image of the face video to be tracked to obtain the initial face pose parameter group (R, T) in each frame of face image, where R denotes the angle parameter and T denotes the displacement parameter;
Step S4, performing CLM face tracking on the face video to be tracked using model A to obtain the positions of the feature points in each frame of face image of the face video to be tracked;
Step S5, tracking the face again in each frame of face image of the face video to be tracked, based on the face pose parameter group of each frame obtained in step S3 and the positions of the feature points in each frame obtained by the tracking in step S4;
Step S6, updating model A with the incremental PCA method, and repeating steps S1-S5 with the updated model A to obtain the final face tracking result.
The beneficial effect of the invention is as follows: the present invention combines tracking based on feature matching (key point matching) with face tracking based on model matching (CLM), and introduces online incremental learning, so that the CLM model A and the key point model B are matched against each other and updated in real time; detection precision and robustness are thus well guaranteed, and the method can solve the face tracking problem under large viewing angles.
Brief description of the drawings
Fig. 1 is a flow chart of the face tracking method based on online incremental principal component learning, in which features and a model are matched against each other, according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the tracking result of a frontal face using the method of the invention;
Fig. 3 is a schematic diagram of the tracking result of a face with small-angle rotation using the method of the invention;
Fig. 4 is a schematic diagram of the tracking result of a face with large-angle rotation using the method of the invention.
Detailed description of the invention
To make the object, technical solutions and advantages of the present invention clearer, the invention is described below in more detail with reference to specific embodiments and the accompanying drawings.
Fig. 1 is the flow chart of the face tracking method of the present invention based on online incremental principal component analysis, in which features and a model are matched against each other. As shown in Fig. 1, the method comprises the following steps:
Step S1, performing offline modeling on a number of face images to obtain a Constrained Local Model (CLM, Constrained Local Model) model A;
The CLM model A includes a shape model s and a texture model w^T; therefore, in this step, obtaining the CLM model A further includes the following steps:
Step S11, calibrating the face images according to a pre-determined common facial contour to obtain multiple calibration feature points, and building a face shape model s from the coordinate values of these calibration feature points;
In a CLM model A, the shape is defined as a mesh composed of a series of vertex positions, so a face shape vector s_m can be defined with the coordinates of a series of vertices:
s_m = (x_1, y_1, x_2, y_2, ..., x_n, y_n)^T    (1)
where x_i, y_i are the coordinate values of the i-th vertex in the corresponding face image, and n is the number of vertices actually used, which can be set to 66, 88, etc.
The coordinates of the vertices are calibrated manually according to the pre-determined common facial contour, and the vertices are also called calibration feature points. Specifically, step S11 further includes the following steps:
Step S111, collecting N face images in advance, where N is an integer much larger than n. Every face image is calibrated manually according to the common facial contour, thereby obtaining multiple calibration feature points; the common facial contour includes the eyes, nose, mouth and the outer contour of the face. N face shape vectors s_m can then be obtained according to formula (1), where m denotes the m-th face image among the N face images.
Step S112, having obtained the face shape vectors s_m, the face shape model s can be represented by a linear combination of a mean face shape s_0 and u orthogonal face shape vectors s_i, that is:
s = s_0 + Σ_{i=1}^{u} p_i s_i    (2)
where p_i is a shape parameter, s_0 is the mean face shape, and s_i is a variation relative to the mean face shape. p_i, s_0 and s_i are obtained by performing Principal Component Analysis (PCA) on the N collected face shape vectors s_m: s_0 is the mean of the N face shape vectors s_m, m = 1, ..., N, and p_i is the weight corresponding to the i-th of the u eigenvectors s_i obtained from the principal component analysis. It should be noted that before the principal component analysis, a Procrustes analysis is applied to each of the N face shape vectors s_m to reduce errors of rotation, scale, translation and the like; Procrustes analysis is a commonly used analysis method in the prior art and is not repeated here.
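Purely as an illustration (not code from the patent), the offline shape modeling of steps S111-S112 could be sketched as follows in Python/NumPy; the helper names and the simplified translation-and-scale-only Procrustes alignment are assumptions of this sketch.

```python
import numpy as np

def procrustes_align(shapes):
    """Roughly align each shape vector s_m (length 2n) by removing
    translation and scale; a minimal stand-in for the Procrustes
    analysis mentioned in step S112 (rotation is not handled here)."""
    aligned = []
    for s in shapes:
        pts = s.reshape(-1, 2)
        pts = pts - pts.mean(axis=0)        # remove translation
        pts = pts / np.linalg.norm(pts)     # remove scale
        aligned.append(pts.ravel())
    return np.asarray(aligned)

def build_shape_model(shapes, u):
    """Build the PCA shape model of formula (2) from the N calibrated
    shape vectors s_m; returns the mean shape s_0 and the u leading
    shape eigenvectors s_i (one per row)."""
    aligned = procrustes_align(shapes)
    s0 = aligned.mean(axis=0)
    X = aligned - s0
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    return s0, Vt[:u]

def synthesize_shape(s0, S, p):
    """Formula (2): s = s_0 + sum_i p_i * s_i."""
    return s0 + S.T @ p
```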
Step S12, based on each calibration feature point obtained in step S11, learning a texture model w^T that captures the texture characteristics of a region of a certain size corresponding to each calibration feature point.
The texture model can be built in various ways. In an embodiment of the present invention a Support Vector Machine (SVM) is used to build the texture model; the specific process includes:
Step S121, for each face image obtained in step S11, taking the region of size r × r centered at each calibration feature point as a positive sample, and taking regions of size r × r intercepted at arbitrary other positions in the same image as negative samples. For the N face images this yields, for each calibration feature point with the same meaning (for example, the calibration feature point of the left eye corner in different face images is considered the calibration feature point with the same meaning), N positive samples and multiple negative samples;
Step S122, based on the sample group corresponding to each calibration feature point, using the support vector machine (SVM) to obtain the texture model w^T corresponding to each calibration feature point.
In this step, each sample in the sample group corresponding to each calibration feature point (including the positive samples and the negative samples) is first written in mathematical form:
x^{(i)} = [x_1^{(i)}, x_2^{(i)}, ..., x_{r×r}^{(i)}]^T    (3)
where (i) denotes the index of the sample, and x_j^{(i)} is the pixel value at a certain position in the sample.
Then the SVM is used to obtain the texture model w^T corresponding to each calibration feature point.
For the SVM, the learning process can be expressed as:
y^{(i)} = w^T · x^{(i)} + θ    (4)
where y^{(i)} is the output of the SVM, w^T is the learned texture model, w^T = [w_1 w_2 ... w_{r×r}], and θ is the offset of the SVM; for the positive samples corresponding to each calibration feature point y^{(i)} = 1, and for the negative samples y^{(i)} = 0.
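As an illustrative sketch only of step S122 (not the patent's implementation), a linear SVM could be trained per calibration feature point as follows; the use of scikit-learn's LinearSVC and the patch-extraction helper are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def extract_patch(image, cx, cy, r):
    """Flatten the r x r patch centered at (cx, cy) into the sample
    vector x^(i) of formula (3)."""
    half = r // 2
    patch = image[cy - half:cy - half + r, cx - half:cx - half + r]
    return patch.astype(np.float32).ravel()

def learn_texture_model(pos_patches, neg_patches):
    """Learn the texture model w^T of formula (4) for one calibration
    feature point from its positive and negative sample patches."""
    X = np.vstack([pos_patches, neg_patches])
    y = np.hstack([np.ones(len(pos_patches)), np.zeros(len(neg_patches))])
    svm = LinearSVC(C=1.0)
    svm.fit(X, y)
    w = svm.coef_.ravel()        # w^T = [w_1 w_2 ... w_{r*r}]
    theta = svm.intercept_[0]    # offset θ
    return w, theta
```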
Next comes the construction of the key point model B and the offline key point matching; this part includes steps S2-S3. Its main purpose is to obtain stable and effective key points: by simulating various deformations of the face surface, key points that are as robust as possible to these deformations are learned, giving stable key point matching.
Step S2, inputting a face video to be tracked, performing key point detection on each frame of face image of the face video to be tracked, and taking the set of all obtained key points together with the robust descriptors of these key points as a key point model B;
A common approach to key point detection is to identify and detect the key points in the image. Compared with straight or curved line segments and blobs, the advantage of using key points to identify image information is that good matching can still be achieved under crowding (occlusion) and large changes of scale and orientation.
Because of the characteristics of faces, the learning of key points also has several problems: 1. different angles, expressions and illumination changes produce geometric and photometric distortions of the face shape; 2. the face has less texture than the background, so discrimination is difficult (there are few key points); 3. the position estimation of key points in 3D is not accurate enough. However, using invariant feature points as key points can effectively solve the above problems. Specifically, the 3D key point set detected using an existing 3D face model and the 2D key point set detected using multi-view images are used to simulate deformations, so as to find stable 2D key points.
The key point detection step comprises the learning of the key points and the learning of their robust descriptors; the learning of robust descriptors guarantees the stability of the key points.
The learning of the key points further includes the following steps:
Step S21, for each frame of the face image sequence, using a key point computation method commonly used in the prior art, such as the FAST algorithm, to obtain multiple key points by initial computation;
Step S22, selecting the key points with invariance from the multiple key points obtained in step S21, and combining the invariant key points of all images of the face video to be tracked into a set of key points and the descriptors (f_i, x_i, y_i) of these key points, where f_i denotes the feature value of the i-th key point and (x_i, y_i) denotes the coordinates of this key point;
An invariant key point is a key point that remains a key point after pose rotation, expression change and/or illumination variation. In this step the changes of the parameter set (Pose, Shape, Light) are used to simulate the pose rotation, expression change and illumination variation of the face, where Pose refers to small-range pose rotation and the partial occlusion it brings, Shape refers to the change of the non-rigid facial expression, and Light refers to complex changes of lighting such as shadows. Let W(k_0; Pose, Shape, Light) denote the position of the point k obtained from a certain key point k_0 of the image I_0 under the above three kinds of transformations. If the key point k_0 is still detected as a key point after the above transformations and satisfies the following formula, the key point k_0 is considered invariant to the above transformations:
|F_{k0} − F_k| < t    (5)
where F_{k0} is the feature value of the key point k_0, F_k is the feature value of the point k, and t is the upper limit of the allowed error.
The key point descriptors in the key point set obtained so far are not yet robust; robust descriptors of the key points are obtained next by a learning method. In an embodiment of the present invention, the learning of the robust key point descriptors uses incremental learning. As stated above, key points, as the main recognition factor for detecting images, should be invariant (for example to orientation, scale, rotation, etc.). In many cases, however, the local appearance of a key point may change in orientation and scale, and sometimes even undergoes affine deformation. Therefore, to match key points more accurately, they must be described in an effectively discriminative way: estimates of the local orientation, scale and rotation are extracted to form a descriptor. SIFT, for example, is one usable descriptor; with such a descriptor the key point can be resampled.
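For illustration only, steps S21-S22 could be prototyped with OpenCV's FAST detector and SIFT descriptor; the choice of OpenCV and the exact API calls are assumptions, not part of the patent.

```python
import cv2

def detect_keypoints_with_descriptors(gray_frame):
    """Step S21 sketch: initial key points via FAST, then one SIFT
    descriptor per key point, yielding tuples (f_i, x_i, y_i)."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    keypoints = fast.detect(gray_frame, None)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.compute(gray_frame, keypoints)
    return [(f, kp.pt[0], kp.pt[1]) for kp, f in zip(keypoints, descriptors)]
```

The invariance test of formula (5) would then be applied by re-running this detection on the deformed images and keeping only the key points that reappear with a similar feature value.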
The learning of the robust key point descriptors comprises the following steps:
Step S23, performing key point detection on each frame of the face video to be tracked as described in steps S21 and S22 to obtain n invariant key points, these n invariant key points forming an initial key point set A;
Step S24, applying a certain parameter transformation to the face in the image and simulating the transformed image, where the parameter is shape, pose or light;
Step S25, detecting multiple invariant key points on the transformed image as described in steps S21 and S22, these invariant key points forming a key point set B;
Step S26, for each point p in the key point set B, performing a descriptor matching operation with the key point set A:
In the key point set A, find the point q closest to the position of point p, and back-project point p and point q onto the surface of the frontal 3D model as 3D points p' and q'; then judge whether p' and q' are the same 3D point. If p' and q' are the same 3D point and the descriptor of p is closest to a descriptor of q, the descriptor is valid and the descriptor of p is added to the descriptors of q, so q has one more descriptor. If p' and q' are the same 3D point but the descriptor of p is closest to a descriptor of some other point x in set A (not q), the point q and that descriptor are invalid. If p' and q' are the same 3D point and the descriptor of p differs from the descriptors of every point in set A, the descriptor has been wrongly classified as background, so p and its descriptor are added to set A. If p' and q' are not the same 3D point and the descriptor of p is very close to some descriptor of a point s in set A, this shows that point s easily causes mismatches, so point s and its descriptors are removed from set A. If p' and q' are not the same 3D point and the descriptor of p differs from the descriptors of all key points in set A, the point p and its descriptor are added to set A.
Step S27, repeating steps S24 to S26 for images under other different parameter transformations, finally obtaining the complete robust key point descriptors of each frame of the face video to be tracked.
Once all the key point sets and robust descriptors have been obtained, the key point model B is obtained.
Step S3, based on the key point model B obtained in step S2, performing key point matching on each frame of face image of the face video to be tracked to obtain the initial face pose parameter group in each frame of face image, where the pose parameter group includes the angle parameter R and the displacement parameter T: (R, T);
The key point matching process is a differencing process; generally the objects of the matching are image sequences in which the change between consecutive frames is relatively small.
Specifically, step S3 further includes the following steps:
Step S31, for a certain frame of face image of the face video, obtaining the key points of the previous face image frame according to step S2, and searching in this frame for each key point of the previous frame near the position corresponding to that key point in the current frame;
Step S32, matching the descriptors of the key points in the current frame with the descriptors in the key point model B. The 3D key points detected in the current frame with the existing 3D face model that match descriptors in the key point model B form a set V, and the 2D key points in the current frame that match descriptors in the key point model B form a set u. By choosing a pose parameter group (R, T), and using the intrinsic parameter K of the camera (which can be determined in advance by a calibration method), the set V is projected onto the image plane to obtain a 2D key point set u'. Comparing u' and u, the initial pose parameter group (R, T) of the face in this frame relative to the frontal face is obtained by minimizing ||u − u'||:
(R, T)* = argmin_{(R, T)} Σ_{i=1}^{N_k} || K [R | T] V_i − u_i ||²_2    (6)
where K is the camera parameter matrix, R is the angle matrix, T is the displacement vector, [R | T] is the augmented matrix composed of R and T, V_i denotes the 3D key points of set V that match descriptors in the key point model B, u_i denotes the 2D key points of set u that match descriptors in the key point model B, i is the index of the key point, and N_k is the maximum of the numbers of key points in set V and set u.
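As a sketch only: the minimization of formula (6) is the classical perspective-n-point problem, and under the assumption that OpenCV is available it could be solved as follows (the API choice is an assumption, not part of the patent).

```python
import cv2
import numpy as np

def estimate_initial_pose(points_3d, points_2d, camera_matrix):
    """Formula (6): find (R, T) minimizing the reprojection error between
    the matched 3D key points V_i and 2D key points u_i.
    points_3d: (N, 3) array; points_2d: (N, 2) array; camera_matrix: K."""
    ok, rvec, tvec = cv2.solvePnP(points_3d.astype(np.float32),
                                  points_2d.astype(np.float32),
                                  camera_matrix.astype(np.float32), None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # angle parameters R as a rotation matrix
    return R, tvec               # the pose parameter group (R, T)
```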
Next comes the CLM face tracking, which is mainly realized by step S4.
Step S4, performing CLM face tracking on the face video to be tracked using model A to obtain the positions of the feature points in each frame of face image of the face video to be tracked, and further, based on the initial face pose parameter group obtained in step S3, obtaining the corrected face pose parameter group of each frame of face image of the face video to be tracked;
This step realizes the tracking of the face feature points by a fitting operation. Fitting is in fact the process of adjusting the parameters of a model to obtain an instance model and making this instance model match a newly input picture; this is an energy minimization problem.
Step S4 further includes the following steps:
Step S41, performing face detection on a certain current frame of face image of the face video to be tracked, obtaining n initial feature points, and correspondingly obtaining the response image R(x, y) of each feature point;
In this step, the Viola-Jones method commonly used in the prior art is first applied to this frame of face image to detect the face and obtain a small face region; then a face contour model is initialized in this face region, and this model can be the s_0 mentioned above. According to the initialized face contour model, the n initial feature points of this frame of face image are then obtained.
The response image R(x, y) of each feature point is the result of w^T · x^{(i)} after matrix reshaping, where w^T is the texture model of this feature point obtained with the SVM, w^T = [w_1 w_2 ... w_{r×r}], and x^{(i)} is the i-th sample of size r*r of this feature point. Thus the response image R(x, y) has size r*r; in fact, the response image is equivalent to the result obtained by filtering the sample with the texture model w^T.
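Sketch only: the response image of one feature point could be computed by correlating the learned filter w with the search region around the current estimate; the patch handling and the SciPy correlation routine are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import correlate2d

def response_image(image, cx, cy, w, r, a):
    """Response map of one feature point: correlate the learned texture
    model w (reshaped to r x r) over the (r+a) x (r+a) search region
    centered at (cx, cy), cf. steps S41 and S421."""
    half = (r + a) // 2
    region = image[cy - half:cy - half + r + a,
                   cx - half:cx - half + r + a].astype(np.float32)
    kernel = np.asarray(w, dtype=np.float32).reshape(r, r)
    return correlate2d(region, kernel, mode='valid')  # (a+1) x (a+1) map
```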
Step S42, using the response image R(x, y), obtaining by fitting the positions of the feature points in each frame of face image of the face video to be tracked that have the same meaning as those in the current frame of face image;
The fitting further includes the following steps:
Step S421, performing a search of range a × a over the region of size r × r centered at each of the feature points obtained in step S41; for each feature point, this gives a square region centered at it with side length (r + a) (that is, the response image in the fitting procedure covers this square region, whereas during learning the range of a sample is the r × r size);
Step S422, with the feature points of the current frame known, finding by function fitting, in the square regions of the next or previous frame of the current frame, the coordinate positions of the feature points that have the same meaning as those of the current frame of face image.
For the fitting, it is necessary to find, in the next or previous frame, the function parameters that minimize the mean square error between the positive sample of each feature point obtained by fitting and the corresponding response image R(x, y). In order that this optimum be global rather than local, in an embodiment of the invention a quadratic function is used for the fitting; it is then necessary to find the function parameters a', b', c' of the function (7) that minimize the mean square error of the objective function (8). In an embodiment of the invention the optimal values of a', b', c' can be found by the method of quadratic programming.
R(x, y) = a'(x − x_0)² + b'(y − y_0)² + c'    (7)
ε = Σ_{x,y} [R(x, y) − r(x, y)]²    (8)
where r(x, y) is the positive sample of a certain feature point obtained by fitting; since the positive sample of a feature point is centered at that feature point, obtaining the positive sample also gives the coordinate position of the feature point.
In the actual fitting process, it may happen that a feature point obtained by fitting that satisfies the above objective function is not actually a face feature point. To avoid such misjudgments, the present invention introduces a position constraint on the feature points into the above fitting process, i.e. the positions (x_i, y_i) of the fitted feature points should satisfy formula (9), where i denotes the i-th feature point:
f(x, y) = Σ_{i=1}^{n} r_i(x_i, y_i) − β Σ_{j=1}^{k} p_j² / e_j    (9)
where f(x, y) denotes the objective over the coordinates of the fitted feature points, n is the number of feature points, k is the number of eigenvectors obtained after the principal component analysis (PCA) of the N face shape vectors s_m mentioned in step S1, p_j denotes the weight of the corresponding eigenvector, e_j denotes the eigenvalue corresponding to the eigenvector, and β is a manually set weight.
To save computation, formula (9) can be reduced to the following expression:
f(x) = x^T H x − 2 F^T x − β x^T s_nr s_nr^T x    (10)
where x = [x_i, y_i]^T is the result of the fitting, i.e. the new coordinate vector of the tracked feature points; H = diag(H_1, H_2, ..., H_i, ..., H_n), H_i = diag(a'_i, b'_i), where a'_i, b'_i are the parameters in formula (7); F = [F_1, F_2, ..., F_n]^T, where i is the index of the feature point and n is the number of feature points; F_i = [a_i x_{0i}, b_i y_{0i}]^T, where (x_0, y_0) is the coordinate point at which the response image R(x, y) of the corresponding feature point attains its maximum; s_nr = [s_1/e_1, s_2/e_2, ..., s_k/e_k]^T, where s_1, ..., s_k are the first k eigenvectors obtained after PCA and e_1, ..., e_k denote the corresponding eigenvalues.
Formula (10) is a quadratic equation in x, so a unique x can be found that maximizes formula (10).
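For illustration only, the quadratic surface of formula (7) and the maximizer of formula (10) could be sketched as follows; using a plain least-squares fit in place of the quadratic programming mentioned above, and taking the stationary point of the quadratic form (10), are assumptions of this sketch.

```python
import numpy as np

def fit_quadratic_response(R_map):
    """Fit formula (7), R(x, y) = a'(x - x0)^2 + b'(y - y0)^2 + c',
    to a small response map via least squares on the expanded form
    R = a*x^2 + b*y^2 + d*x + e*y + g."""
    h, w = R_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel()**2, ys.ravel()**2,
                         xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, R_map.ravel(), rcond=None)
    return coeffs  # a', b' and the linear/constant terms

def maximize_objective(H, F, S_nr, beta):
    """Stationary point of formula (10): solve (H - beta * S_nr S_nr^T) x = F,
    where S_nr stacks the scaled shape eigenvectors s_j / e_j as columns."""
    M = H - beta * (S_nr @ S_nr.T)
    return np.linalg.solve(M, F)
```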
Step S43, based on the initial face pose parameter group obtained in step S3, obtaining the corrected face pose parameter group according to the positions of the feature points obtained by fitting.
This step belongs to the prior art and is not repeated here.
Next, the key point detection described in step S2, the face pose parameter group described in step S3 and the CLM face tracking described in step S4 need to be fused and learn from each other to track the face, so as to improve the robustness of face tracking.
Step S5, tracking the face again in each frame of face image of the face video to be tracked, based on the face pose parameter group of each frame obtained in step S3 and the positions of the feature points in each frame obtained by the tracking in step S4;
According to the above description, the pose parameter group (R, T) of the face in each frame of face image of the face video to be tracked can be computed initially, and this parameter group can serve as the initial parameters of the CLM face tracking. The key point model B is used to initialize the CLM face tracking mainly because R represents the angle of the face: with R, the CLM model of the correct angle can be selected when instantiating the CLM, and the initialized face shape can be adjusted. The CLM model includes models of multiple angles, for example 3 classes: a frontal model, a left-profile model and a right-profile model. For the face to be tracked, the angle of the face is determined by R, and the class of model matching this angle is used when instantiating the CLM model.
Step S5 further includes the following steps:
Step S51, obtaining the key points and the initial face pose parameter group of a certain frame of face image of the face video to be tracked according to step S2 and step S3;
Step S52, based on the initial face pose parameter group (R, T) of this frame of face image, performing CLM face tracking on the face images of the face video to be tracked in the forward and backward directions (forward can be, for example, towards the frame following the current frame, and backward towards the frame preceding it) as described in step S4, obtaining the positions of the key points in each frame of face image, then obtaining the corrected face pose parameter group from the initial face pose parameter group (R, T), and using the corrected face pose parameter group to update the key point model B;
The step of updating the key point model B further includes the following steps:
Step S521, for the current frame, judging the pose of the face according to its face pose parameter group (R, T);
Step S522, updating the key point model B according to the matching between the key point descriptors F_i = (f_i, x_i, y_i) in the current frame and the key point model B, specifically (a sketch of this update rule is given after this passage):
If the majority (for example 80%) of the key points of the current frame match key points in the key point model B, the key points of the current frame that were not matched are added to the key point model B; the initial key point model B is built from frontal face images, and after continual supplementation it comes to contain key points of the profile as well. Otherwise the key point model B is not supplemented, because in that case the key points of the current frame may contain many falsely detected key points.
Of course, the key point model B does not need to be updated after the CLM face tracking of every frame; in practice, it can be updated once every few frames.
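A minimal sketch of the step S522 update rule; the matching predicate and the 80% threshold follow the example in the text, while everything else is illustrative.

```python
def maybe_update_keypoint_model(model_b, frame_keypoints, matches_model, ratio=0.8):
    """If at least `ratio` of the current frame's key points match the
    key point model B, add the unmatched ones to B; otherwise leave B
    unchanged (the frame is likely to contain false detections)."""
    if not frame_keypoints:
        return model_b
    flags = [matches_model(kp, model_b) for kp in frame_keypoints]
    if sum(flags) / len(frame_keypoints) >= ratio:
        model_b.extend(kp for kp, ok in zip(frame_keypoints, flags) if not ok)
    return model_b
```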
Step S53, using the key point model B updated as described above, tracking the face again in each frame of face image of the face video to be tracked according to steps S3 and S4.
When CLM face tracking is performed in step S52, the face pose parameter group can be re-initialized every few frames according to the description of step S51; this corrects the CLM face tracking, because always initializing with the detection result of the previous frame may accumulate errors and cause the track to be lost.
To further enhance the effect, the present invention also adopts an online learning mode, namely step S6: the available detection results are used at any time during tracking for learning and training, so that the model and the tracking merge into one and model A can be updated online. Compared with the traditional machine learning procedure, in which learning and application are separated, this is a breakthrough. Combined with the fusion tracking method introduced in the previous step, it makes the tracking of the whole face tracking system more robust, and a fairly ideal tracking effect can be obtained even for large-angle rotations.
Step S6, updating model A with the incremental PCA (Incremental PCA) method, and repeating steps S1-S5 with the updated model A to obtain the final face tracking result.
As introduced above, model A consists of two parts, the shape model and the texture model. Below this step is illustrated by taking the shape model as an example; for brevity, it is still referred to as model A.
For model A it is known from formula (2) that it is the result of a linear combination of the mean face shape s_0 and a series of standard orthogonal bases s_i. These standard orthogonal bases s_i form a set A', A' = {s_1, s_2, ..., s_u}, and the mean face shape s_0 is the arithmetic mean of the elements of set A'. A further set B' is created, which accumulates the coordinate vectors of the face feature points obtained during online learning according to the face pose parameter group (R, T). Whereas the s_i (i ∈ {1, 2, ..., u}) of set A' were obtained during offline modeling, the elements s_{u+1}, ..., s_{u+m} of set B' are added in real time when the key point model B is updated; their arithmetic mean is denoted s'_0, m is the number of vectors in set B', and n is the number of face feature points. In addition, the number of vectors added to set B' can be chosen according to need, which both guarantees the accuracy of the online updating and avoids performing incremental learning for every single frame, thereby emphasizing timeliness.
The step of updating model A thus further includes the following steps:
Step S61, computing the singular value decomposition (SVD) of the expression A' − s_0 to obtain U Σ V^T, where A' − s_0 denotes the matrix obtained by subtracting s_0 from every column vector of the set A';
Step S62, constructing an augmented matrix B̂ = [B' − s'_0, √(nm/(n+m)) (s'_0 − s_0)], where s'_0 is the arithmetic mean of set B', m is the number of vectors in set B', and n is the number of face feature points, and computing from this augmented matrix B̃ = orth(B̂ − U U^T B̂), where orth(·) denotes matrix orthogonalization and can be obtained by QR decomposition, and
R = [ Σ    U^T B̂
      0    B̃^T (B̂ − U U^T B̂) ];
Step S63, computing the singular value decomposition (SVD) of R to obtain Ũ Σ̃ Ṽ^T;
Step S64, computing U' = [U  B̃] Ũ; U' is then a new group of orthogonal bases, which can also be understood as the updated set A', from which the updated model A is obtained.
Fig. 2 to Fig. 4 are schematic diagrams of the results of tracking different faces with the method of the invention. Fig. 2 shows the tracking result for a frontal face, Fig. 3 the tracking result for a face with small-angle rotation, and Fig. 4 the tracking result for a face with large-angle rotation. In each figure, panel (a) is the tracking result without online incremental learning and panel (b) is the tracking result with online incremental learning. It can be seen that the face tracking effect with online incremental learning is better than that without it.
To emphasize the influence of online incremental learning on model A, i.e. considering that images closer to the current frame should influence the tracking more, a forgetting factor f can optionally be introduced when performing online incremental learning:
R = [ fΣ    U^T B̂
      0     B̃^T (B̂ − U U^T B̂) ],
where the forgetting factor f is an empirical value whose range is 0 to 1.
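A minimal NumPy sketch of the incremental update of steps S61-S64 with the optional forgetting factor is given below; it follows the augmented-matrix form given above, and the running-mean update in the last lines is an assumption of this sketch.

```python
import numpy as np

def incremental_pca_update(U, Sigma, s0, n, B_new, f=1.0):
    """Update the orthogonal basis U (columns s_i), singular values Sigma
    and mean s0 of the shape model with new shape vectors B_new (one per
    column); f is the forgetting factor in the range 0..1."""
    m = B_new.shape[1]
    s0_new = B_new.mean(axis=1)
    # augmented matrix B_hat: centered new data plus the mean-shift column
    B_hat = np.column_stack([B_new - s0_new[:, None],
                             np.sqrt(n * m / (n + m)) * (s0_new - s0)])
    proj = U @ (U.T @ B_hat)                    # component inside the old basis
    B_tilde, _ = np.linalg.qr(B_hat - proj)     # orth(B_hat - U U^T B_hat)
    R = np.block([[f * np.diag(Sigma), U.T @ B_hat],
                  [np.zeros((B_tilde.shape[1], Sigma.size)),
                   B_tilde.T @ (B_hat - proj)]])
    U_r, Sigma_r, _ = np.linalg.svd(R, full_matrices=False)
    U_prime = np.hstack([U, B_tilde]) @ U_r     # new orthogonal basis U'
    s0_updated = (n * s0 + m * s0_new) / (n + m)
    return U_prime, Sigma_r, s0_updated, n + m
```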
In summary, the present invention combines face detection and the CLM algorithm, and adds principal component analysis based on online incremental learning, which greatly improves the overall robustness of the system while guaranteeing its real-time performance. Specifically, the invention uses key point detection to estimate the face angle and takes this as the basis for selecting the model angle in the CLM modeling; meanwhile, on the one hand the CLM detection result is used to continually correct the key point sample library, and on the other hand the detection result is used to update the CLM model online in real time, so that the model is no longer something acquired statically only at learning time but is tightly connected to the current state. This solves the problem of lost tracking that occurs in CLM face tracking when adjacent frames of the target image change greatly, and improves tracking precision.
The specific embodiments described above further describe the purpose, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above is only a specific embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc., made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (13)

1. A face tracking method based on online incremental principal component analysis in which features and a model are matched against each other, characterized in that the method comprises the following steps:
Step S1, performing offline modeling on a number of face images to obtain a Constrained Local Model (CLM) model A that includes a shape model s and a texture model w^T;
Step S2, inputting a face video to be tracked, performing key point detection on each frame of face image of the face video to be tracked, and taking the set of all obtained key points together with the robust descriptors of these key points as a key point model B;
Step S3, based on the key point model B obtained in step S2, performing key point matching on each frame of face image of the face video to be tracked to obtain the initial face pose parameter group (R, T) in each frame of face image, where R denotes the angle parameter and T denotes the displacement parameter;
Step S4, performing CLM face tracking on the face video to be tracked using model A to obtain the positions of the feature points in each frame of face image of the face video to be tracked;
Step S5, tracking the face again in each frame of face image of the face video to be tracked, based on the face pose parameter group of each frame obtained in step S3 and the positions of the feature points in each frame obtained by the tracking in step S4;
Step S6, updating model A with the incremental PCA method, and repeating steps S1-S5 with the updated model A to obtain the final face tracking result;
Step S5 further includes the following steps:
Step S51, obtaining the key points and the initial face pose parameter group of a certain frame of face image of the face video to be tracked according to step S2 and step S3;
Step S52, based on the initial face pose parameter group (R, T) of this frame of face image, performing CLM face tracking on the face images of the face video to be tracked in the forward and backward directions as described in step S4, obtaining the positions of the key points in each frame of face image, then obtaining the corrected face pose parameter group from the initial face pose parameter group (R, T), and using the corrected face pose parameter group to update the key point model B;
Step S53, using the updated key point model B, tracking the face again in each frame of face image of the face video to be tracked according to steps S3 and S4;
In step S52, the step of updating the key point model B further includes the following steps:
Step S521, for the current frame, judging the pose of the face according to its face pose parameter group (R, T);
Step S522, updating the key point model B according to the matching between the key point descriptors in the current frame and the key point model B:
If most of the key points of the current frame match key points in the key point model B, the key points of the current frame that are not matched are added to the key point model B; otherwise the key point model B is not supplemented.
2. The method according to claim 1, characterized in that step S1 further comprises the following steps:
Step S11, calibrating the face images according to a pre-determined common facial contour to obtain multiple calibration feature points, and building a face shape model s from the coordinate values of these calibration feature points;
Step S12, based on each calibration feature point obtained in step S11, learning a texture model w^T that captures the texture characteristics of a region of a certain size corresponding to each calibration feature point.
3. The method according to claim 2, characterized in that step S11 further comprises:
Step S111, collecting N face images, calibrating every face image manually according to the common facial contour to obtain multiple calibration feature points, and then obtaining N face shape vectors s_m:
s_m = (x_1, y_1, x_2, y_2, ..., x_n, y_n)^T,
where m denotes the m-th face image among the N face images, x_i, y_i are the coordinate values of the i-th calibration feature point in the corresponding face image, and n is the number of calibration feature points;
Step S112, linearly combining a mean face shape s_0 with u orthogonal face shape vectors s_i to obtain the face shape model s:
s = s_0 + Σ_{i=1}^{u} p_i s_i,
where the mean face shape s_0 is the mean of the N face shape vectors s_m, and p_i is a shape parameter whose value is the weight corresponding to the i-th of the u eigenvectors obtained by performing principal component analysis on the N face shape vectors s_m.
4. The method according to claim 3, characterized in that before the principal component analysis in step S112, a Procrustes analysis is applied to each of the N face shape vectors s_m to reduce motion errors.
5. The method according to claim 2, characterized in that step S12 further comprises the following steps:
Step S121, taking the region of size r × r centered at each calibration feature point of each face image obtained in step S11 as a positive sample, and intercepting multiple regions of the same size at arbitrary other positions in the same image as negative samples;
Step S122, based on the sample group corresponding to each calibration feature point, using the support vector machine to obtain the texture model w^T corresponding to each calibration feature point.
6. The method according to claim 5, characterized in that in step S122:
First, each sample in the sample group corresponding to each calibration feature point is written in the following form:
x^{(i)} = [x_1^{(i)}, x_2^{(i)}, ..., x_{r×r}^{(i)}]^T,
where (i) denotes the index of the sample and x_j^{(i)} is the pixel value at a certain position in the sample;
Then the SVM is used to obtain the texture model w^T corresponding to each calibration feature point:
y^{(i)} = w^T · x^{(i)} + θ,
where y^{(i)} is the output of the SVM, w^T is the learned texture model, w^T = [w_1 w_2 ... w_{r×r}], and θ is the offset of the SVM; for the positive samples corresponding to each calibration feature point y^{(i)} = 1, and for the negative samples y^{(i)} = 0.
7. The method according to claim 1, characterized in that the key point detection in step S2 comprises the learning of the key points and the learning of their robust descriptors, and the learning of the key points further comprises the following steps:
Step S21, for each frame of the face video to be tracked, obtaining multiple key points by initial computation;
Step S22, selecting the key points with invariance from the multiple key points initially obtained in step S21, and combining the invariant key points of all images of the face video to be tracked to obtain a set of key points and the descriptors (f_i, x_i, y_i) of these key points, where f_i denotes the feature value of the i-th key point and (x_i, y_i) denotes the coordinates of this key point;
The learning of the robust descriptors of the key points further includes the following steps:
Step S23, forming a first key point set from the n invariant key points obtained in step S22;
Step S24, applying one of the parameter transformations of pose rotation, expression change and illumination variation to the face in each frame of the face video to be tracked, and simulating the transformed image;
Step S25, forming a second key point set from the multiple invariant key points detected on the transformed image according to steps S21 and S22;
Step S26, for each point p in the second key point set, performing a descriptor matching operation with the first key point set;
Step S27, repeating steps S24 to S26 for images under other parameter transformations, finally obtaining the complete robust key point descriptors of each frame of the face video to be tracked.
8. The method according to claim 7, characterized in that step S26 is specifically:
In the first key point set, find the point q closest to the position of point p, and judge whether the 3D points p' and q', obtained by back-projecting point p and point q onto the surface of the frontal 3D model, are the same 3D point. If p' and q' are the same 3D point and the descriptor of p is closest to a descriptor of q, the descriptor of p is added to the descriptors of q; if p' and q' are the same 3D point and the descriptor of p is closest to a descriptor of another point x (not q) in the first key point set, the point q and that descriptor are invalid; if p' and q' are the same 3D point and the descriptor of p differs from the descriptors of every point in the first key point set, the descriptor of p and p are added to the first key point set; if p' and q' are not the same 3D point and the descriptor of p is very close to some descriptor of a point s in the first key point set, the point s and its descriptors are removed from the first key point set; if p' and q' are not the same 3D point and the descriptor of p differs from the descriptors of every key point in the first key point set, the point p and its descriptor are added to the first key point set.
9. The method according to claim 1, characterized in that step S3 further comprises the following steps:
Step S31, for a certain frame of face image of the face video, obtaining the key points of the previous face image frame according to step S2, and searching in this frame for each key point of the previous frame near the position corresponding to that key point in the current frame;
Step S32, matching the descriptors of the key points in the current frame with the descriptors in the key point model B, forming a set V of the 3D key points of the current frame that match descriptors in the key point model B, forming a set u of the 2D key points of the current frame that match descriptors in the key point model B, forming a set u' of the 2D key points obtained by projecting the set V onto the image plane, and comparing u' and u to obtain the pose parameter group (R, T) of the initial face in the current frame relative to the frontal face that minimizes ||u − u'||:
(R, T)* = argmin_{(R, T)} Σ_{i=1}^{N_k} || K [R | T] V_i − u_i ||²_2,
where K is the camera parameter matrix, R is the angle matrix, T is the displacement vector, [R | T] is the augmented matrix composed of R and T, i is the index of the key point, and N_k is the maximum of the numbers of key points in set V and set u.
10. The method according to claim 1, characterized in that step S4 further comprises the following steps:
Step S41, performing face detection on a certain current frame of face image of the face video to be tracked, obtaining n initial feature points, and correspondingly obtaining the response image R(x, y) of size r*r of each feature point, where the response image R(x, y) of each feature point is the result of w^T · x^{(i)} after matrix reshaping, w^T is the texture model of this feature point obtained with the support vector machine, w^T = [w_1 w_2 ... w_{r×r}], and x^{(i)} is the i-th sample of size r*r of this feature point;
Step S42, using the response image R(x, y), obtaining by fitting the positions of the feature points in each frame of face image of the face video to be tracked that have the same meaning as those in the current frame of face image;
Step S43, based on the initial face pose parameter group obtained in step S3, obtaining the corrected face pose parameter group according to the positions of the feature points obtained by fitting.
11. The method according to claim 10, characterized in that step S42 further comprises the following steps:
Step S421, performing a search of range a × a over the region of size r × r centered at each of the feature points obtained in step S41; for each feature point, this gives a square region centered at it with side length (r + a);
Step S422, finding by function fitting, in the square regions of the next or previous frame of the current frame, the coordinate positions of the feature points that have the same meaning as those of the current frame of face image.
12. The method according to claim 1, characterized in that the step of updating model A further comprises the following steps:
Step S61, computing the singular value decomposition of the expression A' − s_0 to obtain U Σ V^T, where A' is the set formed by the series of standard orthogonal bases s_i of the texture model w^T in model A, and s_0 denotes the mean face shape in model A;
Step S62, constructing an augmented matrix B̂ = [B' − s'_0, √(nm/(n+m)) (s'_0 − s_0)], where B' is the set of coordinate vectors of the face feature points obtained according to the face pose parameter group (R, T), s'_0 is the arithmetic mean of set B', m is the number of vectors in set B', and n is the number of face feature points, and computing from this augmented matrix B̃ = orth(B̂ − U U^T B̂) and
R = [ Σ    U^T B̂
      0    B̃^T (B̂ − U U^T B̂) ],
where orth(·) denotes matrix orthogonalization;
Step S63, computing the singular value decomposition of R to obtain Ũ Σ̃ Ṽ^T;
Step S64, computing a new group of orthogonal bases U' = [U  B̃] Ũ, and updating model A with the obtained new orthogonal bases.
13. The method according to claim 12, characterized in that the expression of R is replaced by the following formula:
R = [ fΣ    U^T B̂
      0     B̃^T (B̂ − U U^T B̂) ],
where f is a forgetting factor, an empirical value whose range is 0 to 1.
CN201310267907.XA 2013-06-28 2013-06-28 Face tracking method based on incremental principal component analysis with mutual matching of features and model Active CN103310204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310267907.XA CN103310204B (en) 2013-06-28 2013-06-28 Feature based on increment principal component analysis mates face tracking method mutually with model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310267907.XA CN103310204B (en) 2013-06-28 2013-06-28 Feature based on increment principal component analysis mates face tracking method mutually with model

Publications (2)

Publication Number Publication Date
CN103310204A CN103310204A (en) 2013-09-18
CN103310204B true CN103310204B (en) 2016-08-10

Family

ID=49135400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310267907.XA Active CN103310204B (en) 2013-06-28 2013-06-28 Feature based on increment principal component analysis mates face tracking method mutually with model

Country Status (1)

Country Link
CN (1) CN103310204B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014205768A1 (en) * 2013-06-28 2014-12-31 中国科学院自动化研究所 Feature and model mutual matching face tracking method based on increment principal component analysis
CN104036546B (en) * 2014-06-30 2017-01-11 清华大学 Method for carrying out face three-dimensional reconstruction at any viewing angle on basis of self-adaptive deformable model
CN106815547A (en) * 2015-12-02 2017-06-09 掌赢信息科技(上海)有限公司 It is a kind of that method and the electronic equipment that standardized model is moved are obtained by multi-fit
CN107169397B (en) * 2016-03-07 2022-03-01 佳能株式会社 Feature point detection method and device, image processing system and monitoring system
CN107292218A (en) * 2016-04-01 2017-10-24 中兴通讯股份有限公司 A kind of expression recognition method and device
CN106339693A (en) * 2016-09-12 2017-01-18 华中科技大学 Positioning method of face characteristic point under natural condition
CN106781286A (en) * 2017-02-10 2017-05-31 开易(深圳)科技有限公司 A kind of method for detecting fatigue driving and system
CN108734735B (en) * 2017-04-17 2022-05-31 佳能株式会社 Object shape tracking device and method, and image processing system
CN107066982A (en) * 2017-04-20 2017-08-18 天津呼噜互娱科技有限公司 The recognition methods of human face characteristic point and device
CN108304758B (en) 2017-06-21 2020-08-25 腾讯科技(深圳)有限公司 Face characteristic point tracking method and device
CN108256479B (en) * 2018-01-17 2023-08-01 百度在线网络技术(北京)有限公司 Face tracking method and device
CN108427918B (en) * 2018-02-12 2021-11-30 杭州电子科技大学 Face privacy protection method based on image processing technology
CN108460829B (en) * 2018-04-16 2019-05-24 广州智能装备研究院有限公司 A kind of 3-D image register method for AR system
CN108764048B (en) * 2018-04-28 2021-03-16 中国科学院自动化研究所 Face key point detection method and device
CN109800635A (en) * 2018-12-11 2019-05-24 天津大学 A kind of limited local facial critical point detection and tracking based on optical flow method
CN111340043A (en) * 2018-12-19 2020-06-26 北京京东尚科信息技术有限公司 Key point detection method, system, device and storage medium
CN111523345B (en) * 2019-02-01 2023-06-23 上海看看智能科技有限公司 Real-time human face tracking system and method
CN110008673B (en) * 2019-03-06 2022-02-18 创新先进技术有限公司 Identity authentication method and device based on face recognition
CN110415270B (en) * 2019-06-17 2020-06-26 广东第二师范学院 Human motion form estimation method based on double-learning mapping incremental dimension reduction model
CN110544272B (en) * 2019-09-06 2023-08-04 腾讯科技(深圳)有限公司 Face tracking method, device, computer equipment and storage medium
CN111126159A (en) * 2019-11-28 2020-05-08 重庆中星微人工智能芯片技术有限公司 Method, apparatus, electronic device, and medium for tracking pedestrian in real time
CN110990604A (en) * 2019-11-28 2020-04-10 浙江大华技术股份有限公司 Image base generation method, face recognition method and intelligent access control system
CN111507304B (en) * 2020-04-29 2023-06-27 广州市百果园信息技术有限公司 Self-adaptive rigidity priori model training method, face tracking method and related devices

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1987890A (en) * 2005-12-23 2007-06-27 北京海鑫科金高科技股份有限公司 Humanface image matching method for general active snape changing mode
CN102214291A (en) * 2010-04-12 2011-10-12 云南清眸科技有限公司 Method for quickly and accurately detecting and tracking human face based on video sequence
CN102999918A (en) * 2012-04-19 2013-03-27 浙江工业大学 Multi-target object tracking system of panorama video sequence image
CN103164693A (en) * 2013-02-04 2013-06-19 华中科技大学 Surveillance video pedestrian detection matching method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3763215B2 (en) * 1998-09-01 2006-04-05 株式会社明電舎 Three-dimensional positioning method and apparatus, and medium on which software for realizing the method is recorded

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1987890A (en) * 2005-12-23 2007-06-27 北京海鑫科金高科技股份有限公司 Humanface image matching method for general active snape changing mode
CN102214291A (en) * 2010-04-12 2011-10-12 云南清眸科技有限公司 Method for quickly and accurately detecting and tracking human face based on video sequence
CN102999918A (en) * 2012-04-19 2013-03-27 浙江工业大学 Multi-target object tracking system of panorama video sequence image
CN103164693A (en) * 2013-02-04 2013-06-19 华中科技大学 Surveillance video pedestrian detection matching method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Model Transduction for Triangle Meshes; Huai-Yu Wu et al.; Journal of Computer Science and Technology; 2010-05-31; Vol. 25, No. 3; 583-594 *
Human upper-body 3D motion pose recovery method combining model matching and feature tracking; Chen Shu; Journal of Computer-Aided Design & Computer Graphics; 2012-11-30; Vol. 24, No. 11; 1455-1463, 1470 *

Also Published As

Publication number Publication date
CN103310204A (en) 2013-09-18

Similar Documents

Publication Publication Date Title
CN103310204B (en) Feature based on increment principal component analysis mates face tracking method mutually with model
CN101499128B (en) Three-dimensional human face action detecting and tracing method based on video stream
CN106055091A (en) Hand posture estimation method based on depth information and calibration method
CN110163110A (en) A kind of pedestrian's recognition methods again merged based on transfer learning and depth characteristic
WO2014205768A1 (en) Feature and model mutual matching face tracking method based on increment principal component analysis
CN109993774A (en) Online Video method for tracking target based on depth intersection Similarity matching
CN101777116B (en) Method for analyzing facial expressions on basis of motion tracking
CN105869178B (en) A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature
CN104167016B (en) A kind of three-dimensional motion method for reconstructing based on RGB color and depth image
CN109949375A (en) A kind of mobile robot method for tracking target based on depth map area-of-interest
CN107958479A (en) A kind of mobile terminal 3D faces augmented reality implementation method
CN102087703B (en) The method determining the facial pose in front
CN106153048A (en) A kind of robot chamber inner position based on multisensor and Mapping System
CN105512621A (en) Kinect-based badminton motion guidance system
CN107203753A (en) A kind of action identification method based on fuzzy neural network and graph model reasoning
CN103514442B (en) Video sequence face identification method based on AAM model
CN110176016B (en) Virtual fitting method based on human body contour segmentation and skeleton recognition
CN110321833A (en) Human bodys' response method based on convolutional neural networks and Recognition with Recurrent Neural Network
CN106066696A (en) The sight tracing compensated based on projection mapping correction and point of fixation under natural light
CN110268444A (en) A kind of number of people posture tracing system for transcranial magnetic stimulation diagnosis and treatment
CN102184541A (en) Multi-objective optimized human body motion tracking method
CN103810491A (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN103440510A (en) Method for positioning characteristic points in facial image
CN106570460A (en) Single-image human face posture estimation method based on depth value
CN102156992A (en) Intelligent simulating method for passively locating and tracking multiple targets in two stations

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant