CN104036229A - Regression-based active appearance model initialization method - Google Patents

Regression-based active appearance model initialization method

Info

Publication number
CN104036229A
CN104036229A (application CN201310090347.5A); also published as CN 104036229 A
Authority
CN
China
Prior art keywords
point
iter
facial image
calculate
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310090347.5A
Other languages
Chinese (zh)
Inventor
陈莹
化春键
郭修宵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201310090347.5A priority Critical patent/CN104036229A/en
Publication of CN104036229A publication Critical patent/CN104036229A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a regression-based active appearance model (AAM) initialization method in the field of computer vision. The method is realized as follows: when an active appearance model is used for automatic tracking of facial feature points, the target position information of the first frame of the tracked video is assumed known; during subsequent tracking, a dual-threshold feature correspondence algorithm is used to obtain discrete feature correspondences between adjacent frame images, and the spatial mapping between the discrete feature points and the structured calibration points, established by a kernel ridge regression algorithm, yields an initial calibration of the facial features. This greatly reduces the number of subsequent fitting iterations while improving calibration accuracy. Compared with conventional active appearance model initialization methods, the proposed initialization helps the active appearance model obtain more accurate facial feature point calibration results.

Description

Regression-based active appearance model initialization method
Technical field
The invention belongs to the field of image analysis technology; specifically, it relates to a regression-based active appearance model initialization method.
Background art
In computer vision research, locating the feature points of a target shape with an active appearance model (AAM) has been a focus of attention and study in recent years. The AAM was first proposed by Edwards et al. in 1998 and has been widely applied to the registration and recognition of faces and other non-rigid objects. The AAM algorithm is an improvement over the active shape model (ASM): compared with the ASM, it takes the constraint of global information into account and adopts a statistical appearance constraint that fuses shape and texture. The AAM search strategy also draws on the central idea of analysis-by-synthesis (ABS): by continually adjusting the model parameters, the model is made to approach the actual input gradually.
Although the AAM is effective, the fitting efficiency and accuracy of any of its variants are closely related to the initial position given to the model. If the initial position is poor, the gradient-descent-based fitting in the AAM is very likely to fall into a local minimum and produce a very poor shape localization result. The choice of initial feature points is therefore a key factor affecting the robustness and speed of the algorithm, and automatic, accurate initial calibration of facial feature points can greatly improve both its efficiency and its accuracy. Traditional AAM initialization methods include: (1) using the mean face as the initial shape, which yields a large initialization error; (2) using located facial features (e.g. eyes, mouth) to complete the initialization, which places high demands on localization accuracy; (3) in video tracking, using the previous frame's alignment result as the initial information for the current frame, which only copes with small inter-frame changes.
Summary of the invention
The object of the invention is to address the shortcomings of existing active appearance model initialization methods. Taking facial feature point tracking in video as the research object, a regression-based active appearance model initialization method is proposed. Because the method exploits the correlation between frames, the invention can greatly improve the tracking performance of the target shape even in tracking environments with frame loss.
The technical solution of the invention is as follows: the algorithm assumes the key feature points of the first frame are known. It first uses the dual-threshold method to obtain accurate local feature correspondences between the previous frame and the current frame image, then uses the mapping between discrete local feature correspondences and structured calibration points, established with the kernel ridge regression (KRR) algorithm, to extract the calibration point locations of the current frame from the local feature correspondences. The specific implementation steps of the technical solution are as follows:
1. Select a training video and use the kernel ridge regression algorithm to establish the spatial mapping M_v between discrete local feature points and facial calibration points;
2. Detect the face with the cascade AdaBoost algorithm and normalize the face image to 250×250 pixels;
3. Compute the error e = (1/N) Σ_x |I_1(W(x; p)) - I_2(W(x; p))| between the image reconstructed from the previous frame and the current reconstructed image, where I_1 and I_2 are the previous and current face images respectively, x ranges over the pixel set of the mean shape s_0, p is the deformation parameter from the mean shape s_0 to the current reconstructed shape s, W(x; p) is the pixel set under the reconstructed shape s, and N is the number of pixels under the mean shape. When e > e_0, the previous and current face images differ substantially, so proceed to step 4; otherwise proceed to step 5, where e_0 = 5e-4 is the error threshold.
4. Extract SIFT features from the previous face image I_1 and the current face image I_2 and match them with the dual-threshold feature matching method, obtaining the match set C = {(c_k, c'_k), k = 1, 2, ..., q_c}, where q_c is the number of match pairs;
5. Extract the spatial vector V = {V_k, k = 1, 2, ..., n} from the previous face image I_1, where n is the number of facial feature points.
6. Using the parameters of the mapping M_v obtained in step 1 and the match points obtained in step 4, build the test-stage spatial location vector V of the discrete feature points, feed it into the mapping M_v as input, and output the corresponding facial calibration points, which serve as the initial value of the active appearance model used for tracking the current frame;
In the above regression-based active appearance model initialization method, step 1 is carried out as follows:
(1) Initialize the data and set the time instant k = 0;
(2) Between the previous face image I_1(k) and the current face image I_2(k), obtain discrete match points by establishing an equalized-probability matching model, and set k = k + 1;
(3) From the discrete match points in the previous and current face images, obtain the KRR training data T_V^j(k) = {(V_k^j, V_k'^j), j = 1, 2, ..., m}, where m = 66 is the number of calibration points;
(4) If k < n, return to (1.2); otherwise obtain the total training data, where n = 100 is the number of training sample pairs.
(5) For each facial calibration point j, first compute the kernel matrix K_j from the training data, where K_j(V_{k1}^j, V_{k2}^j) = exp(-||V_{k1}^j - V_{k2}^j||^2 / σ), k1 = 1, 2, ..., n, k2 = 1, 2, ..., n, and σ = 0.025; then create the identity matrix I of the same size as the matrix K_j; next compute the kernel coefficient vector α_j = (K_j + λI)^{-1} Y_j, where Y_j collects the output training data and λ = 0.5 × 10^-7; finally, from the above, obtain the regression kernel function f_j(V) = Σ_k α_k^j K_j(V, V_k^j).
(6) Obtain the mapping set M_v = {f_j(V), j = 1, 2, ..., m}.
In the above regression-based active appearance model initialization method, step 4 is carried out as follows:
(1) Dual-threshold initialization: set the threshold initial values η_1 = 1.5 and η_2 = 8; the loop counters iter_1 = 0 and iter_2 = 0; the loop limits iter_max_1 = 10 and iter_max_2 = 20; and the number of neighbouring match points t = 5;
(2) Extract SIFT features from the previous face image I_1 and the current face image I_2;
(3) Perform SIFT feature matching with threshold η_1 and set iter_1 = iter_1 + 1;
(4) If the number of matches is less than t and iter_1 < iter_max_1, set η_1 = η_1 + 0.15 and return to (4.3); if the number of matches is greater than t, or iter_1 > iter_max_1, proceed to (4.5);
(5) Perform SIFT feature matching with threshold η_2 and set iter_2 = iter_2 + 1;
(6) If the number of matches is less than 2 and iter_2 < iter_max_2, set η_2 = η_2 + 0.02 and return to (4.5); if the number of matches is greater than 2 and less than 5, and iter_2 < iter_max_2, set η_2 = η_2 - 0.01 and return to (4.5); otherwise proceed to (4.7);
(7) Obtain the dense match set B = {(b_j, b'_j), j = 1, 2, ..., q_b} produced with η_1, where q_b is the number of dense matches, and the exact match set A = {(a_i, a'_i), i = 1, 2, ..., q_a} produced with η_2, where q_a is the number of exact matches;
(8) Compute the distance ratio of the first and last match pairs in the exact match set;
(9) For every match pair (b_j, b'_j) in the dense match set, first compute the corresponding distances, then compute θ = atan((a_t(y) - b_j(y)) / (a_t(x) - b_j(x))) and θ' = atan((a'_t(y) - b'_j(y)) / (a'_t(x) - b'_j(x))), together with the signs sig_x = sign(a_t(y) - b_j(y)), sig_y = sign(a_t(x) - b_j(x)), sig'_x = sign(a'_t(y) - b'_j(y)) and sig'_y = sign(a'_t(x) - b'_j(x)). If the distance ratio, angles and signs are consistent between the two images, keep the current match; otherwise delete it. Finally, obtain the final match set C = {(c_k, c'_k), k = 1, 2, ..., q_c}, where q_c is the number of final matches.
In the above regression-based active appearance model initialization method, sub-step (3) of step 1 is carried out as follows:
(1) Let j be the current calibration point in I_1(k), o the centre of the current t neighbouring match points in I_1(k), and o' the centre of the corresponding t match points in I_2(k);
(2) Compute the distance from each match point i (i ranging over the match point indices) to the match point centre o and the angle between the line oi and the x axis, giving (d_i, θ_i), and likewise the distance from match point i' to the centre o' and the angle between the line o'i' and the x axis, giving (d'_i, θ'_i);
(3) Compute the distance from the calibration point j to the centre o in I_1(k) and the angle between the line oj and the x axis, giving (d_l, θ_l);
(4) Compute the distance from the calibration point j' to the centre o' in I_2(k) and the angle between the line o'j' and the x axis, giving (d_r, θ_r);
(5) Form the input training data V_j = (d_1, θ_1, ..., d_6, θ_6, d'_1, θ'_1, ..., d'_6, θ'_6, d_l, θ_l) for calibration point j and the corresponding output training data V'_j = (Δd, Δθ), which together form the training sample at time k, where Δd = d_r / d_l and Δθ = θ_r - θ_l;
(6) Add j and j' to the match point set as a new match pair and iterate until all calibration points have been traversed.
Compared with the prior art, the method of the invention has the following outstanding substantive features and notable advantages:
(1) It remedies the shortcomings of previous automatic face calibration techniques, namely high demands on the initial pose, many iterations, and slow calibration, by learning from training data the spatial relation between discrete feature points and structured calibration points, so that the calibration of the current frame's face can be initialized from online discrete feature correspondences;
(2) It adopts kernel ridge regression to obtain the mapping function between discrete feature points and structured calibration points, achieving a good balance between regression accuracy and speed;
(3) By establishing the mapping between discrete feature points and structured calibration points, the facial calibration points can be initialized from online discrete feature correspondences, improving both the speed and the accuracy of the final calibration.
The active appearance model initialization method provided by the invention can greatly improve the localization and tracking accuracy of the target shape, providing more complete and accurate feature point information for the subsequent processing of face analysis and achieving the desired calibration effect. It has broad application prospects in civilian areas such as intelligent video conferencing, film and television production, and public safety monitoring, as well as in the military field.
Brief description of the drawings
Fig. 1 is a flow diagram of the regression-based active appearance model initialization method of the invention.
Fig. 2 shows the tracking results for sample frames of the FGNet Talking Face video.
Fig. 3 compares the tracking error of the AAM using the proposed initialization method with that of the traditional AAM.
Detailed description of the embodiments
The invention is further elaborated below with reference to the diagram in Fig. 1.
With reference to the flow chart in Fig. 1, the regression-based active appearance model initialization method of the invention is realized as follows. In the training stage, the algorithm uses the kernel ridge regression (KRR) algorithm to establish the mapping between discrete local feature correspondences and structured calibration points. In the test stage, assuming the key feature points of the first frame are known, it uses the dual-threshold method to obtain accurate local feature correspondences between the previous frame and the current frame image, and then uses the trained mapping between discrete local feature correspondences and structured calibration points to extract the calibration point locations of the current frame from the local feature correspondences. Each step of the embodiment is now illustrated:
1. Select a training video and use the kernel ridge regression algorithm to establish the spatial mapping M_v between discrete local feature points and facial calibration points. The concrete steps of this process are:
(1) Initialize the data and set the time instant k = 0;
(2) Between the previous face image I_1(k) and the current face image I_2(k), obtain discrete match points by establishing an equalized-probability matching model, and set k = k + 1;
(3) From the discrete match points in the previous and current face images, obtain the KRR training data T_V^j(k) = {(V_k^j, V_k'^j), j = 1, 2, ..., m}, where m = 66 is the number of calibration points. The implementation steps are:
(a) Let p be the current calibration point in the frontal face, o the centre of the current k match points in the frontal face, and o' the centre of the corresponding k match points in the profile face;
(b) Compute the distance from each match point i (i ranging over the match point indices) to the match point centre o and the angle between the line oi and the x axis, giving (d_i, θ_i), and likewise the distance from match point i' to the centre o' and the angle between the line o'i' and the x axis, giving (d'_i, θ'_i);
(c) Compute the distance from the calibration point p to the centre o in the frontal face and the angle between the line op and the x axis, giving (d_l, θ_l);
(d) Compute the distance from the calibration point p' to the centre o' in the profile face and the angle between the line o'p' and the x axis, giving (d_r, θ_r);
(e) Form the input training data N_x = (d_1, θ_1, ..., d_6, θ_6, d'_1, θ'_1, ..., d'_6, θ'_6, d_l, θ_l) for calibration point p and the corresponding output training data N_y = (Δd, Δθ), where Δd = d_r / d_l and Δθ = θ_r - θ_l;
(f) Add p and p' to the match point set as a new match pair and iterate until all calibration points have been traversed.
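The (d, θ) construction in sub-steps (b)-(d) above can be sketched as follows. This is a minimal illustration, assuming plain Euclidean distance and `arctan2` angles; the helper name `polar_features` and the interleaved (d_1, θ_1, d_2, θ_2, ...) layout are hypothetical but consistent with the input vector described in the text.

```python
import numpy as np

def polar_features(points, center):
    """Distance and x-axis angle of each point relative to a centre point,
    i.e. the (d_i, theta_i) pairs used to build the regression input
    vector (a sketch; the interleaved layout is an assumption)."""
    diff = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    d = np.hypot(diff[:, 0], diff[:, 1])        # Euclidean distances
    theta = np.arctan2(diff[:, 1], diff[:, 0])  # angles with the x axis
    return np.stack([d, theta], axis=1).ravel() # (d1, th1, d2, th2, ...)

# Two match points measured around an origin-centred point o.
feats = polar_features([(3.0, 4.0), (0.0, 2.0)], (0.0, 0.0))
```

The same helper would be applied to the match points around o and o' and to the calibration points themselves, and the results concatenated into N_x.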
(4) If k < n, return to (1.2); otherwise obtain the total training data, where n = 100 is the number of training sample pairs.
(5) For each facial calibration point j, first compute the kernel matrix K_j from the training data, where K_j(V_{k1}^j, V_{k2}^j) = exp(-||V_{k1}^j - V_{k2}^j||^2 / σ), k1 = 1, 2, ..., n, k2 = 1, 2, ..., n, and σ = 0.025; then create the identity matrix I of the same size as the matrix K_j; next compute the kernel coefficient vector α_j = (K_j + λI)^{-1} Y_j, where Y_j collects the output training data and λ = 0.5 × 10^-7; finally, from the above, obtain the regression kernel function f_j(V) = Σ_k α_k^j K_j(V, V_k^j).
(6) Obtain the mapping set M_v = {f_j(V), j = 1, 2, ..., m}.
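Step (5) is standard kernel ridge regression with a Gaussian kernel. A minimal numpy sketch, assuming the usual closed-form solution α = (K + λI)^{-1} y (which matches the K, I and λ quantities named above); the 1-D toy training data are invented for illustration only.

```python
import numpy as np

SIGMA, LAM = 0.025, 0.5e-7  # sigma and lambda values from the text

def krr_fit(X, y, sigma=SIGMA, lam=LAM):
    """Solve alpha = (K + lam*I)^-1 y with the Gaussian kernel
    K(v1, v2) = exp(-||v1 - v2||^2 / sigma)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, v, sigma=SIGMA):
    """Regression kernel function f(v) = sum_k alpha_k * K(v, v_k)."""
    k = np.exp(-np.sum((X_train - v) ** 2, axis=-1) / sigma)
    return float(k @ alpha)

# Invented toy data: 15 one-dimensional samples of a smooth function.
X = np.linspace(0.0, 7.0, 15).reshape(-1, 1)
y = np.sin(2.0 * X[:, 0])
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, X[3])  # near-interpolates at a training point
```

With λ this small the fit nearly interpolates the training data, which matches the patent's use of KRR as a precise point-to-point mapping rather than a heavily smoothed regressor.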
2. Detect the face with the cascade AdaBoost algorithm and normalize the face image to 250×250 pixels;
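Step 2's normalization can be sketched as a crop-and-resize. In practice the detection box would come from a cascade detector (e.g. OpenCV's CascadeClassifier, assumed here and not shown), and the nearest-neighbour resize below is a stand-in for whatever interpolation the original implementation used.

```python
import numpy as np

def normalize_face(image, box, size=250):
    """Crop the detected face box (x, y, w, h) and rescale it to
    size x size with nearest-neighbour sampling (a stand-in for the
    unspecified interpolation of the original method)."""
    x, y, w, h = box
    face = image[y:y + h, x:x + w]
    rows = np.arange(size) * h // size  # nearest source row per output row
    cols = np.arange(size) * w // size  # nearest source column per output column
    return face[rows[:, None], cols]

# Synthetic frame and a hypothetical detection box, for illustration.
frame = (np.arange(300 * 300) % 251).astype(np.uint8).reshape(300, 300)
norm = normalize_face(frame, (40, 30, 120, 160))
```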
3. Compute the error e = (1/N) Σ_x |I_1(W(x; p)) - I_2(W(x; p))| between the image reconstructed from the previous frame and the current reconstructed image, where I_1 and I_2 are the previous and current face images respectively, x ranges over the pixel set of the mean shape s_0, p is the deformation parameter from the mean shape s_0 to the current reconstructed shape s, W(x; p) is the pixel set under the reconstructed shape s, and N is the number of pixels under the mean shape. When e > e_0, the previous and current face images differ substantially, so proceed to step 4; otherwise proceed to step 5, where e_0 = 5e-4 is the error threshold.
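Omitting the piecewise shape warp W(x; p) for brevity, the per-pixel error test of step 3 can be sketched on already-aligned face crops. The images and the simplification are assumptions; the threshold e_0 = 5e-4 is taken from the text.

```python
import numpy as np

E0 = 5e-4  # error threshold e_0 from the text

def reconstruction_error(prev_face, curr_face):
    """Mean per-pixel absolute difference between two aligned face
    images with intensities scaled to [0, 1] (a simplification that
    skips the shape warp W(x; p))."""
    a = prev_face.astype(np.float64) / 255.0
    b = curr_face.astype(np.float64) / 255.0
    return float(np.mean(np.abs(a - b)))

prev_face = np.full((250, 250), 128, dtype=np.uint8)
curr_face = prev_face.copy()
curr_face[:50, :50] += 40            # a localized appearance change
e = reconstruction_error(prev_face, curr_face)
needs_rematch = e > E0               # True -> redo SIFT matching (step 4)
```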
4. Extract SIFT features from the previous face image I_1 and the current face image I_2 and match them with the dual-threshold feature matching method, obtaining the match set C = {(c_k, c'_k), k = 1, 2, ..., q_c}, where q_c is the number of match pairs. The concrete steps of this process are:
(1) Dual-threshold initialization: set the threshold initial values η_1 = 1.5 and η_2 = 8; the loop counters iter_1 = 0 and iter_2 = 0; the loop limits iter_max_1 = 10 and iter_max_2 = 20; and the number of neighbouring match points t = 5;
(2) Extract SIFT features from the previous face image I_1 and the current face image I_2;
(3) Perform SIFT feature matching with threshold η_1 and set iter_1 = iter_1 + 1;
(4) If the number of matches is less than t and iter_1 < iter_max_1, set η_1 = η_1 + 0.15 and return to (4.3); if the number of matches is greater than t, or iter_1 > iter_max_1, proceed to (4.5);
(5) Perform SIFT feature matching with threshold η_2 and set iter_2 = iter_2 + 1;
(6) If the number of matches is less than 2 and iter_2 < iter_max_2, set η_2 = η_2 + 0.02 and return to (4.5); if the number of matches is greater than 2 and less than 5, and iter_2 < iter_max_2, set η_2 = η_2 - 0.01 and return to (4.5); otherwise proceed to (4.7);
(7) Obtain the dense match set B = {(b_j, b'_j), j = 1, 2, ..., q_b} produced with η_1, where q_b is the number of dense matches, and the exact match set A = {(a_i, a'_i), i = 1, 2, ..., q_a} produced with η_2, where q_a is the number of exact matches;
(8) Compute the distance ratio of the first and last match pairs in the exact match set;
(9) For every match pair (b_j, b'_j) in the dense match set, first compute the corresponding distances, then compute θ = atan((a_t(y) - b_j(y)) / (a_t(x) - b_j(x))) and θ' = atan((a'_t(y) - b'_j(y)) / (a'_t(x) - b'_j(x))), together with the signs sig_x = sign(a_t(y) - b_j(y)), sig_y = sign(a_t(x) - b_j(x)), sig'_x = sign(a'_t(y) - b'_j(y)) and sig'_y = sign(a'_t(x) - b'_j(x)). If the distance ratio, angles and signs are consistent between the two images, keep the current match; otherwise delete it. Finally, obtain the final match set C = {(c_k, c'_k), k = 1, 2, ..., q_c}, where q_c is the number of final matches.
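The two-threshold idea of steps (1)-(7) — a loose threshold η_1 producing a dense set B and a tight threshold η_2 producing an exact set A — can be sketched with a second-to-first nearest-neighbour distance-ratio test. Interpreting η as that ratio is an assumption, and the toy descriptors below are invented stand-ins for SIFT descriptors.

```python
import numpy as np

def ratio_match(desc1, desc2, eta):
    """Nearest-neighbour matching that keeps the pair (i, j) only when
    the second-best distance is at least eta times the best distance;
    a larger eta therefore yields fewer, more reliable matches."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if second >= eta * best:
            matches.append((i, int(order[0])))
    return matches

# Invented toy descriptors: three clean pairs, one weaker pair (index 3),
# and one ambiguous point (index 4) equidistant from two candidates.
desc1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.7, 0.1], [0.5, 0.5]])
desc2 = np.array([[1.0, 0.05], [0.0, 0.95], [1.0, 1.02], [10.0, 10.0]])

dense = ratio_match(desc1, desc2, eta=1.5)  # loose eta_1: dense set B
exact = ratio_match(desc1, desc2, eta=8.0)  # tight eta_2: exact set A
```

The geometric consistency filter of step (9) would then prune B using the distances, angles and signs measured against anchors taken from A.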
5. Extract the spatial vector V = {V_k, k = 1, 2, ..., n} from the previous face image I_1, where n is the number of facial feature points.
6. Using the parameters of the mapping M_v obtained in step 1 and the match points obtained in step 4, build the test-stage spatial location vector V of the discrete feature points, feed it into the mapping M_v as input, and output the corresponding facial calibration points, which serve as the initial value of the active appearance model used for tracking the current frame.
The regression-based active appearance model initialization method proposed by the invention is verified using each frame of the FGNet Talking Face video as a test image.
Fig. 2 shows the tracking results for sample frames of the FGNet Talking Face video.
Fig. 3 compares the tracking error of the AAM using the proposed regression-based initialization method with that of the traditional AAM. The error is given by formula (1), where (x_i^0, y_i^0) are the ground-truth calibration point coordinates, (x_i, y_i) are the calibration point coordinates produced by the algorithm, i = 1, ..., N, N is the number of calibration points (N = 66 in this work), and M is the number of tracked frames:

e = (1/(MN)) Σ_{k=1}^{M} Σ_{i=1}^{N} sqrt((x_i - x_i^0)^2 + (y_i - y_i^0)^2)    (1)
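Formula (1) can be implemented directly; the toy coordinates below are invented for illustration.

```python
import numpy as np

def mean_tracking_error(pred, truth):
    """Equation (1): mean Euclidean distance between predicted and
    ground-truth calibration points, averaged over M frames and N points."""
    pred = np.asarray(pred, dtype=float)    # shape (M, N, 2)
    truth = np.asarray(truth, dtype=float)  # shape (M, N, 2)
    return float(np.mean(np.linalg.norm(pred - truth, axis=-1)))

# Invented example: M = 2 frames, N = 3 points, every prediction off by (3, 4).
truth = np.zeros((2, 3, 2))
pred = truth + np.array([3.0, 4.0])
err = mean_tracking_error(pred, truth)  # each offset has length 5
```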
As can be seen from the figure, compared with the traditional AAM initialization method, the initialization method proposed by the invention helps the AAM obtain more accurate tracking results.

Claims (4)

1. A regression-based active appearance model initialization method, characterized in that: when an active appearance model is used for automatic tracking of facial feature points, the target position information of the first frame of the tracked video is assumed known; a local feature correspondence method is used to obtain discrete local point feature correspondences between the previous and current frame images; and the kernel ridge regression algorithm is used to establish the spatial mapping between the discrete local feature points and the facial calibration points, thereby completing the initialization of the active appearance model. The specific implementation steps are as follows:
(1) select a training video and use the kernel ridge regression algorithm to establish the spatial mapping M_v between discrete local feature points and facial calibration points;
(2) detect the face with the cascade AdaBoost algorithm and normalize the face image to 250×250 pixels;
(3) compute the error e = (1/N) Σ_x |I_1(W(x; p)) - I_2(W(x; p))| between the image reconstructed from the previous frame and the current reconstructed image, where I_1 and I_2 are the previous and current face images respectively, x ranges over the pixel set of the mean shape s_0, p is the deformation parameter from the mean shape s_0 to the current reconstructed shape s, W(x; p) is the pixel set under the reconstructed shape s, and N is the number of pixels under the mean shape; when e > e_0, the previous and current face images differ substantially, so proceed to step (4); otherwise proceed to step (5), where e_0 = 5e-4 is the error threshold;
(4) extract SIFT features from the previous face image I_1 and the current face image I_2 and match them with the dual-threshold feature matching method, obtaining the match set C = {(c_k, c'_k), k = 1, 2, ..., q_c}, where q_c is the number of match pairs;
(5) extract the spatial vector V = {V_k, k = 1, 2, ..., n} from the previous face image I_1, where n is the number of facial feature points;
(6) using the parameters of the mapping M_v obtained in step (1) and the match points obtained in step (4), build the test-stage spatial location vector V of the discrete feature points, feed it into the mapping M_v as input, and output the corresponding facial calibration points, which serve as the initial value of the active appearance model used for tracking the current frame.
2. The regression-based active appearance model initialization method according to claim 1, wherein step (1) is carried out as follows:
(1.1) initialize the data and set the time instant k = 0;
(1.2) between the previous face image I_1(k) and the current face image I_2(k), obtain discrete match points by establishing an equalized-probability matching model, and set k = k + 1;
(1.3) from the discrete match points in the previous and current face images, obtain the KRR training data T_V^j(k) = {(V_k^j, V_k'^j), j = 1, 2, ..., m}, where m = 66 is the number of calibration points;
(1.4) if k < n, return to (1.2); otherwise obtain the total training data, where n = 100 is the number of training sample pairs;
(1.5) for each facial calibration point j, first compute the kernel matrix K_j from the training data, where K_j(V_{k1}^j, V_{k2}^j) = exp(-||V_{k1}^j - V_{k2}^j||^2 / σ), k1 = 1, 2, ..., n, k2 = 1, 2, ..., n, and σ = 0.025; then create the identity matrix I of the same size as the matrix K_j; next compute the kernel coefficient vector α_j = (K_j + λI)^{-1} Y_j, where Y_j collects the output training data and λ = 0.5 × 10^-7; finally, from the above, obtain the regression kernel function f_j(V) = Σ_k α_k^j K_j(V, V_k^j);
(1.6) obtain the mapping set M_v = {f_j(V), j = 1, 2, ..., m}.
3. The regression-based active appearance model initialization method according to claim 2, wherein step (1.3) is carried out as follows:
(1.3.1) let j be the current calibration point in I_1(k), o the centre of the current t neighbouring match points in I_1(k), and o' the centre of the corresponding t match points in I_2(k);
(1.3.2) compute the distance from each match point i (i ranging over the match point indices) to the match point centre o and the angle between the line oi and the x axis, giving (d_i, θ_i), and likewise the distance from match point i' to the centre o' and the angle between the line o'i' and the x axis, giving (d'_i, θ'_i);
(1.3.3) compute the distance from the calibration point j to the centre o in I_1(k) and the angle between the line oj and the x axis, giving (d_l, θ_l);
(1.3.4) compute the distance from the calibration point j' to the centre o' in I_2(k) and the angle between the line o'j' and the x axis, giving (d_r, θ_r);
(1.3.5) form the input training data V_j = (d_1, θ_1, ..., d_6, θ_6, d'_1, θ'_1, ..., d'_6, θ'_6, d_l, θ_l) for calibration point j and the corresponding output training data V'_j = (Δd, Δθ), which together form the training sample at time k, where Δd = d_r / d_l and Δθ = θ_r - θ_l;
(1.3.6) add j and j' to the match point set as a new match pair and iterate until all calibration points have been traversed.
4. The regression-based active appearance model initialization method according to claim 1, wherein step (4) is carried out as follows:
(4.1) dual-threshold initialization: set the threshold initial values η_1 = 1.5 and η_2 = 8; the loop counters iter_1 = 0 and iter_2 = 0; the loop limits iter_max_1 = 10 and iter_max_2 = 20; and the number of neighbouring match points t = 5;
(4.2) extract SIFT features from the previous face image I_1 and the current face image I_2;
(4.3) perform SIFT feature matching with threshold η_1 and set iter_1 = iter_1 + 1;
(4.4) if the number of matches is less than t and iter_1 < iter_max_1, set η_1 = η_1 + 0.15 and return to (4.3); if the number of matches is greater than t, or iter_1 > iter_max_1, proceed to (4.5);
(4.5) perform SIFT feature matching with threshold η_2 and set iter_2 = iter_2 + 1;
(4.6) if the number of matches is less than 2 and iter_2 < iter_max_2, set η_2 = η_2 + 0.02 and return to (4.5); if the number of matches is greater than 2 and less than 5, and iter_2 < iter_max_2, set η_2 = η_2 - 0.01 and return to (4.5); otherwise proceed to (4.7);
(4.7) obtain the dense match set B = {(b_j, b'_j), j = 1, 2, ..., q_b} produced with η_1, where q_b is the number of dense matches, and the exact match set A = {(a_i, a'_i), i = 1, 2, ..., q_a} produced with η_2, where q_a is the number of exact matches;
(4.8) compute the distance ratio of the first and last match pairs in the exact match set;
(4.9) for every match pair (b_j, b'_j) in the dense match set, first compute the corresponding distances, then compute θ = atan((a_t(y) - b_j(y)) / (a_t(x) - b_j(x))) and θ' = atan((a'_t(y) - b'_j(y)) / (a'_t(x) - b'_j(x))), together with the signs sig_x = sign(a_t(y) - b_j(y)), sig_y = sign(a_t(x) - b_j(x)), sig'_x = sign(a'_t(y) - b'_j(y)) and sig'_y = sign(a'_t(x) - b'_j(x)); if the distance ratio, angles and signs are consistent between the two images, keep the current match, otherwise delete it; finally obtain the final match set C = {(c_k, c'_k), k = 1, 2, ..., q_c}, where q_c is the number of final matches.
CN201310090347.5A 2013-03-10 2013-03-10 Regression-based active appearance model initialization method Pending CN104036229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310090347.5A CN104036229A (en) 2013-03-10 2013-03-10 Regression-based active appearance model initialization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310090347.5A CN104036229A (en) 2013-03-10 2013-03-10 Regression-based active appearance model initialization method

Publications (1)

Publication Number Publication Date
CN104036229A true CN104036229A (en) 2014-09-10

Family

ID=51466995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310090347.5A Pending CN104036229A (en) 2013-03-10 2013-03-10 Regression-based active appearance model initialization method

Country Status (1)

Country Link
CN (1) CN104036229A (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105528364A (en) * 2014-09-30 2016-04-27 株式会社日立制作所 Iterative video image retrieval method and device
CN105068748A (en) * 2015-08-12 2015-11-18 上海影随网络科技有限公司 User interface interaction method in camera real-time picture of intelligent touch screen equipment
CN108475424A (en) * 2016-07-12 2018-08-31 微软技术许可有限责任公司 Methods, devices and systems for 3D feature trackings
CN108475424B (en) * 2016-07-12 2023-08-29 微软技术许可有限责任公司 Method, apparatus and system for 3D face tracking
CN106682582A (en) * 2016-11-30 2017-05-17 吴怀宇 Compressed sensing appearance model-based face tracking method and system
CN107391851A (en) * 2017-07-26 2017-11-24 江南大学 A kind of glutamic acid fermentation process soft-measuring modeling method based on core ridge regression
CN109934042A (en) * 2017-12-15 2019-06-25 吉林大学 Adaptive video object behavior trajectory analysis method based on convolutional neural networks
CN108268840A (en) * 2018-01-10 2018-07-10 浙江大华技术股份有限公司 A kind of face tracking method and device
CN111348029A (en) * 2020-03-16 2020-06-30 吉林大学 Method for determining optimal value of calibration parameter of hybrid electric vehicle by considering working condition
CN111348029B (en) * 2020-03-16 2021-04-06 吉林大学 Method for determining optimal value of calibration parameter of hybrid electric vehicle by considering working condition
CN112800957A (en) * 2021-01-28 2021-05-14 内蒙古科技大学 Video pedestrian re-identification method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104036229A (en) Regression-based active appearance model initialization method
Niu et al. Unsupervised saliency detection of rail surface defects using stereoscopic images
Wang et al. Scene flow to action map: A new representation for rgb-d based action recognition with convolutional neural networks
CN103824050B (en) A kind of face key independent positioning method returned based on cascade
CN103208123B (en) Image partition method and system
CN102880877A (en) Target identification method based on contour features
CN103854283A (en) Mobile augmented reality tracking registration method based on online study
CN103839277A (en) Mobile augmented reality registration method of outdoor wide-range natural scene
CN101650777B (en) Corresponding three-dimensional face recognition method based on dense point
CN111460980B (en) Multi-scale detection method for small-target pedestrian based on multi-semantic feature fusion
CN104318569A (en) Space salient region extraction method based on depth variation model
CN105224937A (en) Based on the semantic color pedestrian of the fine granularity heavily recognition methods of human part position constraint
CN104517095A (en) Head division method based on depth image
CN105243376A (en) Living body detection method and device
Budvytis et al. Large scale labelled video data augmentation for semantic segmentation in driving scenarios
CN107657625A (en) Merge the unsupervised methods of video segmentation that space-time multiple features represent
CN102385690A (en) Target tracking method and system based on video image
CN102663351A (en) Face characteristic point automation calibration method based on conditional appearance model
Redondo-Cabrera et al. All together now: Simultaneous object detection and continuous pose estimation using a hough forest with probabilistic locally enhanced voting
CN103971112A (en) Image feature extracting method and device
CN104599291B (en) Infrared motion target detection method based on structural similarity and significance analysis
CN103824305A (en) Improved Meanshift target tracking method
Lu et al. A CNN-transformer hybrid model based on CSWin transformer for UAV image object detection
CN104143091A (en) Single-sample face recognition method based on improved mLBP
CN103955950A (en) Image tracking method utilizing key point feature matching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140910