CN1866272A - Feature point positioning method combined with active shape model and quick active appearance model - Google Patents

Feature point positioning method combined with active shape model and quick active appearance model Download PDF

Info

Publication number
CN1866272A
CN1866272A (application CN200610027975A)
Authority
CN
China
Prior art keywords
model
feature point
shape
point
Lucas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200610027975
Other languages
Chinese (zh)
Other versions
CN100383807C (en)
Inventor
杜春华
杨杰
吴证
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ruishi Machine Vision Technology Co.
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CNB2006100279759A priority Critical patent/CN100383807C/en
Publication of CN1866272A publication Critical patent/CN1866272A/en
Application granted granted Critical
Publication of CN100383807C publication Critical patent/CN100383807C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The disclosed feature point location method comprises: (1) building a Lucas AAM model, computing its initial parameters, and obtaining an initial position; (2) building an ASM model; (3) searching for the facial feature points with the Lucas AAM method, starting from the initial position of (1); and (4) searching for the new feature point positions with the ASM method, starting from the result of (3). The invention exploits the complementary strengths of the Lucas AAM and ASM methods and increases search speed.

Description

Feature point positioning method combining an active shape model and a fast active appearance model
Technical field
The present invention relates to a method in the field of image processing, specifically a feature point positioning method combining an active shape model with a fast active appearance model.
Background art
Research on human faces has attracted the attention of more and more researchers in recent years. Facial feature point location is a key link in the whole field of face research, and its accuracy directly affects every subsequent stage of face analysis.
A search of the prior art finds "Active Shape Models - Their Training and Application" by T. F. Cootes et al., Computer Vision and Image Understanding (1995, issue 1, p. 38), which proposes the ASM (active shape model) method. When ASM searches for a feature point's new position, it examines a one-dimensional profile perpendicular to the line joining the previous and next feature points, finds the centre of the sub-profile that minimises the Mahalanobis distance, and takes that centre as the feature point's new position. This method, however, is very sensitive to the feature points' initial positions: when a feature point starts close to its target the search accuracy is high, but when the initial position is far from the target the accuracy drops sharply. ASM therefore has high local search accuracy but low global search accuracy. Meanwhile, "Active Appearance Models Revisited" by Iain Matthews et al., International Journal of Computer Vision (2004, issue 2, p. 135), proposes a fast facial feature point location method that performs the AAM search on the image-alignment (Lucas-Kanade) principle. This method has very good global search behaviour: even when the initial feature points are far from the target, it can search to a position near it. Its local search, however, is inferior to ASM's, and as a rule it cannot locate the feature points with full accuracy.
Summary of the invention
Addressing these deficiencies of the prior art, the present invention proposes a facial feature point positioning method that combines ASM and Lucas AAM (fast active appearance model). Combining the two methods lets the strong global search of Lucas AAM compensate for ASM's low global search accuracy, while the strong local search of ASM compensates for Lucas AAM's poor local search; the two complement each other, so the feature point positions found can be very accurate. At the same time, because the Lucas AAM method is relatively slow, switching to the ASM method for the later stage also greatly increases the speed of the whole feature point search.
The present invention is achieved by the following technical solution, comprising the steps of:
(1) building the Lucas AAM model, computing its initial parameters, and giving the model's initial position;
(2) building the ASM model and the local textures of the feature points;
(3) searching for the facial feature points with the Lucas AAM method, starting from the initial position obtained in step (1);
(4) searching for the feature points with the ASM method, using the points found by the Lucas AAM search as the initial position.
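The four steps above form a coarse-to-fine pipeline. A minimal Python sketch of that flow, where `aam` and `asm` are hypothetical objects wrapping the trained models and every method name is an assumption rather than the patent's API:

```python
def locate_feature_points(image, aam, asm):
    """Coarse-to-fine facial feature point location, steps (1)-(4).

    `aam` and `asm` are hypothetical wrappers around the trained
    Lucas AAM and ASM models; their method names are illustrative.
    """
    p0 = aam.initial_position(image)        # step (1): eye-based initialisation
    coarse = aam.search(image, p0)          # step (3): global Lucas AAM search
    fine = asm.search(image, start=coarse)  # step (4): local ASM refinement
    return fine
```

The design point is that each stage only needs the previous stage's output as its starting position, so the two models stay independent.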
In said step (1), building the Lucas AAM model means the following. First, k principal facial feature points are selected on each training image of the training set; the shape they form is represented by a vector $x^{(i)} = [x_1, x_2, \ldots, x_k, y_1, y_2, \ldots, y_k]$, where feature points with the same index represent the same feature in different images. The n training images give n shape vectors, which are then aligned so that the shapes they represent are as close as possible in size, orientation and position. PCA (principal component analysis) is applied to the n aligned shape vectors, after which any shape can be expressed as $x = \bar{x} + Pb$ with $b = P^T(x - \bar{x})$, where b captures the first t largest modes of variation; this establishes the Lucas AAM shape model. Next, a piecewise-linear affine warp establishes the mapping between each training image's feature points and the mean shape $\bar{x}$; each training image is warped onto $\bar{x}$, and the grey values of the pixels inside the warped mean shape are stacked into a vector, the texture of that training image, whose length is the number of pixels inside $\bar{x}$. The n training images give n texture vectors, to which PCA is again applied, after which any texture can be expressed as $A(x) = A_0(x) + \sum_{i=1}^{m} \lambda_i A_i(x), \ \forall x \in \bar{x}$; this establishes the Lucas AAM texture model.
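The shape-model part of the step above (align, apply PCA, express any shape as $x = \bar{x} + Pb$) can be sketched with NumPy. This is a minimal illustration, not the patent's implementation: the alignment step is assumed already done, and all function names are hypothetical.

```python
import numpy as np

def build_shape_model(shapes, t):
    """PCA shape model from n aligned shape vectors.

    shapes: (n, 2k) array, each row [x1..xk, y1..yk] of aligned landmarks.
    t: number of principal modes of variation to keep.
    Returns (mean_shape, P) so any shape is approximated as mean + P @ b.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Eigen-decomposition of the sample covariance matrix.
    cov = centered.T @ centered / len(shapes)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]   # largest-variance modes first
    P = eigvecs[:, order[:t]]           # (2k, t) matrix of modes
    return mean, P

def shape_params(x, mean, P):
    """b = P^T (x - mean): project a shape onto the model."""
    return P.T @ (x - mean)

def reconstruct(b, mean, P):
    """x = mean + P b: rebuild a shape from its parameters."""
    return mean + P @ b
```

With t equal to the full dimension the reconstruction is exact; keeping only the first t modes gives the truncated model the patent describes.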
In said step (1), computing the initial parameters means: compute the steepest-descent images $\nabla A_0 \frac{\partial W}{\partial p}$, where $\nabla A_0$ is the gradient of the mean texture and $\frac{\partial W}{\partial p}$ is the Jacobian of the piecewise-linear affine warp; then compute the Hessian matrix $H = \sum_x \left[\nabla A_0 \frac{\partial W}{\partial p}\right]^T \left[\nabla A_0 \frac{\partial W}{\partial p}\right]$.
In said step (1), computing the model's initial position means: locate the two eyes on the face image with the variance projection function, and denote the midpoint of the two eye centres by [X1, Y1]. For the mean shape $\bar{x}$ obtained above, compute the centre of the four feature points around each eyeball as the left and right eye positions, giving their midpoint [X2, Y2]. The whole mean shape $\bar{x}$ is then translated by [X1-X2, Y1-Y2], which yields the model's initial position for the search.
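The translation [X1-X2, Y1-Y2] above is just the difference of two eye midpoints; a small sketch under the assumption that the eye centres have already been detected (the variance projection function itself is not shown):

```python
import numpy as np

def initial_translation(detected_eyes, model_eyes):
    """Translation mapping the model's eye midpoint onto the detected one.

    detected_eyes: (2, 2) array, detected left/right eye centres [X, Y].
    model_eyes:    (2, 2) array, mean-shape left/right eye centres.
    Returns [X1-X2, Y1-Y2], the vector added to every mean-shape landmark.
    """
    target_mid = detected_eyes.mean(axis=0)  # [X1, Y1]
    model_mid = model_eyes.mean(axis=0)      # [X2, Y2]
    return target_mid - model_mid
```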
Said step (2) means: the ASM shape model is built in the same way as the shape model in the previous step. In addition, a local texture is built for each feature point of the training images: centred on the current feature point, m pixels are selected on each side along the direction perpendicular to the line joining the previous and next feature points; the grey-value derivatives of these 2m+1 pixels are computed and normalised to give a profile (a vector of grey-value derivatives). Denoting by $g_{ij}$ the profile of the j-th feature point in the i-th shape vector, the mean profile of the j-th feature point is $\bar{g}_j = \frac{1}{n}\sum_{i=1}^{n} g_{ij}$ and its covariance is $C_j = \frac{1}{n}\sum_{i=1}^{n}(g_{ij}-\bar{g}_j)(g_{ij}-\bar{g}_j)^T$. Computing the mean and covariance of the profiles of all k feature points yields the local textures of the k feature points.
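The profile statistics of step (2) can be sketched as follows. The image sampling along the normal is assumed already done; the sketch also assumes, as one common convention, that 2m+2 grey samples are taken so the finite-difference profile has length 2m+1. Names are illustrative.

```python
import numpy as np

def normalized_profile(gray_values):
    """Grey samples along a landmark's normal -> normalised derivative profile."""
    g = np.diff(np.asarray(gray_values, dtype=float))
    norm = np.sum(np.abs(g))
    return g / norm if norm > 0 else g

def profile_statistics(profiles):
    """Mean profile and covariance of one landmark over n training images."""
    profiles = np.asarray(profiles, dtype=float)
    mean = profiles.mean(axis=0)
    centered = profiles - mean
    cov = centered.T @ centered / len(profiles)   # C_j in the text
    return mean, cov
```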
Said step (3) means: starting from the initial position obtained in step (1), search with the Lucas AAM method. The concrete steps are:
a) compute the current feature point positions from the current p via $x = \bar{x} + Pb$, warp the texture they enclose onto $\bar{x}$ with the piecewise-linear affine warp, and obtain a texture vector $I(W(x; p))$;
b) compute the difference image $I(W(x; p)) - A_0(x)$;
c) compute $\sum_x \left[\nabla A_0 \frac{\partial W}{\partial p}\right]^T \left[I(W(x; p)) - A_0(x)\right]$;
d) compute $\Delta p = H^{-1} \sum_x \left[\nabla A_0 \frac{\partial W}{\partial p}\right]^T \left[I(W(x; p)) - A_0(x)\right]$;
e) update the warp by $W(x; p) \leftarrow W(x; p) \circ W(x; \Delta p)^{-1}$ to obtain a new p.
After iterating these steps, the new shape, i.e. the feature point positions, is obtained from $x = \bar{x} + Pb$.
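Steps b) to d) above compute one parameter update of the inverse compositional search; a minimal sketch, with the steepest-descent images and inverse Hessian precomputed as in step (1). The warp composition of step e) depends on the piecewise-affine parameterisation and is left abstract here; all names are illustrative.

```python
import numpy as np

def inverse_compositional_step(error_image, steepest_descent, H_inv):
    """One parameter update: delta_p = H^-1 * SD^T * (I(W(x;p)) - A0).

    error_image:      (N,) difference image flattened over the mean shape.
    steepest_descent: (N, np) matrix whose columns are grad(A0) * dW/dp_j.
    H_inv:            (np, np) precomputed inverse Hessian.
    """
    return H_inv @ (steepest_descent.T @ error_image)
```

Because H and the steepest-descent images depend only on the mean texture, they are computed once offline, which is what makes this AAM search fast.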
Said step (4) means: using the search result of the previous step as the initial position, search for the feature points in the image with the ASM method. This search is realised mainly through an affine transformation and the variation of the parameter b, by iterating the following two steps:
a) compute the new position of each feature point
First the initial ASM model is placed on the image. For the j-th feature point of the model, l pixels (l > m) are selected on each side of it along the direction perpendicular to the line joining its previous and next feature points; the grey-value derivatives of these pixels are computed and normalised to give a search profile. Within this longer profile, each sub-profile of length 2m+1 is denoted temp(p), and an energy function $f_j(p) = (\mathrm{temp}(p)-\bar{g}_j)\, C_j^{-1}\, (\mathrm{temp}(p)-\bar{g}_j)^T$ measures the similarity between the current sub-profile and $\bar{g}_j$. The position minimising $f_j(p)$ is taken as the new position of the feature point, and its displacement $dX_j$ is computed. Doing this for every feature point gives k displacements $dX_i,\ i = 1, 2, \ldots, k$, which form a vector $dX = (dX_1, dX_2, \ldots, dX_k)$.
b) update the parameters of the affine transformation and b
From $X = M(s, \theta)[x] + X_c$ one requires $M(s(1+ds), \theta+d\theta)[x+dx] + (X_c+dX_c) = X+dX$, i.e. $M(s(1+ds), \theta+d\theta)[x+dx] = M(s, \theta)[x] + dX + X_c - (X_c+dX_c)$. Since $x = \bar{x} + Pb$, we seek db such that $x + dx = \bar{x} + P(b+db)$, giving $db = P^{-1}dx$. The parameters are then updated as $X_c = X_c + w_t dX_c$, $Y_c = Y_c + w_t dY_c$, $\theta = \theta + w_\theta d\theta$, $b = b + W_b db$, where $w_t, w_\theta, w_s, W_b$ are weights controlling the parameter changes. The new shape is then obtained from $x = \bar{x} + Pb$.
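The sub-profile selection of step a) is a sliding Mahalanobis search along the longer profile; a minimal sketch, assuming the search profile has already been sampled from the image. Names are illustrative.

```python
import numpy as np

def best_subprofile_offset(search_profile, mean_profile, cov_inv):
    """Offset of the window minimising f(p) = (temp - g) C^-1 (temp - g)^T.

    search_profile: (2l+1,) normalised derivative profile along the normal.
    mean_profile:   (2m+1,) trained mean profile g_j of this landmark.
    cov_inv:        (2m+1, 2m+1) inverse covariance C_j^-1.
    """
    w = len(mean_profile)
    best_off, best_f = 0, np.inf
    for off in range(len(search_profile) - w + 1):
        d = search_profile[off:off + w] - mean_profile
        f = d @ cov_inv @ d          # Mahalanobis energy of this sub-profile
        if f < best_f:
            best_off, best_f = off, f
    return best_off
```

The chosen offset, relative to the window centred on the current position, gives the displacement $dX_j$ used in step b).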
The proposed facial feature point positioning method combining ASM and Lucas AAM achieves very high accuracy. Lucas AAM is used first for a coarse search that finds the approximate positions of the feature points; the ASM method then performs a fine search starting from the positions found by the earlier stage. Under this scheme the search accuracy is very high and the search is unlikely to fall into a local minimum. Comparing the proposed combined method with the original ASM method on a face database collected by the authors, the average feature point location errors are 2.1 pixels and 4.5 pixels respectively. Experiments show that the proposed method is more accurate than other feature point location methods, and because the ASM method replaces the Lucas AAM method in the later search, its speed is also greatly improved.
Description of drawings
Fig. 1 is a face image with the feature points marked.
Fig. 2 is the result of eye location.
Fig. 3 is the initial position of the feature points.
Fig. 4 is the result obtained after the Lucas AAM search.
Fig. 5 is the result obtained after the ASM search.
Embodiment
The technical solution of the present invention is described in further detail below with reference to a specific embodiment.
The images used in the embodiment come from a face image database collected by the authors. The whole implementation process is as follows:
1. 500 face images with marked feature points are selected from the face database to build the models; a face image with marked feature points is shown in Fig. 1. Applying principal component analysis to the corresponding 500 shape vectors and 500 texture vectors completes the building of the Lucas AAM model: any shape can be expressed as $x = \bar{x} + Pb$, and any texture as $A(x) = A_0(x) + \sum_{i=1}^{m} \lambda_i A_i(x), \ \forall x \in \bar{x}$. The initial parameters are then computed: the steepest-descent images $\nabla A_0 \frac{\partial W}{\partial p}$, where $\nabla A_0$ is the gradient of the mean texture and $\frac{\partial W}{\partial p}$ is the Jacobian of the piecewise-linear affine warp, and the Hessian matrix $H = \sum_x \left[\nabla A_0 \frac{\partial W}{\partial p}\right]^T \left[\nabla A_0 \frac{\partial W}{\partial p}\right]$.
The model's initial position is computed: the variance projection function locates the two eyes on the face image at [274, 229] and [371, 228], as shown in Fig. 2, giving the eye midpoint [322.5, 228.5]. For the mean shape $\bar{x}$ obtained above, the centres of the four feature points around the left and right eyeballs give the eye positions [-52.79, -48.76] and [52.03, -49.22] respectively, whose midpoint is [-0.37, -48.99]. The whole mean shape $\bar{x}$ is then translated by [322.5-(-0.37), 228.5-(-48.99)], which gives the model's initial position, as shown in Fig. 3.
2. The ASM shape model is built in the same way as the shape model in the previous step. A local texture is also built for each feature point of the training images: centred on the current feature point, 5 pixels are selected on each side along the direction perpendicular to the line joining the previous and next feature points, and the grey-value derivatives of these 11 (2×5+1) pixels are computed and normalised to give a profile. Denoting by $g_{ij}$ the profile of the j-th feature point in the i-th shape vector, the mean profile of the j-th feature point is $\bar{g}_j = \frac{1}{n}\sum_{i=1}^{n} g_{ij}$ and its covariance is $C_j = \frac{1}{n}\sum_{i=1}^{n}(g_{ij}-\bar{g}_j)(g_{ij}-\bar{g}_j)^T$. Computing the mean and covariance of the profiles of all 60 feature points yields the local textures of the 60 feature points.
3. Starting from the initial position obtained in the first step, the Lucas AAM search is run for 30 iterations; the result is shown in Fig. 4.
4. Using the search result of the previous step as the initial position, the ASM search is performed in the image; after 5 iterations the 60 feature points are finally located, as shown in Fig. 5.
Experiments show that the proposed method improves considerably on both of the original methods in accuracy and speed.

Claims (7)

1. A feature point positioning method combining an active shape model and a fast active appearance model, characterised by comprising the steps of:
(1) building the Lucas AAM model, computing its initial parameters, and giving the model's initial position;
(2) building the ASM model and the local textures of the feature points;
(3) searching for the facial feature points with the Lucas AAM method, starting from the initial position obtained in step (1);
(4) searching for the feature points with the ASM method, using the points found by the Lucas AAM search as the initial position.
2. The feature point positioning method combining an active shape model and a fast active appearance model according to claim 1, characterised in that in said step (1), building the Lucas AAM model means: first selecting k principal facial feature points on each sample image of the training set, the shape they form being represented by a vector $x^{(i)} = [x_1, x_2, \ldots, x_k, y_1, y_2, \ldots, y_k]$, feature points with the same index representing the same feature in different images; the n sample images giving n shape vectors, which are aligned so that the shapes they represent are as close as possible in size, orientation and position; applying principal component analysis to the n aligned shape vectors, after which any shape is expressed as $x = \bar{x} + Pb$ with $b = P^T(x - \bar{x})$, where b captures the first t largest modes of variation, thereby establishing the Lucas AAM shape model; then establishing, with a piecewise-linear affine warp, the mapping between each sample image's feature points and the mean shape $\bar{x}$, warping each sample image onto $\bar{x}$, and stacking the grey values of the pixels inside the warped mean shape into a vector, the texture of that training image, whose length is the number of pixels inside $\bar{x}$; the n training images giving n texture vectors, to which principal component analysis is applied, after which any texture is expressed as $A(x) = A_0(x) + \sum_{i=1}^{m} \lambda_i A_i(x), \ \forall x \in \bar{x}$, thereby establishing the Lucas AAM texture model.
3. The feature point positioning method combining an active shape model and a fast active appearance model according to claim 1, characterised in that in said step (1), computing the initial parameters means: computing the steepest-descent images $\nabla A_0 \frac{\partial W}{\partial p}$, where $\nabla A_0$ is the gradient of the mean texture and $\frac{\partial W}{\partial p}$ is the Jacobian of the piecewise-linear affine warp, and computing the Hessian matrix by $H = \sum_x \left[\nabla A_0 \frac{\partial W}{\partial p}\right]^T \left[\nabla A_0 \frac{\partial W}{\partial p}\right]$.
4. The feature point positioning method combining an active shape model and a fast active appearance model according to claim 1, characterised in that in said step (1), computing the model's initial position means: locating the two eyes with the variance projection function and denoting the midpoint of the two eye centres by [X1, Y1]; for the mean shape $\bar{x}$ obtained above, computing the centre of the four feature points around each eyeball as the left and right eye positions, giving their midpoint [X2, Y2]; then translating the whole mean shape $\bar{x}$ by [X1-X2, Y1-Y2] to obtain the model's initial position.
5. The feature point positioning method combining an active shape model and a fast active appearance model according to claim 1, characterised in that said step (2) means: the ASM shape model is built in the same way as the shape model in the previous step; in addition a local texture is built for each feature point of the training images: centred on the current feature point, m pixels are selected on each side along the direction perpendicular to the line joining the previous and next feature points, and the grey-value derivatives of these 2m+1 pixels are computed and normalised to give a profile; denoting by $g_{ij}$ the profile of the j-th feature point in the i-th shape vector, the mean profile of the j-th feature point is $\bar{g}_j = \frac{1}{n}\sum_{i=1}^{n} g_{ij}$ and its covariance is $C_j = \frac{1}{n}\sum_{i=1}^{n}(g_{ij}-\bar{g}_j)(g_{ij}-\bar{g}_j)^T$; computing the mean and covariance of the profiles of all k feature points yields the local textures of the k feature points.
6. The feature point positioning method combining an active shape model and a fast active appearance model according to claim 1, characterised in that said step (3) means: starting from the model's initial position, searching with the Lucas AAM method, the concrete steps being:
a) computing the current feature point positions from the current p via $x = \bar{x} + Pb$, warping the texture they enclose onto $\bar{x}$ with the piecewise-linear affine warp, and obtaining a texture vector $I(W(x; p))$;
b) computing the difference image $I(W(x; p)) - A_0(x)$;
c) computing $\sum_x \left[\nabla A_0 \frac{\partial W}{\partial p}\right]^T \left[I(W(x; p)) - A_0(x)\right]$;
d) computing $\Delta p = H^{-1} \sum_x \left[\nabla A_0 \frac{\partial W}{\partial p}\right]^T \left[I(W(x; p)) - A_0(x)\right]$;
e) updating the warp by $W(x; p) \leftarrow W(x; p) \circ W(x; \Delta p)^{-1}$ to obtain a new p;
after iterating these steps, the new shape, i.e. the feature point positions, is obtained from $x = \bar{x} + Pb$.
7. The feature point positioning method combining an active shape model and a fast active appearance model according to claim 1, characterised in that said step (4) means: using the result of the Lucas AAM search as the initial position, searching for the feature points in the image with the ASM method, this search being realised mainly through an affine transformation and the variation of the parameter b, by iterating the following two steps:
a) computing the new position of each feature point: for the j-th feature point of the model, l pixels (l > m) are selected on each side of it along the direction perpendicular to the line joining its previous and next feature points, and the grey-value derivatives of these pixels are computed and normalised to give a search profile; within this longer profile each sub-profile of length 2m+1 is denoted temp(p), and an energy function $f_j(p) = (\mathrm{temp}(p)-\bar{g}_j)\, C_j^{-1}\, (\mathrm{temp}(p)-\bar{g}_j)^T$ measures the similarity between the current sub-profile and $\bar{g}_j$; the position minimising $f_j(p)$ is taken as the new position of the feature point and its displacement $dX_j$ is computed; doing this for every feature point gives k displacements $dX_i,\ i = 1, 2, \ldots, k$, forming a vector $dX = (dX_1, dX_2, \ldots, dX_k)$;
b) updating the parameters of the affine transformation and b: from $X = M(s, \theta)[x] + X_c$ one requires $M(s(1+ds), \theta+d\theta)[x+dx] + (X_c+dX_c) = X+dX$, i.e. $M(s(1+ds), \theta+d\theta)[x+dx] = M(s, \theta)[x] + dX + X_c - (X_c+dX_c)$; since $x = \bar{x} + Pb$, db is sought such that $x + dx = \bar{x} + P(b+db)$, giving $db = P^{-1}dx$; the parameters are then updated as $X_c = X_c + w_t dX_c$, $Y_c = Y_c + w_t dY_c$, $\theta = \theta + w_\theta d\theta$, $b = b + W_b db$, where $w_t, w_\theta, w_s, W_b$ are weights controlling the parameter changes, and the new shape is obtained from $x = \bar{x} + Pb$.
CNB2006100279759A 2006-06-22 2006-06-22 Feature point positioning method combined with active shape model and quick active appearance model Expired - Fee Related CN100383807C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100279759A CN100383807C (en) 2006-06-22 2006-06-22 Feature point positioning method combined with active shape model and quick active appearance model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100279759A CN100383807C (en) 2006-06-22 2006-06-22 Feature point positioning method combined with active shape model and quick active appearance model

Publications (2)

Publication Number Publication Date
CN1866272A true CN1866272A (en) 2006-11-22
CN100383807C CN100383807C (en) 2008-04-23

Family

ID=37425290

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100279759A Expired - Fee Related CN100383807C (en) 2006-06-22 2006-06-22 Feature point positioning method combined with active shape model and quick active appearance model

Country Status (1)

Country Link
CN (1) CN100383807C (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012135979A1 (en) * 2011-04-08 2012-10-11 Nokia Corporation Method, apparatus and computer program product for providing multi-view face alignment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100433621B1 (en) * 2001-08-09 2004-05-31 한국전자통신연구원 Multi layer internet protocol(MLIP) for peer to peer service of private internet and method for transmitting/receiving the MLIP packet
CN1403997A (en) * 2001-09-07 2003-03-19 昆明利普机器视觉工程有限公司 Automatic face-recognizing digital video system
CN1313979C (en) * 2002-05-03 2007-05-02 三星电子株式会社 Apparatus and method for generating 3-D cartoon
CN1731416A (en) * 2005-08-04 2006-02-08 上海交通大学 Method of quick and accurate human face feature point positioning

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763500B (en) * 2008-12-24 2011-09-28 中国科学院半导体研究所 Method applied to palm shape extraction and feature positioning in high-freedom degree palm image
CN101635028A (en) * 2009-06-01 2010-01-27 北京中星微电子有限公司 Image detecting method and image detecting device
CN101989354B (en) * 2009-08-06 2012-11-14 Tcl集团股份有限公司 Corresponding point searching method of active shape model and terminal equipment
CN102486376A (en) * 2010-12-04 2012-06-06 鸿富锦精密工业(深圳)有限公司 Image different position annotation system and method
CN102663351A (en) * 2012-03-16 2012-09-12 江南大学 Face characteristic point automation calibration method based on conditional appearance model
CN102831388B (en) * 2012-05-23 2015-10-14 上海交通大学 Based on real-time characteristic point detecting method and the system of the moving shape model of expansion
CN102831388A (en) * 2012-05-23 2012-12-19 上海交通大学 Method and system for detecting real-time characteristic point based on expanded active shape model
WO2014187223A1 (en) * 2013-05-21 2014-11-27 Tencent Technology (Shenzhen) Company Limited Method and apparatus for identifying facial features
CN104182718A (en) * 2013-05-21 2014-12-03 腾讯科技(深圳)有限公司 Human face feature point positioning method and device thereof
US9355302B2 (en) 2013-05-21 2016-05-31 Tencent Technology (Shenzhen) Company Limited Method and electronic equipment for identifying facial features
CN104182718B (en) * 2013-05-21 2019-02-12 深圳市腾讯计算机系统有限公司 A kind of man face characteristic point positioning method and device
CN105608710A (en) * 2015-12-14 2016-05-25 四川长虹电器股份有限公司 Non-rigid face detection and tracking positioning method
CN105608710B (en) * 2015-12-14 2018-10-19 四川长虹电器股份有限公司 A kind of non-rigid Face datection and tracking positioning method

Also Published As

Publication number Publication date
CN100383807C (en) 2008-04-23

Similar Documents

Publication Publication Date Title
CN1866272A (en) Feature point positioning method combined with active shape model and quick active appearance model
CN1786980A Method for searching new positions of facial feature points by two-dimensional profiles
CN1731416A (en) Method of quick and accurate human face feature point positioning
CN106097322B (en) A kind of vision system calibration method based on neural network
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
CN110146099B (en) Synchronous positioning and map construction method based on deep learning
CN1776711A (en) Method for searching new position of feature point using support vector processor multiclass classifier
CN109308693A (en) By the target detection and pose measurement list binocular vision system of a ptz camera building
CN112364931B (en) Few-sample target detection method and network system based on meta-feature and weight adjustment
CN104700412B (en) A kind of calculation method of visual saliency map
CN110136177B (en) Image registration method, device and storage medium
CN104091350B (en) A kind of object tracking methods of utilization motion blur information
CN110929748A (en) Motion blur image feature matching method based on deep learning
KR20130098824A (en) 3d facial pose and expression estimating method using aam and estimated depth information
CN107563323A (en) A kind of video human face characteristic point positioning method
CN111998862A (en) Dense binocular SLAM method based on BNN
CN109584347B (en) Augmented reality virtual and real occlusion processing method based on active appearance model
CN103593639A (en) Lip detection and tracking method and device
CN105869153A (en) Non-rigid face image registering method integrated with related block information
CN112184785B (en) Multi-mode remote sensing image registration method based on MCD measurement and VTM
CN111950599B (en) Dense visual odometer method for fusing edge information in dynamic environment
CN111899284B (en) Planar target tracking method based on parameterized ESM network
CN111862236A (en) Fixed-focus binocular camera self-calibration method and system
CN109492530B (en) Robust visual object tracking method based on depth multi-scale space-time characteristics
CN111145216A (en) Tracking method of video image target

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHANGHAI RUISHI MACHINE VISION TECHNOLOGY CO.,LTD.

Free format text: FORMER OWNER: SHANGHAI JIAO TONG UNIVERSITY

Effective date: 20100920

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200240 NO.800, DONGCHUAN ROAD, MINHANG DISTRICT, SHANGHAI TO: 200433 BUILDING 11, NO.248, DAXUE ROAD, YANGPU DISTRICT, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20100920

Address after: 11, 200433, 248 Yangpu District University Road, Shanghai

Patentee after: Shanghai Ruishi Machine Vision Technology Co.

Address before: 200240 Dongchuan Road, Shanghai, No. 800, No.

Patentee before: Shanghai Jiao Tong University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080423

Termination date: 20190622

CF01 Termination of patent right due to non-payment of annual fee