CN1687957A - Facial feature point localization method combining local search and an active appearance model - Google Patents

Facial feature point localization method combining local search and an active appearance model

Info

Publication number
CN1687957A
CN 200510026388, CN200510026388A, CN1687957A
Authority
CN
China
Prior art keywords
vector
face
image
shape
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200510026388
Other languages
Chinese (zh)
Inventor
杨杰 (Yang Jie)
杜春华 (Du Chunhua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN 200510026388 priority Critical patent/CN1687957A/en
Publication of CN1687957A publication Critical patent/CN1687957A/en
Pending legal-status Critical Current

Links

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention is a facial feature point localization method that combines local search with an active appearance model (AAM). First, a set of face images with annotated feature point coordinates is used as training samples to build the active appearance model. A new face image is then subjected to face detection to obtain a rectangular region containing the face, and the eyes and mouth are located within that region. With the eye and mouth positions as the initial position, the active appearance model searches the image and finally locates a large number of facial feature points, completing the overall localization of the facial feature points. The method can further be applied to face recognition, gender recognition, expression recognition, age estimation, and related tasks.

Description

Facial feature point localization method combining local search and an active appearance model
Technical field
The present invention relates to a facial feature point localization method combining local search with an active appearance model (AAM), applicable to face recognition, expression recognition, gender recognition, and age estimation. The method involves the fields of image processing, mathematical modeling, and statistical analysis.
Background art
Facial feature point detection is the most critical technology in applications such as face recognition, expression recognition, gender recognition, and age estimation: the accuracy of the located feature point positions directly affects the precision of recognition, so locating facial feature points accurately can greatly improve recognition accuracy. The principal facial feature points include the pupil centers, eye corners, mouth corners, nose tip, and edge points along the jaw. Relying on these points alone is far from sufficient for face recognition, so additional feature points must also be found, such as the eyebrow heads, eyebrow tails, eyebrow peaks, the nose bridge, the lip valley, and the lip peaks; finding all of these feature points simultaneously, however, is very difficult.
A search of the prior art literature finds that Rein-Lien Hsu et al. (Rein-Lien Hsu, Mohamed Abdel-Mottaleb, Anil K. Jain, "Face Detection in Color Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, May 2002) derived, from statistics over a large number of samples, the distribution of the eye region in each component of the YCbCr color space, and used these distributions to roughly locate the positions of the eyes and mouth. However, such methods find too few feature points, far fewer than face recognition requires. T. Cootes et al. (T. Cootes, G. Edwards, and C. Taylor, "Active Appearance Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, June 2001) proposed the active appearance model (AAM) for facial feature point localization. In this global feature point search method, feature points are first annotated manually on a set of face images; statistical analysis of the annotated positions yields a general mean shape model; pixels are then sampled from each face image according to this shape and analyzed statistically to obtain a texture model; finally, statistical analysis of the shape and texture models together yields the final appearance model. This method, however, depends strongly on the chosen initial position of the appearance model: if the initial position is near the desired feature points, the model finds them easily in very few iterations, but if it is far from them, a large number of iterations is needed, and the search may become trapped at a wrong position and fail to produce a correct localization.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a facial feature point localization method that combines local search with an active appearance model. Combining these two classes of methods yields a new, fast facial feature point localization method that both increases the number of feature points found and improves robustness. The resulting method can be used in face recognition, gender recognition, expression recognition, age estimation, and related fields.
The present invention is achieved by the following technical solution. An active appearance model is first built from a set of face images used as training samples. Face detection and localization of the eyes and mouth are then performed: a new face image is subjected to face detection to obtain a rectangular region containing the face, and the eyes and mouth are located within this region. With the eye and mouth positions as the initial position, the active appearance model searches the image and finally locates a large number of facial feature points, completing the overall localization of the facial feature points.
Building the active appearance model means the following. First, n face images are selected at random from a face database, and k feature points are annotated manually on each selected image. Each image thus yields a one-dimensional vector of 2k elements: the first k elements are the x coordinates of the k feature points and the last k elements are their y coordinates, so the n images yield n such vectors. These n vectors are aligned (corrected), and principal component analysis (PCA) is applied to obtain a mean shape model mean_shape and a matrix Bs. Then, with mean_shape as the target shape, a correspondence is established between the feature points of each training image and those of the mean shape, and a corner-block (triangular-patch) based warping method deforms each original face region to the target shape using this correspondence. Pixels are sampled from each warped image to obtain a texture vector; applying PCA to the n texture vectors yields the mean texture model mean_tex and a matrix Bg. Since Bs is an m × n matrix and Bg an h × n matrix, they are stacked into an (m+h) × n matrix B; applying PCA to B yields an a × n matrix Ba, with a much smaller than (m+h). Each column of Ba represents the variation in shape and gray level, i.e., the appearance variation, of the corresponding image. This yields the active appearance model (AAM) and completes model construction.
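As an illustration of the final stacking-and-PCA step, the sketch below (NumPy; the function name and the variance cutoff are our own choices, not the patent's) treats Bs and Bg as per-image shape and texture parameter matrices (m × n and h × n), stacks them, and applies a third PCA. The patent states only that PCA is applied and that a is much smaller than m + h; the centering step and the returned basis shape are our assumptions.

```python
import numpy as np

def build_appearance_model(Bs, Bg, var_keep=0.98):
    """Stack per-image shape (m x n) and texture (h x n) parameters and run a
    third PCA; returns the combined appearance basis Ba with a retained modes."""
    B = np.vstack([np.asarray(Bs, float), np.asarray(Bg, float)])  # (m+h) x n
    mean = B.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(B - mean, full_matrices=False)
    var = S ** 2                                   # variance along each mode
    a = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    return U[:, :a]                                # one column per retained mode
```

In a standard AAM the shape parameters would also be weighted before stacking so that shape and gray-level units are commensurate; the patent does not mention such a weighting, so none is applied here.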
The shape model means the following. The n vectors are aligned to the first vector by affine transformation, comprising rotation, translation, and scaling, yielding n new vectors. The mean of these n new vectors is computed and itself aligned to the first vector by affine transformation, giving a new mean. With this mean as the reference shape, the n vectors are again aligned to it by affine transformation, and this process is repeated until convergence. The final n shape vectors are assembled into a 2k × n matrix, and principal component analysis (PCA) is applied to obtain an m × n matrix Bs, with m much smaller than 2k, and a 2k × 1 vector mean_shape. Each column of Bs represents the variation of the feature point coordinates in the corresponding image, and mean_shape is the mean of the n vectors, i.e., the mean shape. This yields the shape model.
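The iterative alignment just described is, in essence, generalized Procrustes analysis. The following NumPy sketch works under that reading; the function names, the convergence test, and the SVD-based similarity fit are our assumptions, not text from the patent. Applying PCA to the aligned vectors, as columns of a 2k × n matrix, then yields mean_shape and Bs.

```python
import numpy as np

def similarity_align(shape, ref):
    """Align one 2k shape vector [x1..xk, y1..yk] to ref by rotation,
    scale, and translation (orthogonal Procrustes with scaling)."""
    k = shape.size // 2
    P, Q = shape.reshape(2, k).T, ref.reshape(2, k).T   # k x 2 point sets
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)               # center both
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)                 # 2 x 2 cross-covariance
    R, s = U @ Vt, S.sum() / (Pc ** 2).sum()            # rotation and scale
    return (s * Pc @ R + Q.mean(0)).T.reshape(-1)

def align_shapes(shapes, iters=20):
    """Generalized Procrustes: align all shapes to an evolving mean shape."""
    shapes = [np.asarray(s, float) for s in shapes]
    ref = shapes[0]
    for _ in range(iters):
        aligned = np.array([similarity_align(s, ref) for s in shapes])
        # re-anchor the mean to the first shape to fix the overall pose
        new_ref = similarity_align(aligned.mean(0), shapes[0])
        if np.allclose(new_ref, ref):
            break
        ref = new_ref
    return aligned, aligned.mean(0)
```

If the input shapes differ only by rotation, scale, and translation, the aligned vectors coincide and the mean is the common underlying shape.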
The texture model means the following. Each warped image yields a corresponding texture vector of s elements, each element representing the gray value of one pixel, so the n images yield n such vectors. These are assembled into an s × n matrix, and PCA is applied to obtain an h × n matrix Bg, with h much smaller than s, and an s × 1 vector mean_tex. Each column of Bg represents the variation of the pixel gray values in the corresponding image, and mean_tex is the mean of the n vectors, i.e., the mean texture. This yields the texture model.
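A minimal sketch of the texture PCA follows (NumPy; the function name and the 95% variance cutoff are our assumptions). It returns the mean texture, the retained eigen-texture basis, and the per-image texture parameters, which matches the patent's Bg under the per-image-parameters reading.

```python
import numpy as np

def build_texture_model(textures, var_keep=0.95):
    """PCA over n sampled texture vectors (length s each).
    Returns mean_tex (s,), the basis Phi_g (s x h), and parameters Bg (h x n)."""
    T = np.asarray(textures, float).T              # s x n, one column per image
    mean_tex = T.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(T - mean_tex, full_matrices=False)
    var = S ** 2
    h = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    Phi = U[:, :h]                                 # retained eigen-textures
    return mean_tex.ravel(), Phi, Phi.T @ (T - mean_tex)
```

With the full variance retained, mean_tex plus the basis times the parameters reconstructs the training textures exactly.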
Face detection and the localization of the eyes and mouth mean the following. A statistical analysis of a large number of samples gives the distributions of skin color, eyes, and mouth in the YCbCr color space. Candidate skin regions are first found from the skin-color distribution; each candidate skin region is then analyzed against the eye and mouth distributions. If a region contains both eyes and a mouth, it is taken to be a face region. This both completes face detection and gives the approximate positions of the eyes and mouth.
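As an illustration, pixel classification by YCbCr thresholds might look like the sketch below. The RGB-to-YCbCr conversion is the standard ITU-R BT.601 form; the Cb/Cr skin bounds are common heuristic values, not the ranges the patent derives from its own sample statistics, and the eye/mouth ranges would be obtained the same way.

```python
import numpy as np

# Illustrative thresholds only: the patent derives its ranges statistically;
# these Cb/Cr bounds are a widely used skin-tone heuristic, not its values.
SKIN_CB, SKIN_CR = (77, 127), (133, 173)

def rgb_to_ycbcr(img):
    """img: H x W x 3 RGB array -> (Y, Cb, Cr) float arrays (ITU-R BT.601)."""
    img = img.astype(float)
    y  = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    cb = 128 - 0.168736 * img[..., 0] - 0.331264 * img[..., 1] + 0.5 * img[..., 2]
    cr = 128 + 0.5 * img[..., 0] - 0.418688 * img[..., 1] - 0.081312 * img[..., 2]
    return y, cb, cr

def skin_mask(img):
    """Boolean mask of candidate skin pixels by thresholding Cb and Cr."""
    _, cb, cr = rgb_to_ycbcr(img)
    return ((SKIN_CB[0] <= cb) & (cb <= SKIN_CB[1]) &
            (SKIN_CR[0] <= cr) & (cr <= SKIN_CR[1]))
```

Connected components of the mask would then be tested for the presence of eye-range and mouth-range pixels, as the text describes.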
Searching with the active appearance model from the eye and mouth positions means the following. The positions of the eyes and mouth serve as the initial search position of the appearance model. The active appearance model search method then iterates repeatedly to find the position on the face image that best matches the appearance model; this final position completes the search for the entire set of feature points.
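The patent specifies only that the pose and appearance parameters are adjusted until the texture difference falls below a threshold. The sketch below uses the classic fixed-Jacobian AAM update of Cootes et al., dp = -pinv(J) @ diff, applied to an abstract sample_texture function rather than a real image; the names and the stopping rule are our assumptions.

```python
import numpy as np

def aam_search(sample_texture, mean_tex, J, p0, tol=1e-3, max_iter=50):
    """Fixed-Jacobian AAM search: repeatedly sample the image at the current
    parameters p, compare with the mean texture, and update p to shrink the
    residual. J is the (precomputed) Jacobian of the texture w.r.t. p."""
    R = np.linalg.pinv(J)                      # precomputed update matrix
    p = np.asarray(p0, float)
    for _ in range(max_iter):
        diff = sample_texture(p) - mean_tex    # texture residual at p
        if np.linalg.norm(diff) < tol:         # diff below threshold: done
            break
        p = p - R @ diff                       # parameter update step
    return p
```

On a texture that depends linearly on the parameters, this converges in a single step; on real images several iterations are needed, as the patent notes.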
The method of the present invention achieves high accuracy and fast speed. Because face detection first localizes the face region, and the eyes and mouth located within that region provide the initial position for the AAM search, the speed of feature point localization is further improved without sacrificing initialization precision. The invention combines the high precision of local feature point search with the ability of the active appearance model search to locate a large number of feature points with good robustness.
Embodiment
The technical solution of the present invention is described in further detail below in conjunction with a specific embodiment.
The face images used in the embodiment come from a collected face image database. The whole implementation of the invention proceeds as follows:
1. Build the AAM model. Select n images from the face database as training samples and annotate k feature points manually on each selected image; the feature points of each image form a vector. Align the n vectors to the first vector by affine transformation, comprising rotation, translation, and scaling, yielding n new vectors; compute their mean and align it to the first vector by affine transformation, giving a new mean; with this mean as the reference shape, align the n vectors to it by affine transformation, and repeat this process until convergence. Assemble the final n shape vectors into a 2k × n matrix and apply principal component analysis (PCA) to obtain an m × n matrix Bs, with m much smaller than 2k, and a 2k × 1 vector mean_shape; each column of Bs represents the variation of the feature point coordinates in the corresponding image, and mean_shape, the mean of the n vectors, is the mean shape. This yields the shape model. Next, with mean_shape as the target shape, establish for each training image the correspondence between its feature points and those of the mean shape, and use a corner-block (triangular-patch) based warping method to deform each original face region to the target shape. Sample pixels from each warped image to obtain a texture vector of s elements, each element a pixel gray value; the n images give n such texture vectors. Apply PCA to obtain an h × n matrix Bg, with h much smaller than s, and an s × 1 vector mean_tex; each column of Bg represents the variation of the pixel gray values in the corresponding image, and mean_tex, the mean of the n vectors, is the mean texture. This yields the texture model. Finally, since Bs is m × n and Bg is h × n, stack them into an (m+h) × n matrix B and apply PCA to obtain an a × n matrix Ba, with a much smaller than (m+h); each column of Ba represents the variation in shape and gray level, i.e., the appearance variation, of the corresponding image. This yields the active appearance model (AAM) and completes model construction.
2. Perform a statistical analysis of a large number of face image samples to obtain the distributions of skin color, eyes, and mouth in the YCbCr space, i.e., their threshold ranges [minskin maxskin], [mineye maxeye], and [minmouth maxmouth] in the YCbCr color space. For each pixel in the image, compute its value val in the YCbCr space and determine which threshold range it belongs to; for example, if val lies in [mineye maxeye], the pixel is classified as belonging to an eye region, and so on. Applying this operation to every pixel in the image gives the approximate positions of the skin, eyes, and mouth.
3. Use the approximate eye and mouth positions as the initial position of the AAM and search with the AAM method to find the final positions of the facial feature points. The approximate eye and mouth positions found in the previous step serve as the initial position of the AAM model. Place the trained appearance model at this initial position; fit an image patch of the model's size and convert it into a vector, and compute the difference diff between this vector and the mean texture mean_tex. Then adjust the affine transformation parameters (scaling, translation, rotation) and the parameters in Ba to reduce diff. Repeat these steps until diff falls below a threshold, which completes the AAM search and the localization of the feature points.

Claims (6)

1. A facial feature point localization method combining local search and an active appearance model, characterized in that an active appearance model is first built from a set of face images with annotated feature point coordinates used as training samples; face detection and localization of the eyes and mouth are then performed: a new face image is subjected to face detection to obtain a rectangular region containing the face, and the eyes and mouth are located within this region; with the eye and mouth positions as the initial position, the active appearance model searches the image and finally locates a large number of facial feature points, completing the overall localization of the facial feature points.
2. The facial feature point localization method combining local search and an active appearance model according to claim 1, characterized in that building the active appearance model means: first, n face images are selected at random from a face database and k feature points are annotated manually on each selected image, so that each image yields a one-dimensional vector of 2k elements, the first k elements being the x coordinates of the k feature points and the last k elements their y coordinates, the n images yielding n such vectors; the n vectors are subjected to affine transformation and alignment, and PCA is then applied to obtain a shape model comprising an m × n matrix Bs and a 2k × 1 vector mean_shape; then, with mean_shape as the target shape, a correspondence is established between the feature points of each training image and those of the mean shape, a corner-block based warping method deforms each original face region to the target shape using this correspondence, and pixels are sampled from each warped image to obtain a texture vector, the n images yielding n such vectors; PCA applied to these n vectors gives a texture model comprising an h × n matrix Bg and an s × 1 vector mean_tex; since Bs is m × n and Bg is h × n, they are stacked into an (m+h) × n matrix B, and PCA applied to B yields an a × n matrix Ba, with a much smaller than (m+h), each column of Ba representing the variation in shape and gray level, i.e., the appearance variation, of the corresponding image; this yields the active appearance model (AAM) and completes its construction.
3. The facial feature point localization method combining local search and an active appearance model according to claim 2, characterized in that the shape model means: the n vectors are aligned to the first vector by affine transformation, comprising rotation, translation, and scaling, yielding n new vectors; the mean of these n vectors is computed and aligned to the first vector by affine transformation, giving a new mean; with this mean as the reference shape, the n new vectors are aligned to it by affine transformation, and this process is repeated until convergence; the final n shape vectors are assembled into a 2k × n matrix, and principal component analysis is applied to obtain an m × n matrix Bs, with m much smaller than 2k, and a 2k × 1 vector mean_shape; each column of Bs represents the variation of the feature point coordinates in the corresponding image, and mean_shape is the mean of the n vectors, i.e., the mean shape; this yields the shape model.
4. The facial feature point localization method combining local search and an active appearance model according to claim 2, characterized in that the texture model means: each warped image yields a corresponding texture vector of s elements, each element representing a pixel gray value, so the n images yield n such vectors; these are assembled into an s × n matrix, and PCA is applied to obtain an h × n matrix Bg, with h much smaller than s, and an s × 1 vector mean_tex; each column of Bg represents the variation of the pixel gray values in the corresponding image, and mean_tex is the mean of the n vectors, i.e., the mean texture; this yields the texture model.
5. The facial feature point localization method combining local search and an active appearance model according to claim 1, characterized in that face detection and the localization of the eyes and mouth mean: a statistical analysis of a large number of samples gives the distributions of skin color, eyes, and mouth in the YCbCr space; candidate skin regions are first found from the skin-color distribution, and each candidate skin region is then analyzed against the eye and mouth distributions; if a region contains both eyes and a mouth, it is taken to be a face region, which both completes face detection and gives the approximate positions of the eyes and mouth.
6. The facial feature point localization method combining local search and an active appearance model according to claim 1, characterized in that searching with the active appearance model from the eye and mouth positions means: the positions of the eyes and mouth serve as the initial search position of the appearance model; the active appearance model search method then iterates repeatedly to find the position on the face image that best matches the appearance model; this final position completes the search for the entire set of feature points.
CN 200510026388 2005-06-02 2005-06-02 Facial feature point localization method combining local search and an active appearance model Pending CN1687957A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200510026388 CN1687957A (en) 2005-06-02 2005-06-02 Facial feature point localization method combining local search and an active appearance model


Publications (1)

Publication Number Publication Date
CN1687957A true CN1687957A (en) 2005-10-26

Family

ID=35305998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200510026388 Pending CN1687957A (en) 2005-06-02 2005-06-02 Facial feature point localization method combining local search and an active appearance model

Country Status (1)

Country Link
CN (1) CN1687957A (en)


Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100349173C (en) * 2005-12-15 2007-11-14 上海交通大学 Method for searching new position of feature point using support vector processor multiclass classifier
CN101093542B (en) * 2006-02-15 2010-06-02 索尼株式会社 Inquiry system, imaging device, inquiry device, information processing method
CN102291520B (en) * 2006-05-26 2017-04-12 佳能株式会社 Image processing method and image processing apparatus
CN102291520A (en) * 2006-05-26 2011-12-21 佳能株式会社 Image processing method and image processing apparatus
CN100414562C (en) * 2006-10-10 2008-08-27 南京搜拍信息技术有限公司 Method for positioning feature points of human face in human face recognition system
CN100444191C (en) * 2006-11-08 2008-12-17 中山大学 Multiple expression whole face profile testing method based on moving shape model
CN101393599B (en) * 2007-09-19 2012-02-08 中国科学院自动化研究所 Game role control method based on human face expression
CN101739438B (en) * 2008-11-04 2014-08-06 三星电子株式会社 System and method for sensing facial gesture
CN101739438A (en) * 2008-11-04 2010-06-16 三星电子株式会社 System and method for sensing facial gesture
CN101635028A (en) * 2009-06-01 2010-01-27 北京中星微电子有限公司 Image detecting method and image detecting device
WO2014032496A1 (en) * 2012-08-28 2014-03-06 腾讯科技(深圳)有限公司 Method, device and storage medium for locating feature points on human face
CN103824087A (en) * 2012-11-16 2014-05-28 广州三星通信技术研究有限公司 Detection positioning method and system of face characteristic points
CN103049755A (en) * 2012-12-28 2013-04-17 合一网络技术(北京)有限公司 Method and device for realizing dynamic video mosaic
CN103049755B (en) * 2012-12-28 2016-08-10 合一网络技术(北京)有限公司 A kind of method and device realizing dynamic video mosaic
CN105069746A (en) * 2015-08-23 2015-11-18 杭州欣禾圣世科技有限公司 Video real-time human face substitution method and system based on partial affine and color transfer technology
CN105069746B (en) * 2015-08-23 2018-02-16 杭州欣禾圣世科技有限公司 Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN105844252A (en) * 2016-04-01 2016-08-10 南昌大学 Face key part fatigue detection method
CN105844252B (en) * 2016-04-01 2019-07-26 南昌大学 A kind of fatigue detection method of face key position
CN106022215A (en) * 2016-05-05 2016-10-12 北京海鑫科金高科技股份有限公司 Face feature point positioning method and device
CN106022215B (en) * 2016-05-05 2019-05-03 北京海鑫科金高科技股份有限公司 Man face characteristic point positioning method and device
CN106529397A (en) * 2016-09-21 2017-03-22 中国地质大学(武汉) Facial feature point positioning method and system in unconstrained environment
CN106529397B (en) * 2016-09-21 2018-07-13 中国地质大学(武汉) A kind of man face characteristic point positioning method in unconstrained condition and system
CN106778524A (en) * 2016-11-25 2017-05-31 努比亚技术有限公司 A kind of face value based on dual camera range finding estimates devices and methods therefor
CN108717527A (en) * 2018-05-15 2018-10-30 重庆邮电大学 Face alignment method based on posture priori
CN111931630A (en) * 2020-08-05 2020-11-13 重庆邮电大学 Dynamic expression recognition method based on facial feature point data enhancement
CN111931630B (en) * 2020-08-05 2022-09-09 重庆邮电大学 Dynamic expression recognition method based on facial feature point data enhancement

Similar Documents

Publication Publication Date Title
CN1687957A (en) Facial feature point localization method combining local search and an active appearance model
CN100382751C (en) Canthus and pupil localization method based on VPP and improved SUSAN
CN105844252B (en) Fatigue detection method for key facial regions
CN103577815B (en) Face alignment method and system
CN108898125A (en) Embedded face recognition and management system
CN101499128A (en) Three-dimensional facial action detection and tracking method based on video streams
CN104408462B (en) Rapid face feature point localization method
CN103456010A (en) Face cartoon generation method based on feature point localization
CN101833654B (en) Sparse-representation face recognition method based on constrained sampling
CN110176016B (en) Virtual fitting method based on human body contour segmentation and skeleton recognition
CN1731416A (en) Method for fast and accurate facial feature point localization
CN106097354B (en) Hand image segmentation method combining adaptive Gaussian skin color detection and region growing
CN112232332B (en) Non-contact palm detection method based on video sequences
CN101216882A (en) Method and device for locating and tracking eye and mouth corners of human faces
CN102654903A (en) Face comparison method
CN107066969A (en) Face recognition method
CN104794441B (en) Facial feature localization method based on active shape models and POEM texture models under complex backgrounds
CN104036299B (en) Human eye contour tracking method based on local-texture AAM
Irie et al. Improvements to facial contour detection by hierarchical fitting and regression
CN109325408A (en) Gesture judgment method and storage medium
CN105069745A (en) Face-swapping system and method based on a common image sensor and augmented reality technology
CN112069986A (en) Machine vision tracking method and device for eye movements of elderly people
CN115205903A (en) Pedestrian re-identification method based on identity-transfer generative adversarial networks
CN108154176A (en) 3D human pose estimation algorithm for a single depth image
Chen et al. Fully automated facial symmetry axis detection in frontal color images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication