CN101561875A - Method for positioning two-dimensional face images - Google Patents

Method for positioning two-dimensional face images

Info

Publication number
CN101561875A
CN101561875A (application CN200910143325A)
Authority
CN
China
Prior art keywords
human face
two-dimensional face
shape model
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009101433254A
Other languages
Chinese (zh)
Other versions
CN101561875B (en)
Inventor
丁晓青
方驰
王丽婷
丁镠
刘长松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2009101433254A priority Critical patent/CN101561875B/en
Publication of CN101561875A publication Critical patent/CN101561875A/en
Application granted granted Critical
Publication of CN101561875B publication Critical patent/CN101561875B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The embodiment of the invention discloses a method for positioning two-dimensional face images, pertaining to the fields of computer vision and pattern recognition. The method comprises the steps of: acquiring two-dimensional face images from a preset database; constructing a two-dimensional face shape model from the two-dimensional face images in the database; constructing a two-dimensional face local texture model from the two-dimensional face images in the database; and positioning the two-dimensional face images according to the two-dimensional face shape model and the two-dimensional face local texture model. By building the shape model and the local texture model from the preset database, the method achieves accurate positioning of two-dimensional face images; by combining pixel-pair comparison features with feature selection when modeling the local texture, it improves both the computation speed and the positioning accuracy of the feature points.

Description

Method for positioning two-dimensional face images
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and in particular to a method for positioning two-dimensional face images.
Background technology
A face recognition system takes face recognition technology as its core. Face recognition is an emerging biometric identification technology and a sophisticated research topic currently being tackled in the international technology community. Because a human face is non-reproducible, convenient to capture, and requires no cooperation from the person being photographed, face recognition systems enjoy a wide range of applications.
A key issue in face recognition is the positioning of the facial image. In a typical recognition pipeline, the facial image is first positioned, and recognition then proceeds from the positioning result.
Face recognition still faces a series of difficult problems. For example, when large variations occur in pose, illumination, and expression (PIE: Pose, Illumination, Expression), the recognition rate drops sharply. How to recognize faces under different poses, illuminations, and expressions remains a focus of current research. For pose and illumination variation, conventional methods require training images of faces under sufficiently many different poses and illumination conditions, yet in many cases such images are not easy to obtain. To realize face recognition that does not depend on pose and ambient illumination, the prior art proposes the following methods:
The first class is pose-invariant feature extraction, which addresses pose variation by extracting features that are robust to pose changes. The second class is based on multi-view face images, for example extending conventional subspace methods to multi-view subspaces. The third class is based on three-dimensional face models; after Blanz proposed the three-dimensional face modeling method, generating virtual images of a face at each pose (Virtual Image) from a three-dimensional model has achieved good results on the pose problem.
However, the prior art has many shortcomings. The main drawback of pose-invariant feature extraction is that pose-invariant features are difficult to extract. For methods based on multi-view face images, the main drawback is that the pose of a face is difficult to calibrate exactly, and a wrong pose estimate degrades recognition performance. Methods based on three-dimensional face models can handle the pose problem well, but they also face many difficulties, such as heavy computation, slow speed, and low reconstruction accuracy, and they require manually located feature points for initialization. All of these complicate the positioning and subsequent recognition of facial images.
Summary of the invention
In order to realize automated, fast, and accurate face recognition, the embodiment of the invention provides a method for positioning two-dimensional face images. The technical scheme is as follows:
A method for positioning two-dimensional face images, characterized by comprising:
acquiring two-dimensional face images from a preset database;
constructing a two-dimensional face shape model from the two-dimensional face images in said database;
constructing a two-dimensional face local texture model from the two-dimensional face images in said database; and
positioning said two-dimensional face images according to said two-dimensional face shape model and two-dimensional face local texture model.
By building the two-dimensional face shape model and local texture model from the preset database, the embodiment of the invention achieves accurate positioning of two-dimensional face images; combining pixel-pair comparison features with feature selection in the local texture modeling improves both the computation speed and the positioning accuracy of the feature points.
Description of drawings
Fig. 1 is a flow chart of the method for positioning two-dimensional face images provided by Embodiment 1 of the invention;
Fig. 2 shows the two-dimensional face shape model for the leftward pose provided by Embodiment 1 of the invention;
Fig. 3 shows the two-dimensional face shape model for the frontal pose provided by Embodiment 1 of the invention.
Embodiment
To make the purpose, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
The embodiment of the invention provides a method for positioning two-dimensional face images. The method performs multi-subspace shape modeling on the two-dimensional face images in a database to obtain a two-dimensional face shape model; performs local texture modeling on the images to obtain a two-dimensional face local texture model; and then accurately positions a two-dimensional face image according to the shape model and the local texture model. As shown in Fig. 1, the present embodiment comprises:
101: Acquire the two-dimensional face images in the preset database.
The two-dimensional face database in the present embodiment is drawn from two-dimensional face images of 2000 European and Asian subjects. The data for each face comprise texture data (R, G, B) together with variations in pose, expression, and illumination.
102: Perform multi-subspace shape modeling on the two-dimensional face images in the database to obtain the two-dimensional face shape model.
The concrete steps for building the shape model from the two-dimensional face database are as follows:
102a: Divide the two-dimensional face images in the database according to pose; calibrate the feature points of the face images of each pose to obtain the feature-point coordinate values; and use these coordinate values to construct the shape vector of the two-dimensional face images under the corresponding pose.
Concretely, the two-dimensional face images are divided into five poses: leftward, rightward, upward, downward, and frontal. Taking the leftward pose as an example, suppose the database contains N leftward-pose two-dimensional faces, and 88 feature points (a value other than 88 may also be used) are calibrated on each face of this pose. The feature-point coordinates (x, y) are obtained as raw data, and the raw data are quantized to yield the shape vector of each face.
The feature points may be calibrated in several ways; the common method is manual labeling. The present embodiment adopts a semi-automatic, interactive labeling method: unlike fully manual labeling, not every point is marked by hand; instead the feature points of the face are calibrated by dragging and similar operations, which can be realized with suitable software.
The shape vector of a face is formed from the coordinates of the 88 feature points:
$X_i = [x_{i0}, y_{i0}, x_{i1}, y_{i1}, \dots, x_{ij}, y_{ij}, \dots, x_{i87}, y_{i87}]^T$  (6)
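As an illustration, the construction of the shape vector of Eq. (6) from the calibrated landmarks can be sketched as follows (a minimal sketch; the landmark coordinates used below are hypothetical):

```python
import numpy as np

def shape_vector(landmarks):
    """Interleave m (x, y) landmark coordinates into the 2m-dimensional
    shape vector [x_0, y_0, x_1, y_1, ..., x_{m-1}, y_{m-1}]^T of Eq. (6)."""
    pts = np.asarray(landmarks, dtype=float)  # shape (m, 2)
    return pts.reshape(-1)                    # row-major flatten interleaves x, y

# Hypothetical coordinates for 88 calibrated feature points
pts = [(float(j), float(2 * j)) for j in range(88)]
X = shape_vector(pts)
print(X.shape)  # (176,)
```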
102b: Normalize the shape vectors for center, scale, and direction.
When normalizing a facial image, the eye region is usually taken as the reference. Concretely, center normalization uses the following formula:
$\bar{x}_i = \frac{1}{m}\sum_{j=1}^{m} x_{ij}, \quad \bar{y}_i = \frac{1}{m}\sum_{j=1}^{m} y_{ij}, \quad x'_{ij} = x_{ij} - \bar{x}_i, \quad y'_{ij} = y_{ij} - \bar{y}_i, \quad \forall j = 1 \dots m$  (7)
Scale normalization uses the following formula:
$\|S'_i\| = \sqrt{\sum_{j=1}^{m} \left(x_{ij}'^2 + y_{ij}'^2\right)}, \quad x''_{ij} = x'_{ij} / \|S'_i\|, \quad y''_{ij} = y'_{ij} / \|S'_i\|, \quad \forall j = 1 \dots m$  (8)
Direction normalization uses the Procrustes analysis algorithm to eliminate the in-plane rotation of the face.
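A compact sketch of the center and scale normalization of Eqs. (7)–(8), with the direction step realized as a Procrustes rotation onto a reference shape. The SVD-based rotation is our assumption of how the Procrustes step is implemented; the patent does not give its internals:

```python
import numpy as np

def normalize_shape(pts, ref=None):
    """Center (Eq. 7), scale (Eq. 8), and optionally rotation-normalize
    one shape. pts: (m, 2) array of landmark coordinates."""
    pts = np.asarray(pts, dtype=float)
    pts = pts - pts.mean(axis=0)            # subtract (x_bar, y_bar)
    pts = pts / np.sqrt((pts ** 2).sum())   # divide by ||S'||
    if ref is not None:
        # Procrustes step: rotation R minimizing ||pts @ R - ref||_F
        u, _, vt = np.linalg.svd(pts.T @ ref)
        pts = pts @ (u @ vt)
    return pts
```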
102c: Perform principal component analysis on all normalized shape vectors, and construct the shape model of the corresponding pose from the analysis result; the two-dimensional face shape model is then built from the shape models of all poses.
Principal component analysis of the shape vectors of the leftward-pose two-dimensional face data proceeds as follows:
1) Compute the shape-vector mean and covariance matrix of the two-dimensional face data.
Concretely, the mean is computed as: $\bar{X} = \frac{1}{N}\sum_{i=1}^{N} X_i$  (9)
The covariance matrix is computed as: $C_x = \frac{1}{N}\sum_{i=1}^{N} (X_i - \bar{X})(X_i - \bar{X})^T$  (10)
2) Construct the shape model of the corresponding pose from the principal component analysis result, and build the two-dimensional face shape model from the shape models of all poses, as follows:
Obtain the eigenvector matrix P from the shape-vector mean and covariance matrix, and construct the shape model of the leftward-pose two-dimensional face: $X = \bar{X} + Pb$, where b is the shape parameter of the principal component analysis (PCA).
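The PCA shape model of Eqs. (9)–(10) and the synthesis $X = \bar{X} + Pb$ can be sketched as follows (the 95% variance cutoff for choosing the number of retained components is an illustrative assumption, not stated in the text):

```python
import numpy as np

def train_shape_model(shapes, var_keep=0.95):
    """PCA shape model from an (N, 2m) matrix of normalized shape vectors.
    Returns the mean X_bar (Eq. 9), eigenvector matrix P, and eigenvalues."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    C = (X - mean).T @ (X - mean) / len(X)      # covariance, Eq. (10)
    vals, vecs = np.linalg.eigh(C)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]      # sort descending
    t = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_keep)) + 1
    return mean, vecs[:, :t], vals[:t]

def synthesize(mean, P, b):
    """New shape X = X_bar + P b for shape parameter b."""
    return mean + P @ b
```

Varying b within a few standard deviations of each retained component produces the family of plausible shapes described below for Figs. 2 and 3.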
Concretely, as shown in Fig. 2, taking the shape model of the leftward-pose face image as an example, different shape models can be obtained by setting different shape parameters b, giving the shape model a certain range of variation.
Correspondingly, Fig. 3 shows the shape model of the frontal face.
Shape modeling is carried out separately for the face images of every pose to obtain the shape models of all poses; the modeling method is the same in each case and is not repeated.
Further, any face shape X can be expressed as $X = T_a(\bar{X} + Pb)$, where a is the geometric parameter, comprising the horizontal and vertical translations $X_t$, $Y_t$, the scale s, and the rotation angle θ. $T_a$ denotes the geometric transformation of the shape:
$a = (X_t, Y_t, s, \theta); \quad T_{X_t, Y_t, s, \theta}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} X_t \\ Y_t \end{pmatrix} + \begin{pmatrix} s\cos\theta & -s\sin\theta \\ s\sin\theta & s\cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$  (11)
Finally, the two-dimensional face shape model is obtained by combining the shape models of all poses. For example, let $M_i$, i = 1, 2, 3, 4, 5, correspond to the leftward, rightward, upward, downward, and frontal pose models respectively, with i the pose parameter. For each pose model $M_i$, the mean vector is $\bar{X}_i$ and the principal-component eigenvector matrix is $P_i$; the combined two-dimensional face shape model is: $X = T_{a_i}(\bar{X}_i + P_i b_i)$.
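The geometric transformation $T_a$ of Eq. (11) is an ordinary similarity transform; a minimal sketch:

```python
import numpy as np

def apply_Ta(pts, Xt, Yt, s, theta):
    """Apply T_a of Eq. (11): rotate by theta, scale by s, then translate
    by (Xt, Yt). pts: (m, 2) array of shape points."""
    R = np.array([[s * np.cos(theta), -s * np.sin(theta)],
                  [s * np.sin(theta),  s * np.cos(theta)]])
    return np.asarray(pts, dtype=float) @ R.T + np.array([Xt, Yt])
```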
103: Perform local texture modeling on the two-dimensional face images to obtain the two-dimensional face local texture model. This specifically comprises:
The present embodiment uses a discriminant learning method, analyzing the difference between the texture around each feature point and the texture around other nearby points, so that the positioning of a feature point is solved as a classification problem. Pixel-pair comparison features are combined with the random-forest feature selection method to describe the local texture.
Concretely, the positioning feature proposed by the embodiment of the invention is the pixel-pair comparison feature, i.e. the comparison of the gray values of any two pixels in the image. The local texture modeling designs one classifier for each feature point, so the whole face requires 88 classifiers in total. Taking the left eye corner as an example, two arbitrary points p1 and p2 are chosen within a preset range and compared; concretely, the preset range can be a 5 × 5 coordinate range. With I(p) denoting the gray value of pixel p, the output of the weak classifier can be expressed as:
$h_n = \begin{cases} 1 & \text{if } I(p_1) \ge I(p_2) \\ 0 & \text{otherwise} \end{cases}$  (12)
That is, when I(p1) ≥ I(p2) the weak classifier outputs 1, and otherwise it outputs 0. For an image block of size 32 × 32, there are $C_{1024}^{2}$ ways of choosing two points, so the total number of weak classifiers is about 520,000.
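The weak classifier of Eq. (12) and the count of candidate pairs can be sketched as follows (the patch contents below are illustrative):

```python
from math import comb

def pair_feature(patch, p1, p2):
    """Weak classifier h of Eq. (12): 1 if I(p1) >= I(p2), else 0.
    patch: 2-D list of gray values; p1, p2: (row, col) tuples."""
    return 1 if patch[p1[0]][p1[1]] >= patch[p2[0]][p2[1]] else 0

# For a 32 x 32 image block there are C(1024, 2) candidate point pairs
n_pairs = comb(32 * 32, 2)
print(n_pairs)  # 523776, i.e. "about 520,000" weak classifiers
```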
To evaluate a pixel-pair comparison feature, one only needs to compare the gray values of two points in the original gray image; no transforms, multiplications, divisions, or square roots are required, so the feature is stable and fast to compute. Moreover, the geometric positions of the chosen points are explicit, so for feature-point positioning this feature performs better than the Gabor, gradient, or Haar features of the prior art.
However, because the number of pixel-pair comparison features is very large, a feature selection method must be combined with them. The present embodiment uses the random forest method, whose basic idea is to integrate many weak classifiers into one strong classifier. A random forest consists of N decision trees; every decision tree (T1, T2, …, TN) is a decision-tree classifier, every node of a tree is a weak classifier, and the decision of the forest is the average of the decisions of all trees. During training, the trees differ only in their training sets, each a subset randomly drawn from the full sample set; the training method is identical for every tree, with each node choosing the weak classifier with the best current classification performance. During classification, taking a C-class problem as an example, each tree Tn outputs C confidences p_n(c | p), the probability that a sample p belongs to class c; the final decision of the random forest is based on the average over all trees, as shown below.
$\hat{c} = \arg\max_{c} \frac{1}{N} \sum_{n=1}^{N} p_n(c \mid p)$  (13)
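The forest decision, averaging the per-tree class posteriors p_n(c | p) over the N trees and taking the most probable class, can be sketched as follows (tree internals are omitted; the averaging rule follows the text's description):

```python
import numpy as np

def forest_decide(tree_posteriors):
    """Random-forest decision: average the confidences p_n(c | p)
    over the N trees and return the argmax class and the mean."""
    probs = np.asarray(tree_posteriors, dtype=float)  # (N_trees, C)
    mean = probs.mean(axis=0)
    return int(mean.argmax()), mean
```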
104: Accurately position the two-dimensional face image according to the two-dimensional face shape model and the local texture model.
Concretely, the shape model $X = T_{a_i}(\bar{X}_i + P_i b_i)$ of each two-dimensional face image is optimized to obtain the optimal pose model $M_i$ together with the optimal geometric parameter $a_i$ and shape parameter $b_i$ under that pose, yielding the optimal shape model of the image and thereby its accurate positioning. The concrete method is as follows:
The objective function of the traditional parameter optimization algorithm is:
$(\hat{a}, \hat{b}) = \arg\min_{a,b} \|Y - T_a(\bar{X} + Pb)\|^2 = \arg\min_{a,b} \left(Y - T_a(\bar{X} + Pb)\right)^T \left(Y - T_a(\bar{X} + Pb)\right)$  (14)
Adding the pose parameter i, the improved objective function of the optimization algorithm proposed by the present embodiment is:
$(\hat{i}, \hat{a}_i, \hat{b}_i) = \arg\min_{i, a_i, b_i} \left(Y - T_{a_i}(\bar{X}_i + P_i b_i)\right)^T W_i \left(Y - T_{a_i}(\bar{X}_i + P_i b_i)\right) + \sum_{j=1}^{t} b_{ij}^2 / \sigma_j^2$  (15)
The objective function (15) differs from the traditional objective function (14) in three respects. First, (15) incorporates the outputs of the random-forest classifiers, i.e. the result obtained by the random-forest classifier of the i-th pose model $M_i$, into the optimization target as a weight matrix $W_i$. Second, to constrain the shape parameter to the compact region of the principal-component parameter space, the limit term $\sum_{j=1}^{t} b_{ij}^2 / \sigma_j^2$ is added to bound the PCA shape parameter $b_i$. Third, the two-dimensional shape model is optimized over the poses, and the image is accurately positioned according to the optimal pose model $M_i$. With this objective, the optimized model parameters are brought closer to the expected values.
Further, the optimization algorithm for the model parameters proposed by the present embodiment proceeds as follows:
1) Initialize all pose models $M_i$, i ∈ {1, 2, 3, 4, 5}, by positioning the eye region of the face image of each pose, obtaining the initial geometric parameter $a_i$ and shape parameter $b_i$.
2) Optimize the chosen features: within the preset range around each original feature point of the shape model, take the point with the maximum random-forest output probability as the new feature point. Concretely, the preset range can be a 5 × 5 coordinate range.
3) Optimize the geometric parameter of the pose: $\hat{a}_i = \arg\min_{a_i} \left(Y - T_{a_i}(\bar{X}_i + P_i b_i)\right)^T W_i \left(Y - T_{a_i}(\bar{X}_i + P_i b_i)\right)$  (16)
4) Optimize the shape parameter: $\hat{b}_i = \arg\min_{b_i} \left(Y - T_{\hat{a}_i}(\bar{X}_i + P_i b_i)\right)^T W_i \left(Y - T_{\hat{a}_i}(\bar{X}_i + P_i b_i)\right) + \sum_{j=1}^{t} b_{ij}^2 / \sigma_j^2$  (17)
5) If $\|\hat{a}_i - a_i\| + \|\hat{b}_i - b_i\| < \varepsilon$, stop the optimization; otherwise set $a_i = \hat{a}_i$, $b_i = \hat{b}_i$ and return to step 2).
6) Compare the optimal feature-point positioning results of every pose model, and choose the result that minimizes formula (15) as the optimal result, obtaining the optimal pose i and the corresponding $a_i$ and $b_i$.
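Under simplifying assumptions (the geometric transform held at the identity, $W_i$ a fixed diagonal confidence matrix, and sigma2 holding the PCA eigenvalues), the weighted, regularized shape update of Eq. (17) has a closed form. This sketch is our reading of that step, not the patent's reference implementation:

```python
import numpy as np

def update_shape_param(Y, mean, P, W, sigma2):
    """Closed-form minimizer of Eq. (17) with T fixed at the identity:
    solve (P^T W P + diag(1/sigma2)) b = P^T W (Y - mean)."""
    A = P.T @ W @ P + np.diag(1.0 / np.asarray(sigma2, dtype=float))
    return np.linalg.solve(A, P.T @ W @ (Y - mean))
```

In the full algorithm this update alternates with the geometric-parameter update of Eq. (16) and the feature re-selection of step 2) until the stopping rule of step 5) is met.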
The optimal face shape model is built from the optimal parameters, realizing the accurate positioning of each two-dimensional face image.
By building the two-dimensional face shape model and local texture model from the preset database, the embodiment of the invention achieves accurate positioning of two-dimensional face images; combining pixel-pair comparison features with feature selection in the local texture modeling improves both the computation speed and the positioning accuracy of the feature points.
The embodiment described above is merely a preferred embodiment of the present invention; ordinary variations and substitutions made by those skilled in the art within the scope of the technical solution of the present invention shall all fall within the protection scope of the present invention.

Claims (7)

1. A method for positioning two-dimensional face images, characterized by comprising:
acquiring two-dimensional face images from a preset database;
constructing a two-dimensional face shape model from the two-dimensional face images in said database;
constructing a two-dimensional face local texture model from the two-dimensional face images in said database; and
positioning said two-dimensional face images according to said two-dimensional face shape model and two-dimensional face local texture model.
2. The method for positioning two-dimensional face images of claim 1, characterized in that said constructing a two-dimensional face shape model from the two-dimensional face images in said database comprises:
performing multi-subspace shape modeling on the two-dimensional face images in said database to obtain the two-dimensional face shape model.
3. The method for positioning two-dimensional face images of claim 1, characterized in that said constructing a two-dimensional face local texture model from the two-dimensional face images in said database comprises:
performing local texture modeling on the two-dimensional face images in said database to obtain the two-dimensional face local texture model.
4. The method for positioning two-dimensional face images of claim 2, characterized in that said performing multi-subspace shape modeling on the two-dimensional face images in said database to obtain the two-dimensional face shape model comprises:
dividing the two-dimensional face images in said database according to pose;
calibrating feature points on the face images of every pose to obtain said feature-point coordinate values;
constructing, from said feature-point coordinate values, the shape vector of the two-dimensional face images under the corresponding pose;
normalizing said shape vector to obtain a normalized shape vector;
performing principal component analysis on said normalized shape vector, and constructing the shape model of the corresponding pose according to the principal component analysis result; and
building the two-dimensional face shape model from said shape models of all poses.
5. The method for positioning two-dimensional face images of claim 3, characterized in that said performing local texture modeling on the two-dimensional face images in said database to obtain the two-dimensional face local texture model comprises:
acquiring the feature-point coordinate values on said two-dimensional face images;
comparing the gray values of two pixels within a preset range of each feature point of said two-dimensional face images to obtain pixel-pair comparison features;
selecting among said pixel-pair comparison features by a feature selection method to obtain a selection result; and
constructing the two-dimensional face local texture model according to said selection result.
6. The method for positioning two-dimensional face images of claim 5, characterized in that said selecting among said pixel-pair comparison features by a feature selection method to obtain a selection result comprises:
selecting among said pixel-pair comparison features by the random forest feature selection method to obtain the selection result.
7. The method for positioning two-dimensional face images of claim 1, characterized in that said positioning said two-dimensional face images according to said two-dimensional face shape model and two-dimensional face local texture model comprises:
optimizing the shape model of the two-dimensional face image to be identified according to a preset algorithm, to obtain the optimal pose parameter, geometric parameter, and shape parameter;
building the optimal shape model of said two-dimensional face image to be identified from said optimal pose parameter, geometric parameter, and shape parameter; and
accurately positioning said two-dimensional face image to be identified by means of said optimal shape model and said two-dimensional face local texture model.
CN2009101433254A 2008-07-17 2008-07-17 Method for positioning two-dimensional face images Active CN101561875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009101433254A CN101561875B (en) 2008-07-17 2008-07-17 Method for positioning two-dimensional face images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101433254A CN101561875B (en) 2008-07-17 2008-07-17 Method for positioning two-dimensional face images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN2008101167815A Division CN101320484B (en) 2008-07-17 2008-07-17 Three-dimensional human face recognition method based on human face full-automatic positioning

Publications (2)

Publication Number Publication Date
CN101561875A true CN101561875A (en) 2009-10-21
CN101561875B CN101561875B (en) 2012-05-30

Family

ID=41220672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101433254A Active CN101561875B (en) 2008-07-17 2008-07-17 Method for positioning two-dimensional face images

Country Status (1)

Country Link
CN (1) CN101561875B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011079458A1 (en) * 2009-12-31 2011-07-07 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN104036255A (en) * 2014-06-21 2014-09-10 电子科技大学 Facial expression recognition method
CN104091148A (en) * 2014-06-16 2014-10-08 联想(北京)有限公司 Facial feature point positioning method and device
CN104598873A (en) * 2014-12-24 2015-05-06 苏州福丰科技有限公司 Three-dimensional face recognition method of door lock
CN111299815A (en) * 2020-02-13 2020-06-19 西安交通大学 Visual detection and laser cutting trajectory planning method for low-gray rubber pad

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100375108C (en) * 2006-03-02 2008-03-12 复旦大学 Automatic positioning method for characteristic point of human faces
CN100444190C (en) * 2006-10-30 2008-12-17 邹采荣 Human face characteristic positioning method based on weighting active shape building module

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011079458A1 (en) * 2009-12-31 2011-07-07 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN102640168A (en) * 2009-12-31 2012-08-15 诺基亚公司 Method and apparatus for local binary pattern based facial feature localization
US8917911B2 (en) 2009-12-31 2014-12-23 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN102640168B (en) * 2009-12-31 2016-08-03 诺基亚技术有限公司 Method and apparatus for facial Feature Localization based on local binary pattern
CN104091148A (en) * 2014-06-16 2014-10-08 联想(北京)有限公司 Facial feature point positioning method and device
CN104091148B (en) * 2014-06-16 2017-06-27 联想(北京)有限公司 A kind of man face characteristic point positioning method and device
CN104036255A (en) * 2014-06-21 2014-09-10 电子科技大学 Facial expression recognition method
CN104036255B (en) * 2014-06-21 2017-07-07 电子科技大学 A kind of facial expression recognizing method
CN104598873A (en) * 2014-12-24 2015-05-06 苏州福丰科技有限公司 Three-dimensional face recognition method of door lock
CN111299815A (en) * 2020-02-13 2020-06-19 西安交通大学 Visual detection and laser cutting trajectory planning method for low-gray rubber pad
CN111299815B (en) * 2020-02-13 2021-02-09 西安交通大学 Visual detection and laser cutting trajectory planning method for low-gray rubber pad

Also Published As

Publication number Publication date
CN101561875B (en) 2012-05-30

Similar Documents

Publication Publication Date Title
CN101320484B (en) Three-dimensional human face recognition method based on human face full-automatic positioning
CN101561874B (en) Method for recognizing face images
US10929649B2 (en) Multi-pose face feature point detection method based on cascade regression
CN102663413B (en) Multi-gesture and cross-age oriented face image authentication method
CN106228185B (en) A kind of general image classifying and identifying system neural network based and method
CN104063719B (en) Pedestrian detection method and device based on depth convolutional network
CN102799901B (en) Method for multi-angle face detection
CN103870811B (en) A kind of front face Quick method for video monitoring
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN106096538A (en) Face identification method based on sequencing neural network model and device
CN103942577A (en) Identity identification method based on self-established sample library and composite characters in video monitoring
CN103295025B (en) A kind of automatic selecting method of three-dimensional model optimal view
CN104794441B (en) Human face characteristic positioning method based on active shape model and POEM texture models under complex background
CN103258204A (en) Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features
CN107092931B (en) Method for identifying dairy cow individuals
CN104700076A (en) Face image virtual sample generating method
CN105243139A (en) Deep learning based three-dimensional model retrieval method and retrieval device thereof
CN101840509A (en) Measuring method for eye-observation visual angle and device thereof
CN100373395C (en) Human face recognition method based on human face statistics
CN101561875B (en) Method for positioning two-dimensional face images
CN104598885A (en) Method for detecting and locating text sign in street view image
CN103440510A (en) Method for positioning characteristic points in facial image
CN106503748A (en) A kind of based on S SIFT features and the vehicle targets of SVM training aids
CN104050460B (en) The pedestrian detection method of multiple features fusion
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant