CN104331412A - Method for carrying out face retrieval in a normalized three-dimensional face database - Google Patents

Method for carrying out face retrieval in a normalized three-dimensional face database

Info

Publication number
CN104331412A
Authority
CN
China
Prior art keywords
face
dimensional face
axis
feature
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410490567.1A
Other languages
Chinese (zh)
Other versions
CN104331412B (en)
Inventor
孔德慧
王茹
王立春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201410490567.1A
Publication of CN104331412A
Application granted
Publication of CN104331412B
Status: Expired - Fee Related
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval using metadata automatically derived from the content
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/164 Detection; Localisation; Normalisation using holistic features
    • G06V40/168 Feature extraction; Face representation
    • G06V40/169 Holistic features and representations, i.e. based on the facial image taken as a whole

Abstract

The invention discloses a method for carrying out face retrieval in a normalized three-dimensional face database. Its advantage is that it effectively simulates the way humans observe things and process information, yielding a simpler and more efficient face retrieval method and system. The method comprises the following steps. (1) Model normalization: the three-dimensional face data are preprocessed by smoothing, denoising, cropping, coordinate correction and alignment; the resulting three-dimensional face sample serves as the normalized three-dimensional face, and a model coordinate system is defined from its enclosing cylinder: the central axis of the cylinder is the Z axis, the direction through the nose-tip point and perpendicular to the Z axis is the Y axis, and the X axis is obtained as the cross product of the Y and Z axes. (2) Salient regions of the sample are defined and features extracted, with visual saliency as the measurement principle. (3) Features are weighted and superimposed according to the discriminative power of each single feature, giving a multi-feature-fusion similarity measure between faces, by which face retrieval is performed.

Description

Method for carrying out face retrieval in a normalized three-dimensional face database
Technical field
The invention belongs to the technical field of information retrieval, and relates in particular to a method for carrying out face retrieval in a normalized three-dimensional face database.
Background art
Data retrieval aims to obtain, from a given data set, the sample data that best conform to a query condition. For the problem of three-dimensional face retrieval in particular, the core research questions are the feature representation of the three-dimensional face model and the face similarity measure associated with that representation.
Existing three-dimensional face retrieval methods mainly extract geometric features of the face model (including statistical features and transform-domain features) for identification. Common geometric features include: global face information, such as 3D meshes; local facial feature points and feature curves; and multi-modal fusion features combining two-dimensional and three-dimensional face information. However, the difference in observation and computing ability between humans and computers means that when humans observe, compare and recognize faces, they rely mostly on intuitive, visually strong cues such as face contour, facial features, skin color and hair style to distinguish objects, rather than on geometric features that are hard to perceive directly. In other words, the computed geometric features adopted by the existing retrieval techniques do not match the psychological and physiological processes by which humans recognize similar facial characteristics, making the retrieval results unstable and unreliable.
In fact, the development of face retrieval methods depends to a great extent on the manner and cost of face data acquisition and on the data representation. Advances in computer hardware have not only reduced the cost of collecting face information while greatly improving its speed, precision and storage capacity, but have also made it possible to collect three-dimensional face information with a variety of vision sensors (including the depth sensors that have appeared in recent years) and to build multi-modal three-dimensional face model databases. This provides the data basis for research into more efficient, more reliable and more stable face retrieval techniques.
Summary of the invention
The technical problem solved by the invention is to overcome the deficiencies of the prior art by providing a method for face retrieval in a normalized three-dimensional face database that effectively simulates the way humans observe things and process information, thereby building a simpler and more efficient face retrieval method.
The technical solution of the invention is a method for carrying out face retrieval in a normalized three-dimensional face database, comprising the following steps:
(1) Model normalization: the three-dimensional face data are preprocessed by smoothing, denoising, cropping, coordinate correction and alignment; the resulting three-dimensional face sample serves as the normalized three-dimensional face, and a model coordinate system is defined from its enclosing cylinder: the central axis of the cylinder is the Z axis, the direction through the nose-tip point and perpendicular to the Z axis is the Y axis, and the X axis is obtained as the cross product of the Y and Z axes;
(2) Salient regions of the sample are defined and features extracted with visual saliency as the measurement principle: the saliency value of each pixel in the three-dimensional face sample is obtained by formula (1):
S(p_i) = D(p_i, p_1) + D(p_i, p_2) + ... + D(p_i, p_n)    (1)
where S(p_i) is the saliency value of point p_i, n is the number of pixels in the image, and D(·,·) denotes the attribute difference between two pixels;
(3) Features are weighted and superimposed according to the discriminative power of each single feature, giving a multi-feature-fusion similarity measure between faces, by which face retrieval is performed.
The invention applies saliency analysis to the field of three-dimensional face retrieval. Starting from the human visual attention mechanism, it studies the definition and feature description of the salient regions that most affect retrieval in three-dimensional faces, thus effectively simulating the way humans observe things and process information and building a simpler and more efficient face retrieval method. Moreover, compared with the Euclidean distance, a similarity based on the cosine distance considers both the length and the direction of the feature vectors while being insensitive to their absolute magnitudes, making it suitable for measuring saliency differences.
Brief description of the drawings
Fig. 1 is the process flow diagram of this method.
Detailed description of the embodiments
As shown in Figure 1, the method for carrying out face retrieval in a normalized three-dimensional face database comprises the following steps:
(1) Model normalization: the three-dimensional face data are preprocessed by smoothing, denoising, cropping, coordinate correction and alignment; the resulting three-dimensional face sample serves as the normalized three-dimensional face, and a model coordinate system is defined from its enclosing cylinder: the central axis of the cylinder is the Z axis, the direction through the nose-tip point and perpendicular to the Z axis is the Y axis, and the X axis is obtained as the cross product of the Y and Z axes;
(2) Salient regions of the sample are defined and features extracted with visual saliency as the measurement principle: the saliency value of each pixel in the three-dimensional face sample is obtained by formula (1):
S(p_i) = D(p_i, p_1) + D(p_i, p_2) + ... + D(p_i, p_n)    (1)
where S(p_i) is the saliency value of point p_i, n is the number of pixels in the image, and D(·,·) denotes the attribute difference between two pixels;
(3) Features are weighted and superimposed according to the discriminative power of each single feature, giving a multi-feature-fusion similarity measure between faces, by which face retrieval is performed.
Saliency analysis introduces a way of defining and measuring the to-be-recognized features of an observed object according to the strength of the stimulus they present to human attention. By weighing how different scene contents (objects) attract human visual attention and produce visual differences, the method assigns, qualitatively and even quantitatively, different saliency values to scene objects and thereby partitions them into different regions (features). Highly salient regions are processed first, while regions of lower saliency are treated as insignificant or ignored altogether. This selective processing of objects gives the method high efficiency.
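The per-pixel saliency of formula (1) can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes one scalar attribute per pixel and takes D as the absolute attribute difference, which the text leaves unspecified beyond "attribute difference between two pixels".

```python
import numpy as np

def saliency(values):
    """Formula (1): S(p_i) = D(p_i, p_1) + ... + D(p_i, p_n).

    `values` holds one scalar attribute per pixel (e.g. red component
    or depth); D is assumed here to be the absolute difference.
    """
    v = np.asarray(values, dtype=float)
    # Pairwise differences |v_i - v_j|, summed over j for each i.
    return np.abs(v[:, None] - v[None, :]).sum(axis=1)

s = saliency([0.0, 0.0, 1.0])
# The pixel whose attribute differs from the rest gets the largest
# saliency value: s = [1.0, 1.0, 2.0].
```

This global-contrast form makes regions that stand out from the rest of the face (eyebrows, eyes, mouth) receive the highest values, which is what thresholding then exploits.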
The invention applies saliency analysis to the field of three-dimensional face retrieval. Starting from the human visual attention mechanism, it studies the definition and description of the salient regions that most affect retrieval in three-dimensional faces, thus effectively simulating the way humans observe things and process information and building a simpler and more efficient face retrieval method. Moreover, compared with the Euclidean distance, a similarity based on the cosine distance considers both the length and the direction of the feature vectors while being insensitive to their absolute magnitudes, making it suitable for measuring saliency differences.
Preferably, in step (2) the saliency values of pixels are analyzed with respect to three representative attributes of the three-dimensional face: color, depth and normal direction.
The color of a face consists mainly of the skin color, the dark red of the lips, and the black of the eyebrows and eyes. To highlight salient colors in the three-dimensional face, the saliency value corresponding to the color attribute is obtained by formula (2):
S(p_i) = |RGB_ave - RGB_p_i| / RGB_max    (2)
where RGB_p_i is the red component of pixel p_i, RGB_max is the maximum red value over all pixels, and RGB_ave is the mean red component over all pixels. As the threshold increases, the "salient region" whose saliency values exceed the specified threshold shrinks gradually; over the range considered, the eyebrows, eyes and mouth of the face sample are always marked as salient regions, indicating that these three regions obtain the highest saliency values from the color approach and have the strongest discriminative power.
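A sketch of the color saliency of formula (2) and the threshold-based region marking. It assumes, as the text suggests, that only the red component is used; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def color_saliency(red):
    """Formula (2): S(p_i) = |RGB_ave - RGB_p_i| / RGB_max."""
    r = np.asarray(red, dtype=float)
    return np.abs(r.mean() - r) / r.max()

s = color_saliency([10, 10, 10, 90])   # mean = 30, max = 90
salient = s > 0.5                      # "salient region" at one threshold
# Only the last pixel (the dark outlier, e.g. lip or eyebrow color)
# exceeds the threshold; raising the threshold shrinks the region.
```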
Similarly, the saliency value corresponding to the depth attribute of the three-dimensional face is obtained by formula (3):
S(p_i) = Deep_p_i / Deep_max    (3)
where Deep_p_i is the depth value of pixel p_i and Deep_max is the maximum depth, i.e. the distance from the nose-tip point to the origin on the Y axis. To enhance depth differences, a zoom factor may be added to the formula. Experiments show that as the depth saliency threshold increases, the salient region shrinks gradually toward the nose tip.
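The depth saliency of formula (3) might look like the sketch below. The form of the zoom factor is an assumption, since the text only says one "can" be added:

```python
import numpy as np

def depth_saliency(depth, zoom=1.0):
    """Formula (3): S(p_i) = Deep_p_i / Deep_max, optionally scaled.

    Deep_max is the nose-tip depth, so saliency peaks at the nose and
    thresholding shrinks the salient region toward the nose tip.
    """
    d = np.asarray(depth, dtype=float)
    return zoom * d / d.max()

s = depth_saliency([1.0, 2.0, 4.0])
# s = [0.25, 0.5, 1.0]: the deepest (nose-tip) pixel is most salient.
```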
A smooth surface has a continuously varying normal direction and stimulates visual attention less than local "cusps or bumps" where the normal direction changes discontinuously. The regions where the normal direction of the face points changes strongly correlate directly with the visually salient regions of the face. This method therefore searches for cusps or bumps on the smooth surface to determine the regions where the normal direction of the three-dimensional face changes greatly, i.e. the salient regions containing parts such as the nose, ears and mouth. To this end, the concept of the "average normal direction" is first introduced: the rough face surface is regarded as a smooth surface, and the normal directions at all points of the surface are averaged to give the average normal direction of the whole surface. The average normal direction Normal_ave can be regarded as approximately perpendicular to the "smooth" parts of the three-dimensional face surface. This can be verified by the fact that, for any two pixels in a non-salient region, the dot product of the vector formed by their position coordinates with the average normal direction is close to 0. The conclusion and its derivation are as follows. Let:
λ = Normal_ave · (v_i - v_j)    (4)
where v_i and v_j are the coordinates, in the XYZ coordinate system, of any two points p_i and p_j in the three-dimensional face image.
When the two points p_i and p_j belong to the same smooth region, and their distance is sufficiently small relative to the scale of the local features, their normal directions are approximately equal; the dot product of their difference vector with the average normal direction Normal_ave of the surface is then close to 0, i.e. λ ≈ 0. Otherwise, λ > ε.
Therefore, the saliency value corresponding to the normal-direction attribute of the three-dimensional face is obtained by formula (5):
S(p_i) = Σ_{p_i, p_j ∈ E} Normal_ave · (v_i - v_j)    (5)
where p_i and p_j are pixels on the three-dimensional face sample, E is the face surface region, v_i and v_j are the coordinates of points p_i and p_j, and Normal_ave is the average normal direction of that surface region. Experiments show that regions where the normal direction "fluctuates" more, such as the ears, nose and mouth, have higher saliency values, while the eyes and eyebrows, whose surface can be regarded as approximately smooth with small "fluctuation", have lower saliency values.
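Formula (5) can be sketched as below, under two stated assumptions: Normal_ave is estimated here as the best-fit plane normal of the patch via SVD rather than by averaging per-vertex normals as the text describes, and an absolute value is taken so that the saliency is a magnitude.

```python
import numpy as np

def normal_saliency(points):
    """Formula (5): S(p_i) = sum over j of Normal_ave . (v_i - v_j)."""
    v = np.asarray(points, dtype=float)
    centered = v - v.mean(axis=0)
    # Best-fit plane normal (smallest right singular vector) stands in
    # for the averaged per-vertex normal Normal_ave of the patch.
    normal_ave = np.linalg.svd(centered)[2][-1]
    # sum_j Normal_ave.(v_i - v_j) = n * Normal_ave.(v_i - mean)
    return np.abs(len(v) * (centered @ normal_ave))

# Four coplanar points plus one point sticking out of the surface:
# the "bump" violates the smoothness assumption (lambda far from 0)
# and gets the largest saliency, matching the cusp/bump intuition.
pts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 1.0]]
s = normal_saliency(pts)
```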
The invention thus gives a concrete definition of pixel saliency with respect to three representative attributes of the three-dimensional face: color, depth and normal direction. On this basis, combining the advantages of the different features and their measurement results, a multi-feature-fusion saliency analysis method for three-dimensional faces is proposed. A single feature, because of the limited information it carries, cannot fully meet the needs of high-precision recognition. Each feature is therefore normalized and, after weights are assigned, a multi-feature fusion strategy based on weighted superposition is used for face matching. The weights are determined by the recognition accuracies of the color, depth and normal-direction features when each is used alone; that is, the stand-alone recognition rate of each feature serves as its contribution rate in fusion recognition. The final distance between two three-dimensional face samples in step (3) is obtained by formula (6):
E(m, n) = Σ_{i=1}^{k} ω_i E(m, n)_i    (6)
where E(m, n) is the final distance between three-dimensional face samples m and n; E(m, n)_i is the distance between samples m and n under salient feature i, measured by the cosine distance of that single feature; k is the total number of salient features used; and the weight ω_i represents the proportion of salient feature i in the final recognition, its value being proportional to that feature's stand-alone recognition rate.
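A sketch of the weighted fusion of formula (6) with cosine distances. The per-feature weights here reuse the stand-alone "All" recognition rates from Table 1 (color 88.10%, depth 69.05%, normal direction 83.33%), normalized to sum to 1; this is one plausible reading of "proportional to the single-feature recognition rate", not necessarily the patent's exact weighting.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cos(a, b): length- and direction-aware, scale-insensitive."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def fused_distance(feats_m, feats_n, rates):
    """Formula (6): E(m,n) = sum_i w_i * E(m,n)_i."""
    w = np.asarray(rates, float)
    w = w / w.sum()                  # weights proportional to the rates
    return sum(wi * cosine_distance(fm, fn)
               for wi, fm, fn in zip(w, feats_m, feats_n))

rates = [0.8810, 0.6905, 0.8333]     # color, depth, normal (Table 1)
m = [[1, 0], [0.2, 0.9], [3, 4]]     # toy per-feature vectors of sample m
d_same = fused_distance(m, m, rates)                 # ~0: identical samples
d_diff = fused_distance([[1, 0]], [[0, 1]], [1.0])   # 1.0: orthogonal features
```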
A specific embodiment of the invention is now given.
The BJUT-3D face database used here contains 126 samples in total, from 42 subjects (34 male, 8 female), each with 3 sets of real neutral three-dimensional face data.
First, the 126 three-dimensional neutral face samples of the BJUT-3D database are divided into three groups of 42 samples each, the three scanned models of each subject being assigned to the three groups in turn. The groups are numbered A, B and C. Each of A, B and C in turn serves as the test set, with the other two groups as the training set, and the sample distances and recognition rates are computed; the final accuracy is the average of the three groups' recognition rates. Since all samples of the BJUT-3D face database are normalized and aligned, the face region is partitioned into eyebrow, eye, nose, ear and mouth sub-regions, and the computed saliency values are then used to calculate the distance for each region in turn.
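The grouping and cross-validation protocol above can be sketched like this (subject indices are illustrative; the distance computation itself is omitted):

```python
# 126 samples = 42 subjects x 3 scans; one scan of each subject goes
# into each of groups A, B, C, and each group serves once as the test
# set against the other two as training data.
groups = {g: [] for g in "ABC"}
for subject in range(42):
    for scan, g in enumerate("ABC"):
        groups[g].append((subject, scan))

folds = [(test, [g for g in "ABC" if g != test]) for test in "ABC"]

accuracies = []
for test, train in folds:
    # ... compute the recognition rate of groups[test] against the
    # samples of the two training groups (omitted) ...
    pass
# The final accuracy reported is the mean of the three fold accuracies.
```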
Experimental results are shown in the table below:

                  Eyebrow    Eye       Nose      Ear       Mouth     All
Color             53.97%     56.03%    66.98%    68.25%    58.73%    88.10%
Depth             25.40%     46.83%    57.94%    38.89%    48.41%    69.05%
Normal direction  36.51%     47.86%    57.94%    59.52%    49.21%    83.33%

Table 1: Single-feature recognition accuracy of the color, depth and normal-direction features
After feature fusion according to formula (6), the experiments reach a recognition accuracy of 91.27%.
Second, the whole database of 126 three-dimensional face samples is used as the training set. Each sample in turn serves individually as the test set and is compared against the training set to compute precision and recall. At a precision of about 90%, the recall of the fusion method is close to 1; that is, while maintaining precision, almost all related samples in the system are retrieved, reflecting the good recall capability of the proposed method.
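The precision/recall evaluation in the paragraph above can be made concrete with a small helper. It assumes, plausibly but not from the text, that the relevant set for a query is the other scans of the same subject:

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of returned samples that are relevant;
    recall = fraction of relevant samples that were returned."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Query: one scan of a subject; relevant: that subject's other scans.
p, r = precision_recall(retrieved=[5, 17, 99], relevant=[5, 17])
# p = 2/3, r = 1.0: all relevant scans found at about 67% precision.
```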
The above is only a preferred embodiment of the invention and does not limit the invention in any form; any simple modification, equivalent variation or adaptation of the above embodiment made according to the technical spirit of the invention still falls within the protection scope of the technical solution of the invention.

Claims (2)

1. A method for carrying out face retrieval in a normalized three-dimensional face database, characterized in that the method comprises the following steps:
(1) Model normalization: the three-dimensional face data are preprocessed by smoothing, denoising, cropping, coordinate correction and alignment; the resulting three-dimensional face sample serves as the normalized three-dimensional face, and a model coordinate system is defined from its enclosing cylinder: the central axis of the cylinder is the Z axis, the direction through the nose-tip point and perpendicular to the Z axis is the Y axis, and the X axis is obtained as the cross product of the Y and Z axes;
(2) Salient regions of the sample are defined and features extracted with visual saliency as the measurement principle: the saliency value of each pixel in the three-dimensional face sample is obtained by formula (1):
S(p_i) = D(p_i, p_1) + D(p_i, p_2) + ... + D(p_i, p_n)    (1)
where S(p_i) is the saliency value of point p_i, n is the number of pixels in the image, and D(·,·) denotes the attribute difference between two pixels;
(3) Features are weighted and superimposed according to the discriminative power of each single feature, giving a multi-feature-fusion similarity measure between faces, by which face retrieval is performed.
2. The method for carrying out face retrieval in a normalized three-dimensional face database according to claim 1, characterized in that in step (2) the saliency values of pixels are analyzed with respect to three representative attributes of the three-dimensional face, namely color, depth and normal direction: the saliency value corresponding to the color attribute is obtained by formula (2):
S(p_i) = |RGB_ave - RGB_p_i| / RGB_max    (2)
where RGB_p_i is the red component of pixel p_i, RGB_max is the maximum red value over all pixels, and RGB_ave is the mean red component over all pixels;
the saliency value corresponding to the depth attribute is obtained by formula (3):
S(p_i) = Deep_p_i / Deep_max    (3)
where Deep_p_i is the depth value of pixel p_i and Deep_max is the maximum depth;
and the saliency value corresponding to the normal-direction attribute is obtained by formula (5):
S(p_i) = Σ_{p_i, p_j ∈ E} Normal_ave · (v_i - v_j)    (5)
where p_i and p_j are pixels on the three-dimensional face sample, E is the face surface region, v_i and v_j are the coordinates of points p_i and p_j, and Normal_ave is the average normal direction of that surface region.
The method for carrying out face retrieval in a normalized three-dimensional face database according to claim 1, characterized in that step (3) uses a multi-feature fusion strategy with the stand-alone recognition rate of each single feature as its weight, the final distance between two three-dimensional face samples being obtained by formula (6):
E(m, n) = Σ_{i=1}^{k} ω_i E(m, n)_i    (6)
where E(m, n) is the final distance between three-dimensional face samples m and n; E(m, n)_i is the distance between samples m and n under salient feature i, measured by the cosine distance of that single feature; k is the total number of salient features used; and the weight ω_i represents the proportion of salient feature i in the final similarity measurement, its value being proportional to that feature's stand-alone recognition rate.
CN201410490567.1A 2014-09-23 2014-09-23 Method for carrying out face retrieval in a normalized three-dimensional face database Expired - Fee Related CN104331412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410490567.1A CN104331412B (en) 2014-09-23 2014-09-23 Method for carrying out face retrieval in a normalized three-dimensional face database

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410490567.1A CN104331412B (en) 2014-09-23 2014-09-23 Method for carrying out face retrieval in a normalized three-dimensional face database

Publications (2)

Publication Number Publication Date
CN104331412A true CN104331412A (en) 2015-02-04
CN104331412B CN104331412B (en) 2018-03-09

Family

ID=52406139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410490567.1A Expired - Fee Related CN104331412B (en) 2014-09-23 2014-09-23 Method for carrying out face retrieval in a normalized three-dimensional face database

Country Status (1)

Country Link
CN (1) CN104331412B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104765768A (en) * 2015-03-09 2015-07-08 深圳云天励飞技术有限公司 Mass face database rapid and accurate retrieval method
CN105447446A (en) * 2015-11-12 2016-03-30 易程(苏州)电子科技股份有限公司 Face recognition method and system based on principal component of rough set
CN107248138A (en) * 2017-06-16 2017-10-13 中国科学技术大学 Human vision conspicuousness Forecasting Methodology in reality environment
WO2019169884A1 (en) * 2018-03-09 2019-09-12 北京大学深圳研究生院 Image saliency detection method and device based on depth information
CN110909755A (en) * 2018-09-17 2020-03-24 阿里巴巴集团控股有限公司 Object feature processing method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120045095A1 (en) * 2010-08-18 2012-02-23 Canon Kabushiki Kaisha Image processing apparatus, method thereof, program, and image capturing apparatus
CN103810503A (en) * 2013-12-26 2014-05-21 西北工业大学 Depth study based method for detecting salient regions in natural image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MING-MING CHENG et al.: "Global Contrast based Salient Region Detection", IEEE International Conference on Computer Vision and Pattern Recognition *
YIN Baocai et al.: "The BJUT-3D three-dimensional face database and its processing technology" (in Chinese), Journal of Computer Research and Development *
CHEN Long: "Research on image retrieval algorithms based on saliency analysis and multi-feature fusion" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN104331412B (en) 2018-03-09


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180309

Termination date: 20210923