CN103294998B - A face visual feature representation method based on attribute space - Google Patents

A face visual feature representation method based on attribute space

Info

Publication number: CN103294998B
Authority: CN (China)
Prior art keywords: face, data, attribute, feature, recognition
Legal status: Expired - Fee Related
Application number: CN201310192441.1A
Other languages: Chinese (zh)
Other versions: CN103294998A
Inventors: 陈雁翔, 董绪文, 刘盛中, 龙润田
Current and original assignee: Hefei University of Technology
Application filed by Hefei University of Technology; priority/filing date: 2013-05-22
Publication of CN103294998A: 2013-09-11
Application granted; publication of CN103294998B: 2016-02-24

Abstract

The invention discloses a face visual feature representation method based on attribute space. The method comprises: carrying out data simulation, preprocessing and data evaluation for 64 global or local face attributes and extracting "low-level" features from the data; estimating the attribute data distributions with Gaussian mixture models to form a face attribute space; projecting the input face data into the attribute space to obtain quantized face visual features; and standardizing the visual features with the Z-score method, so that the input data can be recognized more effectively. The method obtains the features needed to discriminate faces more accurately and can be widely applied to other pattern recognition and advanced intelligence models.

Description

A face visual feature representation method based on attribute space
Technical field
The invention belongs to the technical field of image processing and pattern recognition, and relates to a face visual feature representation method based on attribute space.
Background art
With the rapid development of computers and image processing technology, public safety has received increasing attention from society, and identity recognition technologies of all kinds have emerged one after another. Face recognition has great advantages because it is direct, convenient, friendly and unobtrusive. It involves multiple disciplines such as pattern recognition, computer vision, intelligent human-computer interaction, computer graphics and cognitive science. As a key biometric technology, face recognition has good application prospects in fields such as information security, security surveillance and finance.
Much good work has been done in the field of face recognition and great success has been achieved, but automatic and efficient face recognition by computer still faces many difficulties. The main reason is that faces are complex and highly variable, and face images are affected by factors including illumination, expression, pose, shooting angle and even camera specifications.
Summary of the invention
The object of the invention is to propose a method for studying face visual features in a face attribute space, so as to exploit the contribution of face visual features to face recognition.
The technical solution adopted by the invention is as follows:
The invention provides a face visual feature representation method based on attribute space, comprising the following steps:
Step 1: Face visual features are regarded as face attributes, and global and local face attributes usable for face recognition are obtained. Some basic face attributes, such as gender and race, are prominent attributes for discriminating faces, but face recognition cannot be carried out effectively with these attributes alone; other attributes still need to be obtained as a supplement.
Step 2: Data simulation is carried out for the face attributes chosen in step 1, the data are preprocessed, and "low-level" features are extracted.
Data simulation is carried out as follows: a member of the evaluation group first makes a preliminary selection of the data for each attribute and then distributes the data to the other members of the group; only when all members agree are the data added to that attribute's data set.
For global attributes, such as gender and age, face detection is first performed on the attribute data, and the key-point coordinates returned are used to map the face into a standard coordinate system. Let x' and y' be the pixel coordinates before transformation, x and y the pixel coordinates after transformation, and a, b, c, d, e, f the affine transformation coefficients. The affine transformation is computed as follows:
x = ax' + by' + c
y = dx' + ey' + f
For local attributes, such as eye width or eyebrow shape, the attribute data are divided into 8 face subregions, and the corresponding visual features are obtained within each subregion.
Finally, once preprocessing is complete, SIFT descriptors are extracted from all attribute data as the "low-level" features.
Step 3: A support vector machine (SVM) is used to evaluate the attribute data and obtain an optimized simulation of the attributes and data. After the selected data have been processed in step 2, labelled positive examples (data of the same attribute) and negative examples (data of different attributes) are provided for each attribute to test the rationality of the attribute and its data.
Step 4: A Gaussian mixture model (GMM) is used to obtain the visual features of the input face. The GMM models the attribute data distributions, forming the attribute space. The posterior probability of the input face data under each attribute data distribution in the attribute space is computed, and the face attribute corresponding to each posterior probability is regarded as a visual feature of the face. For an input face feature vector F(I) and the obtained face attributes V_i, i = 1,...,n, the corresponding visual feature is computed as:
V(I) = {V_1(F(I)), ..., V_n(F(I))}
Step 5: The quantized visual features are standardized with the Z-score method; the standardization formula is as follows:
V_i'(F(I)) = (V_i(F(I)) − E(V(I))) / std(V(I))
where E(V(I)) and std(V(I)) are the mean and standard deviation of the data. After standardization, the cosine distance between vectors is used to obtain the similarity of input data A and B:
Sim(A, B) = Σ V_A(I)·V_B(I) / √((Σ V_A(I)²)·(Σ V_B(I)²))
Finally, Sim(A, B) is used to carry out face recognition and verification.
The face visual feature representation method based on attribute space provided by the invention has the following advantages and positive effects:
1. The method predicts each attribute that affects face recognition, carries out data simulation for the attributes, and models the attributes with Gaussian mixture models to form the attribute space.
2. Based on pattern recognition and fusion processing theory, the method exploits the role of face visual features in face recognition. By transforming the "low-level" face features in the attribute space, it reduces the sensitivity of face recognition to variations in pose, illumination, expression and other factors. The result can also be generalized to other pattern recognition models and advanced intelligence models.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention.
Fig. 2 shows evaluation results for part of the face attributes.
Fig. 3 is a schematic diagram of the mapping of a face into the attribute space.
Detailed description of the embodiments
The basic idea of the invention is to model face attributes, associate the input face with the attribute space, use Gaussian mixture models to extract the visual features of the input face, and carry out face recognition and verification on these high-level features.
Following this idea, the flow of the invention is shown in Fig. 1. The method of the invention is further described below with reference to the technical solution and the drawings.
First, the influence of the various face attributes on face recognition is analyzed to obtain global and local face attributes, and data simulation and evaluation are carried out for each attribute. Second, the input face and the simulated data are preprocessed and "low-level" features are extracted. Third, the attributes are modeled with Gaussian mixture models and the visual features of the input face in the attribute space are obtained. Finally, face recognition and verification are carried out.
For input face data A and B, the concrete steps of the method are as follows:
Step 1: Face visual features are regarded as face attributes, and global and local face attributes are predicted. A global attribute is a characteristic observed from the whole face, while a local attribute is taken from a subregion of the face. In practice, the invention selects 64 face attributes.
The selected global face attributes are as follows:
The selected local face attributes are as follows:
Step 2: Data simulation is carried out for the face attributes, the data are preprocessed, and "low-level" features are extracted.
2.1: During face attribute simulation, a member of the evaluation group first makes a preliminary selection of data for the attribute. For example, the data selected for attribute A1 are D_1 = {D_11, D_12, ..., D_1n}. Starting from D_11, each item in D_1 is distributed in turn to the other members of the group for subjective evaluation, with scores of 0 or 1. Only when all evaluation results are 1 is the item accepted and added to that attribute's data; otherwise it is discarded.
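The following minimal Python sketch illustrates the unanimous-agreement rule of step 2.1; the representation of the candidate items and of the group votes is an assumption made for illustration, not part of the patent.

def curate_attribute_data(candidates, votes):
    """Keep a candidate item only if every group member voted 1 for it.

    candidates: pre-selected items D_1 = [D_11, ..., D_1n] for one attribute.
    votes: dict mapping item index -> list of 0/1 scores from the other members.
    """
    return [item for i, item in enumerate(candidates)
            if votes.get(i) and all(v == 1 for v in votes[i])]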
2.2: For global face attributes, face detection is first performed on the attribute data; the three key-point coordinates returned are, in order, the left eye center, the right eye center and the mouth center. A frontal face is set as the standard face, and the three key points are used as reference points for an affine transformation.
The two-dimensional affine transformation formula is as follows:
x = ax' + by' + c
y = dx' + ey' + f
where x' and y' are the pixel coordinates before transformation, x and y the pixel coordinates after transformation, and a, b, c, d, e, f the affine transformation coefficients. The affine transformation coefficients are obtained from the above coordinate information and used to align the face.
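A minimal alignment sketch in Python with OpenCV is given below. The detector that supplies the three key points and the "standard" target positions are assumptions for illustration; cv2.getAffineTransform solves for exactly the six coefficients a, b, c, d, e, f from three point correspondences.

import cv2
import numpy as np

# Assumed standard positions of left eye, right eye and mouth center in a
# 100x100 canvas; the patent does not specify the standard face numerically.
STANDARD_POINTS = np.float32([[30, 40], [70, 40], [50, 80]])

def align_face(image, left_eye, right_eye, mouth, size=(100, 100)):
    """Map a detected face into the standard coordinate system.

    The 2x3 matrix M holds the affine coefficients of
    x = ax' + by' + c, y = dx' + ey' + f.
    """
    src = np.float32([left_eye, right_eye, mouth])
    M = cv2.getAffineTransform(src, STANDARD_POINTS)
    return cv2.warpAffine(image, M, size)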
2.3: For local attributes, such as an open mouth or closed eyes, face detection and face alignment are the same as in step 2.2, but the attribute data are divided into 8 face subregions, namely: forehead, eyebrows, eyes, nose, cheeks, beard, mouth and chin.
2.4: SIFT descriptors are chosen as the "low-level" features of the data. Global attribute data processed according to step 2.2 yield the feature fea_global; local attribute data processed according to step 2.3 yield the features fea_local = {fea_1, ..., fea_n}, where n is the number of face regions. Both fea_global and fea_local are obtained for the input data A and B.
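A sketch of the SIFT extraction of step 2.4 follows, assuming OpenCV 4.4 or later (where SIFT is available as cv2.SIFT_create). The layout of the 8 subregion boxes is an assumption supplied by the caller.

import cv2
import numpy as np

sift = cv2.SIFT_create()

def sift_features(gray_face):
    """Return the SIFT descriptors of an aligned grayscale face image (fea_global)."""
    _, descriptors = sift.detectAndCompute(gray_face, None)
    return descriptors if descriptors is not None else np.empty((0, 128), np.float32)

def local_sift_features(gray_face, regions):
    """fea_local = {fea_1, ..., fea_n}: one descriptor set per face subregion.

    regions: list of (x, y, w, h) boxes for forehead, eyebrows, eyes, nose,
    cheeks, beard, mouth and chin.
    """
    return [sift_features(gray_face[y:y + h, x:x + w]) for x, y, w, h in regions]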
Step 3: The attribute data are evaluated with a support vector machine (SVM) to obtain an optimized simulation of the data and attributes.
3.1: Once the attribute data are determined, 300 positive pairs and 300 negative pairs are formed from the attribute data and some non-attribute data, and the resulting test data are evaluated with a support vector machine (SVM). When the evaluation result of an attribute falls below the benchmark, data simulation must be repeated for that attribute. In practice, an SVM with an RBF kernel is used. Part of the attribute evaluation results are shown in Fig. 2.
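A sketch of this evaluation with scikit-learn is shown below. The cross-validation protocol and the numeric benchmark are assumptions; the patent only specifies an RBF-kernel SVM and a benchmark below which the attribute is re-simulated.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def evaluate_attribute(pos_examples, neg_examples, benchmark=0.75):
    """Score one attribute with an RBF-kernel SVM on labelled examples.

    pos_examples, neg_examples: arrays of shape (n_samples, feature_dim).
    Returns the mean cross-validation accuracy and whether it passes the benchmark.
    """
    X = np.vstack([pos_examples, neg_examples])
    y = np.hstack([np.ones(len(pos_examples)), np.zeros(len(neg_examples))])
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    return score, score >= benchmark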
Step 4: Gaussian mixture models are used to obtain the visual features of the input face, as shown in Fig. 3.
4.1: A Gaussian mixture model (GMM) is used to model the attribute data distribution. For example, the face attribute class C comprises round face, square face, and so on; within the same class (class C), all attributes share the same number of Gaussian components K. X denotes the known data of attribute class C, and the GMM parameters are estimated from X.
Step 4.2: The probability of the input face data under each attribute distribution is computed to form the attribute space. The posterior probability of face data x under attribute class C is:
p(x | Θ) = Σ_{k=1..K} ω_k^C · N(x; μ_k^C, Σ_k^C)
where the attribute class C takes round face, square face, etc., in turn, and ω_k^C, μ_k^C, Σ_k^C are the weight, mean and covariance of the k-th Gaussian of the attribute;
Step 4.3: The face attribute corresponding to each posterior probability is regarded as a visual feature of the face. For the input face feature vector F(I) and the obtained attributes V_i, i = 1,...,n, the visual feature can be expressed as: V(I) = {V_1(F(I)), ..., V_n(F(I))}.
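The sketch below shows how such an attribute space could be built and queried with scikit-learn. Fitting one GMM per attribute, using K = 8 diagonal-covariance components, and normalizing the per-attribute likelihoods into posterior-like scores are assumptions made for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

def build_attribute_space(attribute_data, K=8):
    """Fit one GMM per attribute. attribute_data: dict name -> array (n_samples, dim)."""
    return {name: GaussianMixture(n_components=K, covariance_type="diag",
                                  random_state=0).fit(X)
            for name, X in attribute_data.items()}

def visual_feature(gmms, F):
    """Project the input feature vector F(I) into the attribute space to get V(I)."""
    names = sorted(gmms)
    log_lik = np.array([gmms[n].score_samples(F.reshape(1, -1))[0] for n in names])
    scores = np.exp(log_lik - log_lik.max())   # stabilize before normalizing
    return scores / scores.sum()               # posterior-like weights over attributes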
Step 5: The quantized visual features are standardized with the Z-score method. The standardization process is as follows:
V_i'(F(I)) = (V_i(F(I)) − E(V(I))) / std(V(I))
where E(V(I)) and std(V(I)) are the mean and standard deviation of the data. After standardization, the cosine distance between vectors is used to compute the similarity of inputs A and B. The formula is as follows:
Sim(A, B) = Σ V_A(I)·V_B(I) / √((Σ V_A(I)²)·(Σ V_B(I)²))
Finally, Sim(A, B) is used to carry out face recognition and verification.
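A minimal sketch of step 5 in Python is given below, assuming V_A and V_B are the quantized visual feature vectors of input faces A and B. The verification threshold is an assumption; the patent does not specify its value.

import numpy as np

def zscore(v):
    """Z-score standardization: V_i' = (V_i - E(V)) / std(V)."""
    return (v - v.mean()) / v.std()

def similarity(V_A, V_B):
    """Cosine similarity Sim(A, B) between the standardized feature vectors."""
    a, b = zscore(np.asarray(V_A, float)), zscore(np.asarray(V_B, float))
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def verify(V_A, V_B, threshold=0.5):
    """Accept A and B as the same person when the similarity exceeds the threshold."""
    return similarity(V_A, V_B) >= threshold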

Claims (2)

1. A face visual feature representation method based on attribute space, characterized in that it comprises the following steps:
Step 1: predicting global and local face attributes for face recognition;
Step 2: carrying out data simulation for the face attributes chosen in step 1, preprocessing the data, and extracting low-level features, the data simulation being carried out as follows: a member of an evaluation group first makes a preliminary selection of the data for each attribute and then distributes the data to the other members of the group, and only when all members agree are the data added to that attribute's data;
Step 3: evaluating the attribute data with a support vector machine to obtain an optimized simulation of the attributes and data;
Step 4: modeling the attribute data distributions with Gaussian mixture models to form an attribute space, computing the posterior probability of the input face data under each attribute data distribution in the attribute space, and regarding the face attribute corresponding to each posterior probability as a visual feature of the face; for an input face feature vector F(I) and the obtained face attributes V_i, i = 1,...,n, the corresponding visual feature is computed as:
V(I) = {V_1(F(I)), ..., V_n(F(I))};
Step 5: standardizing the quantized visual features with the Z-score method, the standardization formula being as follows:
V_i'(F(I)) = (V_i(F(I)) − E(V(I))) / std(V(I))
where E(V(I)) and std(V(I)) are the mean and standard deviation of the data;
after standardization, the cosine distance between feature vectors is used to obtain the similarity of input data A and B:
Sim(A, B) = Σ V_A(I)·V_B(I) / √((Σ V_A(I)²)·(Σ V_B(I)²))
finally, Sim(A, B) is used to carry out face recognition and verification.
2. The face visual feature representation method based on attribute space according to claim 1, characterized in that in step 4 the sub-attributes of the same attribute class share the same number of Gaussian mixture components.
CN201310192441.1A 2013-05-22 2013-05-22 A face visual feature representation method based on attribute space Expired - Fee Related CN103294998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310192441.1A 2013-05-22 2013-05-22 A face visual feature representation method based on attribute space CN103294998B (en)


Publications (2)

Publication Number Publication Date
CN103294998A CN103294998A (en) 2013-09-11
CN103294998B 2016-02-24

Family

ID=49095829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310192441.1A Expired - Fee Related CN103294998B (en) A face visual feature representation method based on attribute space

Country Status (1)

Country Link
CN (1) CN103294998B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036151B (en) * 2014-06-30 2017-05-03 北京奇虎科技有限公司 Face attribute value calculation method and system
CN106127104A (en) * 2016-06-06 2016-11-16 安徽科力信息产业有限责任公司 Prognoses system based on face key point and method thereof under a kind of Android platform
CN107704838B (en) * 2017-10-19 2020-09-25 北京旷视科技有限公司 Target object attribute identification method and device
CN109033447B (en) * 2018-08-20 2022-05-31 合肥智圣新创信息技术有限公司 Face recognition data visualization system
CN111967389B (en) * 2020-08-18 2022-02-18 厦门理工学院 Face attribute recognition method and system based on deep double-path learning network


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2408615B (en) * 2003-10-09 2006-12-13 Univ York Image recognition

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777131A (en) * 2010-02-05 2010-07-14 西安电子科技大学 Method and device for identifying human face through double models

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Kind of Human Face Region Detection and Recognition Method Based on Chrominance Information Characteristics; LIN Hai-bo; Control Engineering and Communication Technology (ICCECT), 2012 International Conference on; IEEE; 2012-12-09; pp. 469-472 *
Face recognition method based on multi-feature local and global fusion; 舒畅 et al.; 计算机工程 (Computer Engineering); 2011-10-05; Vol. 37, No. 19; pp. 145-147, 156 *

Also Published As

Publication number Publication date
CN103294998A (en) 2013-09-11


Legal Events

C06: Publication
PB01: Publication
C10: Entry into substantive examination
SE01: Entry into force of request for substantive examination
C14: Grant of patent or utility model
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee
Granted publication date: 2016-02-24
Termination date: 2019-05-22