CN109190518B - Face verification method based on universal set metric learning - Google Patents
Face verification method based on universal set metric learning
- Publication number
- CN109190518B (application CN201810925973.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- data set
- difference
- representing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a face verification method based on universal set metric learning, belonging to the field of face verification. Features of the face are extracted from the face images in a known face data set alpha and an unknown face data set beta, and the difference amount between data set alpha and data set beta is obtained by using a set metric; a distance metric function and a distance metric standard are obtained from the difference amount; a decision function is constructed by using the distance metric function and the distance metric standard, and the minimum value of the empirical risk function in the constructed decision function is solved by a cross gradient descent algorithm, wherein the known face image corresponding to the minimum value of the empirical risk function is the verification result of the unknown face image. The invention can improve the accuracy of face verification.
Description
Technical Field
The invention relates to the field of face verification, in particular to a face verification method based on universal set metric learning.
Background
With the development of science and technology and the improvement of living standards, automatic face recognition technology has been widely researched and developed, and face recognition has been one of the most popular research subjects in pattern recognition and image processing for about 30 years. Face recognition uses a computer to analyze a face video or image, extract effective face feature information, and finally judge the identity of the face. In general, face recognition problems are divided into two categories: face recognition and face verification. The most common application scene is face unlocking: a photo registered by the user in advance is compared with a photo collected on site by a terminal device to judge whether they show the same person, thereby completing identity authentication.
In recent years, as face recognition technology is applied more and more in daily life, the requirement for face recognition precision grows ever higher. Face verification has become an important research direction in the field of face recognition, and the problems encountered in face verification have aroused the interest of researchers. In a face data set, changes of illumination intensity, changes of scale and view angle, changes of facial expression, and changes of monitoring camera equipment (including low resolution of the target face image) are often encountered in different scenes; these cause the appearance of the same face to differ greatly across scenes, which poses a challenge to face verification technology.
Disclosure of Invention
The invention aims to: provide a face verification method based on universal set metric learning, which solves the technical problem that existing face verification has low accuracy due to a series of issues such as changes of illumination intensity, changes of scale and view angle, changes of facial expression and pose, and differences in the configuration of monitoring cameras.
The technical scheme adopted by the invention is as follows:
a face verification method based on universal set metric learning comprises the following steps:
step 1: respectively extracting the characteristics of the human face from the human face images in the known human face data set alpha and the unknown human face data set beta, and obtaining the difference between the data set alpha and the data set beta by using set measurement;
step 2: obtaining a distance measurement function and a distance measurement standard by using the difference quantity;
step 3: constructing a decision function by using the distance measurement function and the distance measurement standard, and solving the minimum value of the empirical risk function in the constructed decision function by using a cross gradient descent algorithm, wherein the known face image corresponding to the minimum value of the empirical risk function is the verification result of the unknown face image.
Further, in step 1, the face image is also preprocessed, where the preprocessing includes framing the face in the face image and aligning the face image through a rotation operation.
Further, the expression of the difference amount in step 1 is as follows:
F_i^α = [f_i^α − c_1^α; f_i^α − c_2^α; ...; f_i^α − c_{N_r}^α] (1),

F_i^β = [f_i^β − c_1^β; f_i^β − c_2^β; ...; f_i^β − c_{N_r}^β] (2),

wherein f_i^α represents the feature vector of the i-th image in the known face data set alpha, f_i^β represents the feature vector of the i-th image in the unknown face data set beta, F_i^α represents the difference amount of the i-th image in the known face data set alpha, F_i^β represents the difference amount of the i-th image in the unknown face data set beta, N_f represents the number of features of an image, N_r represents the number of images in the reference set, {c_1^α, c_2^α, ..., c_{N_r}^α} represents the reference face image features in the known face data set alpha, and {c_1^β, c_2^β, ..., c_{N_r}^β} represents the reference face image features in the unknown face data set beta.
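For illustration, the difference-amount construction can be sketched in code. This is a minimal sketch under our reading that each difference amount stacks the image's feature vector minus every reference feature; the names `difference_amount` and `refs` are ours, not the patent's:

```python
import numpy as np

def difference_amount(f_i, refs):
    """Stack the differences between one image's feature vector and each
    reference feature vector, giving the matrix-form difference amount.

    f_i  : (N_f,) feature vector of the i-th image
    refs : list of N_r reference feature vectors, each (N_f,)
    Returns an (N_r, N_f) matrix whose j-th row is f_i - c_j.
    """
    return np.stack([f_i - c for c in refs])

# Toy example with N_f = 3 features and N_r = 2 reference images.
f_i = np.array([1.0, 2.0, 3.0])
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
F_i = difference_amount(f_i, refs)
print(F_i.shape)  # (2, 3)
```

Whether references end up as rows or columns is a convention; it only determines which side the left and right projections multiply on.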
Further, the expression of the distance metric function in step 2 is:
d_{L,R}(F_i^α, F_i^β) = ‖L(F_i^α − F_i^β)R‖_F (3),

wherein L represents the left multiplier of the interactive difference projection quantity, R represents the right multiplier of the interactive difference projection quantity, and the subscript F denotes the arithmetic square root of the sum of the squares of all elements (the Frobenius norm).
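Reading this metric as a doubly projected Frobenius norm, d = ‖L(F^α − F^β)R‖_F (our reconstruction from the surrounding definitions; the function name is ours), a sketch is:

```python
import numpy as np

def set_metric_distance(F_a, F_b, L, R):
    """Set-metric distance: Frobenius norm of the doubly projected
    difference, ||L (F_a - F_b) R||_F."""
    return np.linalg.norm(L @ (F_a - F_b) @ R, ord="fro")

# With identity projections the metric reduces to the plain Frobenius
# distance between the two difference amounts.
F_a = np.ones((2, 3))
F_b = np.zeros((2, 3))
d = set_metric_distance(F_a, F_b, np.eye(2), np.eye(3))
print(round(d, 3))  # 2.449, i.e. the square root of 6
```

In the full method L and R are learned rather than fixed to the identity.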
Further, the expression of the distance metric in step 2 is:
b_i = 1 if (F_i^α, F_i^β) ∈ S, and b_i = −1 if (F_i^α, F_i^β) ∈ D (4),

wherein b_i represents the distance metric standard of the i-th image pair, S represents the set of similar pairs, D represents the set of dissimilar pairs, F_i^α represents the difference amount of the i-th image in the known face data set alpha, F_i^β represents the difference amount of the i-th image in the unknown face data set beta, N_f represents the number of features of an image, and N_r represents the number of images in the reference set.
Further, in step 3, the expression of the decision function is:
f(F_i^α, F_i^β; L, R) = T − d_{L,R}(F_i^α, F_i^β) (5),
wherein T represents a global decision threshold;
the expression of the empirical risk function is:
J(L, R) = Σ_i w_i g(b_i f(F_i^α, F_i^β; L, R)) (6),

wherein w_i is the weight of the difference amount of the i-th image and g(·) is a monotonically decreasing loss function.
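A minimal sketch of such a weighted empirical risk follows. The logistic loss used here is our assumption (the text only requires a monotonically decreasing loss), as is the Frobenius-norm form of the distance; all names are ours:

```python
import numpy as np

def empirical_risk(pairs, labels, weights, L, R, T=1.0):
    """J(L, R): weighted sum over image pairs of a monotonically
    decreasing loss applied to b_i * f_i, where f_i = T - d_{L,R}."""
    risk = 0.0
    for (F_a, F_b), b, w in zip(pairs, labels, weights):
        d = np.linalg.norm(L @ (F_a - F_b) @ R, ord="fro")
        f = T - d                              # decision function value
        risk += w * np.log1p(np.exp(-b * f))   # logistic loss, decreasing in b*f
    return risk

# One similar pair (label +1) at zero distance contributes log(1 + e^-T).
F = np.zeros((2, 3))
risk = empirical_risk([(F, F)], labels=[1], weights=[1.0],
                      L=np.eye(2), R=np.eye(3), T=1.0)
```

Minimizing this risk over L and R is what the cross gradient descent step performs.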
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. To make the face verification method more efficient and further improve its accuracy, the invention innovatively uses an effective logic judgment criterion: a locally adaptive decision rule is designed from the original data and feature information during training, and the logic judgment restricts the output to only 1 and -1. This greatly reduces the amount of computation, accelerates the operation, and makes the results clearer and easier to interpret during analysis.
2. On the basis of the traditional method of constructing a distance metric expression with a vector metric, the invention innovatively replaces the vector metric with a set metric. Besides introducing the difference-amount descriptions F_i^α and F_i^β, the interactive difference projection quantities L (as left multiplier) and R (as right multiplier) are introduced; the two projections L and R together form the set metric, so that L acts on each difference and R acts across different differences to make them more distinct, that is, the distance between images of the same face is reduced as much as possible and the distance between different faces is enlarged as much as possible. The metric standard based on set metric learning allows the result to be judged intuitively and improves accuracy to a certain extent. Meanwhile, the parameters and constraints on L are reduced, so that constraints can conveniently be added for specific situations; this has a positive influence on the improvement of face verification technology and favors its application in daily life.
3. During feature extraction, a series of problems such as changes of illumination intensity, changes of scale and view angle, changes of facial pose and expression (such as laughing, crying or making faces), and differences in camera equipment conditions cause many variations of the same face under different conditions, which can lead to failed verification and low accuracy. The difference-amount descriptions F_i^α and F_i^β adopted here are more efficient than the vector feature descriptions used before and greatly weaken the adverse effects caused by changes in environmental factors, scene factors or camera conditions, so that feature description and extraction of the target face image can be performed in a targeted manner.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is the overall flow diagram of the present invention;
FIG. 2 is a diagram showing the construction process of the relationship between the difference amount and the projection quantities;
FIG. 3 is a comparison of the process of converting from a vector distance metric to a collective metric based distance metric in the present invention;
fig. 4 is an explanatory diagram of collective metric learning according to the present invention.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
The present invention is described in detail below with reference to fig. 1-4.
A face verification method based on universal set metric learning comprises the following steps:
step 1: respectively extracting the characteristics of the human face from the human face images in the known human face data set alpha and the unknown human face data set beta, and obtaining the difference between the data set alpha and the data set beta by using set measurement;
step 2: obtaining a distance measurement function and a distance measurement standard by using the difference quantity;
step 3: constructing a decision function by using the distance measurement function and the distance measurement standard, and solving the minimum value of the empirical risk function in the constructed decision function by using a cross gradient descent algorithm, wherein the known face image corresponding to the minimum value of the empirical risk function is the verification result of the unknown face image.
Further, in step 1, the face image is also preprocessed, where the preprocessing includes framing the face in the face image and aligning the face image through a rotation operation.
Further, the expression of the difference amount in step 1 is as follows:
F_i^α = [f_i^α − c_1^α; f_i^α − c_2^α; ...; f_i^α − c_{N_r}^α] (8),

F_i^β = [f_i^β − c_1^β; f_i^β − c_2^β; ...; f_i^β − c_{N_r}^β] (9),

wherein f_i^α represents the feature vector of the i-th image in the known face data set alpha, f_i^β represents the feature vector of the i-th image in the unknown face data set beta, F_i^α represents the difference amount of the i-th image in the known face data set alpha, F_i^β represents the difference amount of the i-th image in the unknown face data set beta, N_f represents the number of features of an image, N_r represents the number of images in the reference set, {c_1^α, c_2^α, ..., c_{N_r}^α} represents the reference face image features in the known face data set alpha, and {c_1^β, c_2^β, ..., c_{N_r}^β} represents the reference face image features in the unknown face data set beta.
Further, the expression of the distance metric function in step 2 is:
d_{L,R}(F_i^α, F_i^β) = ‖L(F_i^α − F_i^β)R‖_F (10),

wherein L represents the left multiplier of the interactive difference projection quantity, R represents the right multiplier of the interactive difference projection quantity, and the subscript F denotes the arithmetic square root of the sum of the squares of all elements (the Frobenius norm).
Further, the expression of the distance metric in step 2 is:
b_i = 1 if (F_i^α, F_i^β) ∈ S, and b_i = −1 if (F_i^α, F_i^β) ∈ D (11),

wherein b_i represents the distance metric standard of the i-th image pair, S represents the set of similar pairs, D represents the set of dissimilar pairs, F_i^α represents the difference amount of the i-th image in the known face data set alpha, F_i^β represents the difference amount of the i-th image in the unknown face data set beta, N_f represents the number of features of an image, and N_r represents the number of images in the reference set.
Further, in step 3, the expression of the decision function is:
f(F_i^α, F_i^β; L, R) = T − d_{L,R}(F_i^α, F_i^β) (12),
wherein T represents a global decision threshold;
the expression of the empirical risk function is:
J(L, R) = Σ_i w_i g(b_i f(F_i^α, F_i^β; L, R)) (13),

wherein w_i is the weight of the difference amount of the i-th image and g(·) is a monotonically decreasing loss function.
Detailed Description of Embodiments
As shown in fig. 1, the method includes: firstly, feature extraction is performed on the collected images, and a set metric is learned in the feature space, with F_i^α and F_i^β describing the difference amounts; the vector features f_i^α and f_i^β are converted into the difference amounts F_i^α and F_i^β, which are defined, together with their distance, as follows:

F_i^α = [f_i^α − c_1^α; f_i^α − c_2^α; ...; f_i^α − c_{N_r}^α] (15)

F_i^β = [f_i^β − c_1^β; f_i^β − c_2^β; ...; f_i^β − c_{N_r}^β] (16)

F_i^α − F_i^β = [(f_i^α − f_i^β) − (c_1^α − c_1^β); (f_i^α − f_i^β) − (c_2^α − c_2^β); ...; (f_i^α − f_i^β) − (c_{N_r}^α − c_{N_r}^β)] (17)

In the present invention, two reference image data sets are selected, where {c_1^α, c_2^α, ..., c_{N_r}^α} represents the reference face image features from data set α and {c_1^β, c_2^β, ..., c_{N_r}^β} represents the reference face image features from data set β; (f_i^α, f_i^β) are the feature descriptions in vector form; (F_i^α, F_i^β) are the feature descriptions in matrix form; and F_i^α − F_i^β represents the distance between the difference amounts, which facilitates the subsequent operations to a certain extent.
The interactive difference projection quantities are introduced, L as the left multiplier and R as the right multiplier; the two projections L and R together form the set metric, from which the matrix-based distance metric representation d_{L,R}(F_i^α, F_i^β) is constructed. The specific distance metric expression is as follows:

d_{L,R}(F_i^α, F_i^β) = ‖L(F_i^α − F_i^β)R‖_F (18)
construction of a decision function F (F)i α,Fi β(ii) a L, R), if f is more than 0, the two related faces are similar; on the contrary, when f is less than or equal to 0, the description is dissimilar. Detailed description of the inventionThe formula is as follows:
f(Fi α,Fi β;L,R)=T-dL,R(Fi α,Fi β) (19)
wherein T serves as the global decision threshold, used mainly for comparison with the distance between the two related faces so as to determine whether they are similar.
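The threshold decision above amounts to a single comparison returning the {1, -1} outputs the method describes. The function name `verify` and the toy matrices below are ours; the Frobenius-norm form of the distance is our reconstruction:

```python
import numpy as np

def verify(F_a, F_b, L, R, T):
    """Logic judgment: return +1 (same person) when f = T - d_{L,R} > 0,
    otherwise -1."""
    f = T - np.linalg.norm(L @ (F_a - F_b) @ R, ord="fro")
    return 1 if f > 0 else -1

F = np.random.rand(2, 3)
same = verify(F, F, np.eye(2), np.eye(3), T=0.5)        # d = 0, so f = T > 0
diff = verify(F, F + 10.0, np.eye(2), np.eye(3), T=0.5)  # d >> T, so f < 0
print(same, diff)  # 1 -1
```

In practice T would be tuned on held-out pairs rather than fixed.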
Finally, the cross gradient descent method is adopted to iteratively solve the objective function J(L, R) and obtain the minimum empirical risk value, wherein the expression of the empirical risk is as follows:

J(L, R) = Σ_i w_i g(b_i f(F_i^α, F_i^β; L, R)) (20)

wherein f is the decision function, g(·) is a monotonically decreasing loss function, and w_i is the weight of the description of the i-th difference amount.
The gradients of the objective function J(L, R) with respect to L and R are then solved, and cross gradient descent is performed iteratively until the set threshold is reached, yielding the optimal solution and the minimum empirical risk value; the known image with the minimum empirical risk value is the verification result of the unknown image.
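The alternating minimization just described can be sketched as follows. Since the closed-form gradient expressions are not reproduced here, this sketch substitutes finite-difference gradients; it illustrates the alternate L-step / R-step structure of cross gradient descent, not the patent's exact update rule, and all names are ours:

```python
import numpy as np

def cross_gradient_descent(J, L, R, lr=0.1, steps=500, tol=1e-6):
    """Alternately descend J in L (with R fixed) and in R (with L fixed)
    until the per-iteration improvement falls below the set threshold."""
    def num_grad(f, X, eps=1e-5):
        # Central finite-difference gradient of scalar f at matrix X.
        G = np.zeros_like(X)
        for idx in np.ndindex(X.shape):
            E = np.zeros_like(X)
            E[idx] = eps
            G[idx] = (f(X + E) - f(X - E)) / (2 * eps)
        return G

    prev = J(L, R)
    for _ in range(steps):
        L = L - lr * num_grad(lambda M: J(M, R), L)  # L-step, R fixed
        R = R - lr * num_grad(lambda M: J(L, M), R)  # R-step, L fixed
        cur = J(L, R)
        if abs(prev - cur) < tol:                    # set threshold reached
            break
        prev = cur
    return L, R

# Toy objective with known minimum at L = R = 0.
J = lambda L, R: float(np.sum(L**2) + np.sum(R**2))
L1, R1 = cross_gradient_descent(J, np.ones((2, 2)), np.ones((2, 2)))
print(J(L1, R1) < 1e-3)  # True
```

A real implementation would plug in the empirical risk J(L, R) over the training pairs and use its analytic gradients.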
Fig. 2 shows the construction process of the difference amount relationship: the effect of the left multiplication is to weight each difference, and the effect of the right multiplication is to act across different differences. Unlike the vector metric methods used before, the present method adopts a set metric and introduces the interactive difference projection quantities, L as the left multiplier and R as the right multiplier; the two projections together form the set metric, meaning that L contributes to each difference and R contributes across different differences. As shown in fig. 2, following the rules of matrix multiplication, the rows of L combine with the columns of the difference amount, and similarly the columns of R combine with the rows of the difference amount, which illustrates how the difference amount relates to the two projections L and R.
Fig. 3 compares the conversion from the vector distance metric to the set-metric-based distance metric. For the face image information in two different data sets, the proposed method improves on previous methods in both feature description and distance metric: the feature description is converted from the vector form (f_i^α, f_i^β) to the matrix form (F_i^α, F_i^β), and the original feature projection L is transformed into the internal difference projections L and R, so the original vector-based distance metric expression d_L(f_i^α, f_i^β) becomes the set-metric-based distance metric expression d_{L,R}(F_i^α, F_i^β). As shown in FIG. 3, N_f is the dimension of a feature vector and N_r is the number of images in the reference set.
Fig. 4 illustrates the proposed set metric learning, which is mainly divided into two parts: a related term, which draws the difference amounts of the same face closer together, and an unrelated term, which pushes the difference amounts of different faces further apart. This constitutes the basic process of set metric learning.
The above-described embodiments are merely preferred implementations of the present invention and are not intended to limit the scope of the invention, which is defined by the claims and their equivalents; all changes falling within the claims and their equivalents are intended to be embraced therein.
Claims (4)
1. A face verification method based on universal set metric learning is characterized in that: the method comprises the following steps:
step 1: respectively extracting the characteristics of the human face from the human face images in the known human face data set alpha and the unknown human face data set beta, and obtaining the difference between the data set alpha and the data set beta by using set measurement;
step 2: obtaining a distance measurement function and a distance measurement standard by using the difference quantity;
step 3: constructing a decision function by using the distance measurement function and the distance measurement standard, and solving the minimum value of an empirical risk function in the constructed decision function by using a cross gradient descent algorithm, wherein a known face image corresponding to the minimum value of the empirical risk function is a verification result of the unknown face image;
the expression of the distance metric function in step 2 is as follows:
d_{L,R}(F_i^α, F_i^β) = ‖L(F_i^α − F_i^β)R‖_F

wherein L represents the left multiplier of the interactive difference projection quantity, R represents the right multiplier of the interactive difference projection quantity, and the subscript F denotes the arithmetic square root of the sum of the squares of all elements; F_i^α represents the difference amount of the i-th image in the known face data set alpha, F_i^β represents the difference amount of the i-th image in the unknown face data set beta, N_f represents the number of features of an image, and N_r represents the number of images in the reference set.
2. The method for verifying the human face based on the universal set metric learning of claim 1, wherein: in the step 1, the method further comprises preprocessing the face image, wherein the preprocessing comprises framing the face in the face image and aligning the face image through rotation operation.
3. The method for verifying the human face based on the universal set metric learning of claim 1, wherein: the expression of the distance metric in step 2 is as follows:
b_i = 1 if (F_i^α, F_i^β) ∈ S, and b_i = −1 if (F_i^α, F_i^β) ∈ D,

wherein b_i represents the distance metric standard of the i-th image pair, S represents the set of similar pairs, D represents the set of dissimilar pairs, F_i^α represents the difference amount of the i-th image in the known face data set alpha, F_i^β represents the difference amount of the i-th image in the unknown face data set beta, N_f represents the number of features of an image, and N_r represents the number of images in the reference set.
4. The method for verifying the human face based on the universal set metric learning of claim 1, wherein: in step 3, the expression of the decision function is:
f(F_i^α, F_i^β; L, R) = T − d_{L,R}(F_i^α, F_i^β)
wherein T represents a global decision threshold;
the expression of the empirical risk function is:
J(L, R) = Σ_i w_i g(b_i f(F_i^α, F_i^β; L, R))

wherein w_i is the weight of the difference amount of the i-th image and g(·) is a monotonically decreasing loss function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810925973.4A CN109190518B (en) | 2018-08-14 | 2018-08-14 | Face verification method based on universal set metric learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810925973.4A CN109190518B (en) | 2018-08-14 | 2018-08-14 | Face verification method based on universal set metric learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109190518A CN109190518A (en) | 2019-01-11 |
CN109190518B true CN109190518B (en) | 2022-03-18 |
Family
ID=64921794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810925973.4A Active CN109190518B (en) | 2018-08-14 | 2018-08-14 | Face verification method based on universal set metric learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109190518B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102592148A (en) * | 2011-12-29 | 2012-07-18 | 华南师范大学 | Face identification method based on non-negative matrix factorization and a plurality of distance functions |
US8233702B2 (en) * | 2006-08-18 | 2012-07-31 | Google Inc. | Computer implemented technique for analyzing images |
CN104765768A (en) * | 2015-03-09 | 2015-07-08 | 深圳云天励飞技术有限公司 | Mass face database rapid and accurate retrieval method |
CN105678260A (en) * | 2016-01-07 | 2016-06-15 | 浙江工贸职业技术学院 | Sparse maintenance distance measurement-based human face identification method |
CN106599833A (en) * | 2016-12-12 | 2017-04-26 | 武汉科技大学 | Field adaptation and manifold distance measurement-based human face identification method |
CN107657223A (en) * | 2017-09-18 | 2018-02-02 | 华南理工大学 | It is a kind of based on the face authentication method for quickly handling more learning distance metrics |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9275269B1 (en) * | 2012-11-09 | 2016-03-01 | Orbeus, Inc. | System, method and apparatus for facial recognition |
Non-Patent Citations (3)
Title |
---|
An Overview and Empirical Comparison of Distance Metric Learning Methods; Panagiotis Moutafis et al.; IEEE Transactions on Cybernetics; 2017-03-30; vol. 47, no. 3; pages 612-625 *
Regularizing face verification nets for pain intensity regression; F. Wang et al.; 2017 IEEE International Conference on Image Processing; 2018-02-22; pages 1087-1091 *
Cross-age face verification based on ensemble face-pair distance learning; Wu Jiaqi et al.; Pattern Recognition and Artificial Intelligence; 2017-12-15; vol. 30, no. 12; pages 1114-1120, section 1.1 paragraphs 2-4, section 1.2 paragraphs 1-3 and 5, figure 1 *
Also Published As
Publication number | Publication date |
---|---|
CN109190518A (en) | 2019-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhai et al. | Detecting vanishing points using global image context in a non-manhattan world | |
Zheng et al. | Cartoon face recognition: A benchmark dataset | |
JP6244059B2 (en) | Face image verification method and face image verification system based on reference image | |
CN111160297A (en) | Pedestrian re-identification method and device based on residual attention mechanism space-time combined model | |
US20180089534A1 (en) | Cross-modiality image matching method | |
CN111104867B (en) | Recognition model training and vehicle re-recognition method and device based on part segmentation | |
Thapar et al. | VGR-net: A view invariant gait recognition network | |
CN105740779B (en) | Method and device for detecting living human face | |
Zhang et al. | Detecting and extracting the photo composites using planar homography and graph cut | |
CN107424161B (en) | Coarse-to-fine indoor scene image layout estimation method | |
Lin et al. | Learning modal-invariant and temporal-memory for video-based visible-infrared person re-identification | |
CN107203745B (en) | Cross-visual angle action identification method based on cross-domain learning | |
WO2019176235A1 (en) | Image generation method, image generation device, and image generation system | |
WO2013075295A1 (en) | Clothing identification method and system for low-resolution video | |
CN108470178B (en) | Depth map significance detection method combined with depth credibility evaluation factor | |
CN107944395B (en) | Method and system for verifying and authenticating integration based on neural network | |
CN109522881A (en) | A kind of examinee information checking method based on recognition of face | |
CN111639580A (en) | Gait recognition method combining feature separation model and visual angle conversion model | |
Liu et al. | Aurora guard: Real-time face anti-spoofing via light reflection | |
Hsu et al. | GAITTAKE: Gait recognition by temporal attention and keypoint-guided embedding | |
Xie et al. | Inducing predictive uncertainty estimation for face recognition | |
CN109190518B (en) | Face verification method based on universal set metric learning | |
JP2013218605A (en) | Image recognition device, image recognition method, and program | |
Zheng et al. | Weight-based sparse coding for multi-shot person re-identification | |
CN115203663B (en) | Small-view-angle long-distance video gait accurate identification identity authentication system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||