CN105469076A - Face comparing verification method based on multi-instance learning - Google Patents

Face comparing verification method based on multi-instance learning

Info

Publication number
CN105469076A
CN105469076A
Authority
CN
China
Prior art keywords
face
equity
facial image
instance learning
instance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511020705.0A
Other languages
Chinese (zh)
Other versions
CN105469076B (en)
Inventor
陈友斌
廖海斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Micropattern Corp
Original Assignee
Dongguan Micropattern Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Micropattern Corp filed Critical Dongguan Micropattern Corp
Priority to CN201511020705.0A priority Critical patent/CN105469076B/en
Publication of CN105469076A publication Critical patent/CN105469076A/en
Application granted granted Critical
Publication of CN105469076B publication Critical patent/CN105469076B/en
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face comparison verification method based on multi-instance learning, applied to person-certificate consistency verification. The method performs face comparison verification under the multi-instance learning framework and comprises the following steps: S1, face image preprocessing; S2, face multi-instance learning training; and S3, face verification. The preprocessing step comprises face detection, feature point location and DoG illumination processing; the training step comprises face multi-instance definition, multi-instance feature extraction and multi-instance feature fusion; and the verification step performs face consistency verification according to the equity (weight) of each instance obtained in step S2 and the similarity of the matched instance pairs. The method addresses the difficulties that changes in hair style, skin color, makeup, minor plastic surgery and the like pose for face comparison verification, provides an effective algorithm and line of thought for face verification, and improves its reliability. The method can be widely applied to person-certificate consistency verification, i.e. checking whether certificates such as second-generation ID cards, passports, driving licenses and student ID cards are held by their owners.

Description

Face comparison verification method based on multi-instance learning
Technical field
The present invention relates to the technical fields of image processing, pattern recognition and computer vision, and in particular to a face verification method based on multi-instance learning.
Background art
Face comparison verification determines whether a face to be identified belongs to a claimed person, i.e. the "same or not" problem; it is a one-to-one matching process. The system first retrieves the stored face image according to the identity claimed by the person to be identified (e.g. a name or user name), then compares the stored image with the face image to be identified under some decision or matching rule, and thereby judges the authenticity of the claimed identity. Face comparison verification can be widely applied to automatic computer-based person-certificate consistency verification of certificates such as second-generation ID cards, passports, driver's licenses, admission cards for entrance examinations, entry-exit passes and student ID cards.
After more than half a century of development, the theory of face comparison verification is largely mature. In practical applications, however, changes caused by aging and by hair style, skin color and minor plastic surgery can make the reliability of a system decline sharply, and most current face comparison methods find it difficult to overcome the influence of such changes.
It remains difficult for computers to match human ability in face comparison. The main reason is that changes in illumination, hair style, age, makeup, minor plastic surgery and other factors between the two compared faces all strongly affect accuracy; how to eliminate the influence of these factors is a problem in urgent need of a solution.
In the mid-to-late 1990s, T. G. Dietterich et al. studied a drug activity prediction problem. The goal was to let a learning system, by analyzing molecules known to be suitable or unsuitable for making drugs, predict as correctly as possible whether a new molecule is suitable for drug manufacture. To solve this problem, Dietterich et al. treated each molecule as a bag and each low-energy conformation of the molecule as an instance in the bag, thereby proposing the concept of multi-instance learning. Because multi-instance learning has unique properties and broad application prospects, and had been a blind spot of earlier machine learning research, it caused a great stir in the international machine learning community and is regarded as a new learning framework. In face comparison, expression changes and occlusions introduce interference into the recognition decision and degrade recognition performance. If the information of multiple face instances is used instead, an algorithm can assign each part a different weight according to its adaptability to expression and occlusion, and combine the per-part results with a fusion algorithm, thereby improving the accuracy of the final decision. The present invention therefore proposes a multi-instance face comparison verification method.
Applying multi-instance learning to face comparison verification is a new idea, but it does not exist in isolation in the face recognition field: current part-based, component-based, local and block-based face recognition methods are its forerunners. Those methods, however, merely use simple face partitioning to handle variations in expression, pose and occlusion; variations in hair style, age, makeup and minor plastic surgery have not been studied in depth.
Summary of the invention
The object of the present invention is to overcome the shortcomings and deficiencies of existing face comparison verification techniques by providing a face comparison verification method based on multi-instance learning which overcomes problems such as hair style, makeup and minor plastic surgery changes in face verification, and which provides an effective algorithm and line of thought for face comparison verification.
The object of the present invention is achieved through the following technical solution:
A face comparison verification method based on multi-instance learning comprises the following steps:
a face image preprocessing step: performing face detection and key point location on the two compared images, normalizing them to the same size, and applying illumination processing;
a face multi-instance learning training step: performing face multi-instance definition, multi-instance feature extraction and multi-instance feature fusion, and computing the equity (weight) of each instance feature vector;
a face verification step: fusing the equity of each instance pair and formulating a corresponding voting criterion to perform face verification.
Preferably, the face image preprocessing step specifically comprises:
performing face detection on each of the two compared images with an AdaBoost algorithm or a deep learning algorithm and extracting clean face images;
locating the face key feature points with a face key point extraction algorithm (e.g. ASM, SDM or deep learning) and performing face alignment and normalization according to the located key points;
performing face image illumination processing with a DoG filter.
Preferably, the face multi-instance learning training step specifically comprises:
a face image multi-instance definition sub-step: adopting a suitable face multi-instance definition scheme for the hair style, skin color, makeup and minor plastic surgery changes that may exist in face verification;
a multi-instance feature extraction sub-step: extracting the texture features of each face instance with LBP and the orientation and scale features with the SIFT (or SURF) algorithm, so that the extracted face instance image features are robust and complementary;
a multi-instance feature fusion sub-step: computing the equity of each face instance image feature vector to provide the basis for the final decision.
Preferably, the face multi-instance definition scheme is specifically as follows:
the face image is divided into a first layer corresponding to the global face and a second layer corresponding to local regions, wherein 3 global face instance images are defined under the first layer and 12 local face instance images are defined under the second layer.
Preferably, multi-instance feature extraction is performed with the LBP method and the SIFT (or SURF) algorithm respectively on the 15 face instance images defined in the face image multi-instance definition sub-step, wherein the LBP method extracts face texture features and the SIFT (or SURF) algorithm extracts face orientation and scale features.
Preferably, the equity of a face image feature vector comprises static equity and dynamic equity. The static equity is obtained by off-line training on a large number of samples and is fixed original equity; the dynamic equity is computed on-line from the characteristics of the paired images themselves and is dynamically changing additional equity; the equity of each face instance image under the two different features is computed with an equity allocation scheme combining static and dynamic equity.
Preferably, the multi-instance feature fusion sub-step specifically comprises:
computing the static equity and dynamic equity of each face instance image pair respectively, where the static equity is computed as follows:
collecting a number of matched face image pairs as training samples, extracting 30 instance feature vectors from the training samples by the multi-instance feature extraction sub-step, and computing the similarity s between corresponding instances for all image pairs;
determining the decision threshold φ and the resolution of each instance from the inter-instance similarities s, and from the resolution computing the confidence of each instance as its static equity, denoted Ω.
Preferably, the multi-instance feature fusion sub-step specifically comprises:
computing the static equity and dynamic equity of each face instance image feature vector respectively, where the dynamic equity is determined by three factors: the entropy of the face image instance, the mutual information of the matched instance face image pair, and the confidence of the matched face image pair;
wherein, for a given face image instance I(x, y), its entropy is:

E[I(x, y)] = -\sum_{i=1}^{N_g} p_i \log(p_i)

where p_i is the probability of the i-th gray level and N_g is the total number of gray levels;

for a given matched instance face image pair {I_1(x, y) : I_2(x, y)}, its mutual information is:

MI(I_1, I_2) = \sum_{x \in I_1} \sum_{y \in I_2} p(x, y) \log\left(\frac{p(x, y)}{p_1(x) p_2(y)}\right)

where p(x, y) is the joint probability distribution of I_1 and I_2, and p_1(x), p_2(y) are the marginal probability distributions of I_1 and I_2 respectively;

for a given matched instance face image pair {I_1(x, y) : I_2(x, y)}, its confidence is:

C(I_1, I_2) = \frac{2|s - \varphi|}{\varphi}

where s is the similarity of the instance image pair and φ is the decision threshold of the instance.
Preferably, the equity of the face image instance feature vector is:

w_i = \Omega_i + (E_i^2 \times MI_i + C_i), \quad i \in [1, 2, \ldots, 30].
Preferably, the multi-instance fusion criterion is:
an equity voting scheme is adopted: the decision result of each instance is multiplied by its corresponding equity to obtain its vote value, and the vote values of all instances are fused to obtain the comparison result.
Compared with the prior art, the present invention has the following advantages and effects:
1) The present invention extracts multi-instance face features with the LBP and SIFT (or SURF) algorithms, where LBP extracts face texture features and SIFT (or SURF) extracts face orientation and scale features. The two kinds of features are therefore complementary; both are also fast to compute and fully meet the real-time requirements of practical applications.
2) Because the present invention verifies faces on the multi-instance learning principle, it can solve difficult problems such as hair style, skin color, makeup and minor plastic surgery changes in automatic face verification, improving the reliability of the system.
3) The multi-instance fusion criterion adopted by the present invention is flexible in application. For example, in an extremely loose application scenario a one-vote-pass scheme can be adopted: as long as one instance judges the pair to be the same person, verification passes. In an extremely strict application scenario majority voting can be adopted: verification passes only when more than half of the instances judge the pair to be the same person. In a general application scenario, verification passes when one third of the instances judge the pair to be the same person.
Description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is an application example diagram of second-generation ID card fraud verification based on multi-instance learning disclosed in the present invention;
Fig. 2 is a schematic diagram of the face image multi-instance definition under the first layer in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the face image multi-instance definition under the second layer in an embodiment of the present invention.
Detailed description of the embodiments
To make the object, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and an embodiment. As an example, the embodiment is described in terms of second-generation ID card fraud verification. It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it.
The terms "first", "second", "third", "fourth" and the like in the specification, claims and drawings of the present invention are used to distinguish different objects rather than to describe a particular order. Furthermore, the terms "comprise" and "have" and any variants thereof are intended to cover non-exclusive inclusion: a process, method, system, product or device containing a series of steps or units is not limited to the listed steps or units, but optionally further comprises steps or units that are not listed, or other steps or units inherent to the process, method, product or device.
The embodiment is described in detail below.
Embodiment
The second-generation ID verification method based on multi-instance learning disclosed in this embodiment performs second-generation ID card fraud verification according to the idea of multi-instance learning and mainly comprises the following steps:
S1 face image acquisition, S2 face image preprocessing, S3 face multi-instance learning training and S4 ID card fraud verification, as shown in Fig. 1. The face image acquisition step comprises acquiring the ID card surface image and the on-site face image; the face image preprocessing step comprises 3 sub-steps: AdaBoost face detection, ASM facial feature point location and DoG illumination processing; the face multi-instance learning training step comprises 3 sub-steps: face multi-instance definition, multi-instance feature extraction and multi-instance feature fusion; and the ID card fraud verification step performs comprehensive ID card verification according to the equity and decision result of each instance obtained in the third step. Each step is described in detail below:
Step S1, face image acquisition
The second-generation ID card is scanned to obtain the face image on the card surface as the first face image; at the same time, the on-site face image of the card holder is captured as the second face image.
The first face image and the second face image are acquired by two dedicated image acquisition devices respectively.
Step S2, face image preprocessing
Face detection and key point location are performed on the collected first and second face images, which are then normalized to the same size, and illumination processing is applied.
Because the collected face images usually contain considerable background interference, the method of the present invention performs face detection with the AdaBoost algorithm to extract clean face images. The active shape model (ASM) method is then used to locate the face key feature points, and face alignment and normalization are performed according to the located key points. Finally, to overcome the influence of illumination, a DoG filter is applied to the face images.
(1) The AdaBoost algorithm is an iterative algorithm. Its core idea is to train different classifiers (weak classifiers) on the same training set and then assemble these weak classifiers into a stronger final classifier (strong classifier). The algorithm is realized by changing the data distribution: the weight of each sample is set according to whether it was classified correctly in each round and the overall accuracy of the previous round. The re-weighted data set is handed to the next-level classifier for training, and the classifiers obtained in all rounds are finally fused into the final decision classifier. Using an AdaBoost classifier can discard unnecessary training data features and concentrate on the key training data.
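For illustration, a minimal face detection sketch in Python with OpenCV follows; the pretrained Haar cascade bundled with OpenCV stands in for a detector trained as described above, and keeping the largest detection is an illustrative choice:

import cv2

def detect_clean_face(image_path):
    """Detect the largest face with an AdaBoost cascade and crop it out."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Pretrained Haar/AdaBoost cascade shipped with OpenCV (stand-in detector).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection as the "clean" face region.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return image[y:y + h, x:x + w]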
(2) The active shape model (ASM) is a relatively mature facial feature point location method. It searches locally around each feature point with a local texture model while the shape formed by the set of feature points is constrained by a global statistical model; the two are iterated alternately until convergence to an optimal shape.
ASM is built on the point distribution model (PDM). From the feature points of the training image samples it obtains statistics of their distribution and the directions of variation the points are allowed to take, so that the corresponding feature points can be located on a target image. All feature points of the training samples are annotated manually and their coordinates recorded, and the local gray-level model of each feature point is computed as the feature vector used to adjust that point during search. The trained model is placed on the target image; when searching for the next position of each feature point, the local gray-level model is used to find, along the assigned direction of the current point, the position with the minimum Mahalanobis distance to the model, called the suggested point, to which the current point is moved. Doing this for all points yields one search shape; the current model is then adjusted through its parameters so that it best resembles the suggested shape, and the search iterates until convergence.
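A full ASM fit is too long for a short example, but the alignment normalization it enables can be sketched. In the following Python/OpenCV snippet the eye coordinates are assumed to come from the key point locator, and the 128-pixel output size and canonical eye positions are illustrative choices, not values fixed by the patent:

import cv2
import numpy as np

def align_by_eyes(image, left_eye, right_eye, size=128):
    """Rotate and scale the face so the eye centers land at fixed positions."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))        # in-plane rotation angle
    scale = (0.5 * size) / np.hypot(dx, dy)       # target inter-eye distance
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, scale)
    # Shift the eye midpoint to a canonical position in the output crop.
    M[0, 2] += 0.5 * size - center[0]
    M[1, 2] += 0.35 * size - center[1]
    return cv2.warpAffine(image, M, (size, size))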
(3) DoG filter. In computer vision, Difference of Gaussians (DoG) is an enhancement algorithm that subtracts one blurred version of an original grayscale image from another. The blurred images are obtained by convolving the original image with Gaussian kernels of different standard deviations; Gaussian blurring suppresses only high-frequency information. Subtracting one image from the other preserves the spatial information that lies in the band of frequencies retained in one image but not the other, so the DoG filter is equivalent to a band-pass filter that removes all frequencies of the original image except those in the retained band.
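A minimal sketch of this illumination step in Python with OpenCV; the sigma values 1.0 and 2.0 are illustrative rather than prescribed by the patent:

import cv2
import numpy as np

def dog_filter(gray, sigma1=1.0, sigma2=2.0):
    """Band-pass the image by subtracting two Gaussian blurs (DoG)."""
    g1 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma1)
    g2 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma2)
    dog = g1 - g2
    # Rescale to 8-bit range for the downstream feature extractors.
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)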
Step S3, face multi-instance learning training
This step is the emphasis and key of the method of the present invention; it mainly comprises three sub-steps: face image multi-instance definition, multi-instance feature extraction and multi-instance feature fusion.
S31, face image multi-instance definition
A suitable face multi-instance definition scheme is adopted for the hair style, skin color, makeup and minor plastic surgery changes that may exist in second-generation ID card fraud verification.
The present invention divides the face image into two layers, the first layer and the second layer, corresponding to the global face and local regions respectively. 3 global face instance images are defined under the first layer, as shown in Fig. 2, and 12 local face instance images are defined under the second layer, as shown in Fig. 3. The global face instances under the first layer retain whole-face information such as the facial contour shape and are robust to age, resolution and makeup changes. The local face instance images under the second layer retain the key local information of the face and are robust to hair style, expression and minor plastic surgery changes. For example, double-eyelid surgery or rhinoplasty necessarily changes some local region and thereby affects the extraction of global features, but most local regions remain unchanged; adopting the instance definition under the second layer can therefore overcome the influence of minor plastic surgery changes.
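The patent fixes only the counts (3 global plus 12 local instances); the actual regions are those of Figs. 2 and 3, which are not reproduced here. The following Python sketch therefore uses purely hypothetical boxes, expressed as fractions of the normalized face, to show how the 15 instance images would be cropped:

# Hypothetical instance regions (x, y, w, h) as fractions of the aligned face.
# The real regions are those of Figs. 2 and 3; these are placeholders.
GLOBAL_REGIONS = [(0.0, 0.0, 1.0, 1.0),   # whole face
                  (0.1, 0.1, 0.8, 0.8),   # inner face
                  (0.0, 0.2, 1.0, 0.6)]   # center band
LOCAL_REGIONS = [(x, y, 0.25, 0.25)       # 4x3 grid of local patches
                 for y in (0.1, 0.4, 0.7) for x in (0.0, 0.25, 0.5, 0.75)]

def crop_instances(face):
    """Cut the normalized face into the 15 instance images."""
    H, W = face.shape[:2]
    instances = []
    for fx, fy, fw, fh in GLOBAL_REGIONS + LOCAL_REGIONS:
        x, y = int(fx * W), int(fy * H)
        w, h = int(fw * W), int(fh * H)
        instances.append(face[y:y + h, x:x + w])
    return instances  # 3 global + 12 local = 15 instances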
S32, multi-instance feature extraction
To make the extracted features robust and complementary, the present invention uses the classical LBP (Local Binary Pattern) and SIFT (Scale-Invariant Feature Transform) (or SURF) algorithms to extract the multi-instance features, where LBP extracts face texture features and SIFT (or SURF) extracts face orientation and scale features.
LBP (local binary patterns) is a simple and effective feature extraction algorithm for texture classification and is used here for face texture feature extraction. LBP is an operator that describes the local texture of an image and has notable advantages such as rotation invariance and gray-scale invariance. The various LBP patterns of a face image reflect the texture of each region clearly while de-emphasizing smooth regions of little research value and reducing the feature dimension. The LBP operator can also largely eliminate the influence of illumination: as long as an illumination change does not alter the ordering of the values of two pixels, the LBP value does not change, so LBP-based recognition algorithms solve the illumination variation problem to some extent. When the illumination changes unevenly, however, the ordering between pixels is disturbed and the corresponding LBP pattern changes accordingly.
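A minimal sketch of the rotation-invariant uniform LBP feature with per-block histograms, in Python with scikit-image; the parameters P = 8, R = 1 and the 2x2 block grid are illustrative choices:

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(gray, P=8, R=1, grid=(2, 2)):
    """Concatenate rotation-invariant uniform LBP histograms over blocks."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns + "non-uniform" bin
    gh, gw = grid
    bh, bw = lbp.shape[0] // gh, lbp.shape[1] // gw
    hists = []
    for i in range(gh):
        for j in range(gw):
            block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            h, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(h / max(h.sum(), 1))   # per-block normalization
    return np.concatenate(hists)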
SIFT (Scale-Invariant Feature Transform) is an algorithm for detecting local features. It obtains features by finding the interest points (or corner points) in an image together with descriptors of their scale and orientation, and performs image feature point matching with good results. The detailed steps are: 1) construction of the scale space; 2) detection of scale-space extreme points; 3) accurate localization of the extreme points; 4) assignment of an orientation parameter to each key point; 5) generation of the key point descriptors.
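A minimal sketch of SIFT-based instance matching in Python with OpenCV; the 0.75 ratio-test threshold is the conventional value, and using the fraction of surviving matches as the similarity s is an illustrative choice, since the patent does not fix the similarity measure:

import cv2

def sift_similarity(inst1, inst2):
    """Match SIFT keypoints between two instance images; return a score."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(inst1, None)
    kp2, des2 = sift.detectAndCompute(inst2, None)
    if des1 is None or des2 is None:
        return 0.0
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    return len(good) / max(min(len(kp1), len(kp2)), 1)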
The classical LBP and SIFT methods are applied to the 15 face instance images defined in step S31. LBP features are invariant to gray-scale intensity; to make them rotation invariant as well, the present invention extracts rotation-invariant uniform LBP features, and to make them more robust it further adopts a multi-scale block LBP feature extraction method. Because LBP mainly extracts the local texture of an instance while SIFT mainly extracts face shape features, the two extraction methods are complementary. Both methods are also fast, discriminative, and invariant to rotation and illumination intensity. After LBP and SIFT processing of the 15 face instance images, 30 instance feature vectors are obtained.
S33, multi-instance feature fusion
The purpose of this step is to compute the equity (weight) of each instance feature vector to provide the basis for the final decision. The equity of each instance is the sum of its static equity and dynamic equity.
Because different features of different instances differ in discriminative ability, the present invention proposes an equity allocation scheme combining static and dynamic equity to compute the weight of each instance under the two different features. The static equity is obtained by off-line training on a large number of samples and is fixed original equity; the dynamic equity is computed on-line from the characteristics of the paired images themselves and is dynamically changing additional equity.
A. Static equity allocation
First, a number of matched face image pairs (on-site and ID card surface) are collected. In this embodiment, 1000 face image pairs are collected as training samples, of which 500 pairs are of the same person and 500 pairs are of different people.
Then, all training samples are processed by the above steps to obtain the 30 instance feature vectors of each image pair, and the similarity s between corresponding instances is computed for all image pairs.
Finally, the decision threshold φ and the resolution of each instance are determined from the inter-instance similarities, and from the resolution the confidence of each instance is computed as its static equity, denoted Ω.
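The text does not spell out how the resolution maps to the confidence Ω; the sketch below (Python/NumPy) makes the plausible assumption that φ is chosen to maximize classification accuracy on the genuine/impostor similarity scores of the training pairs and that this accuracy serves as the static equity:

import numpy as np

def static_equity(genuine_s, impostor_s):
    """Pick the best threshold phi and use training accuracy as equity."""
    genuine_s, impostor_s = np.asarray(genuine_s), np.asarray(impostor_s)
    candidates = np.unique(np.concatenate([genuine_s, impostor_s]))
    best_phi, best_acc = 0.0, 0.0
    for phi in candidates:
        # Balanced accuracy: genuine pairs should score >= phi, impostors < phi.
        acc = 0.5 * ((genuine_s >= phi).mean() + (impostor_s < phi).mean())
        if acc > best_acc:
            best_phi, best_acc = phi, acc
    return best_phi, best_acc   # (phi, Omega) for this instance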
B. Dynamic equity allocation
The dynamic equity is allocated dynamically according to the specific application environment and is adaptive. The dynamic equity of each instance is determined by three factors: the entropy of the instance image, the mutual information of the instance pair, and the confidence of the instance pair.
1. For a given instance image I(x, y), its entropy is:

E[I(x, y)] = -\sum_{i=1}^{N_g} p_i \log(p_i)

where p_i is the probability of the i-th gray level and N_g is the total number of gray levels. The larger the entropy, the more the instance contributes to discrimination, and the more equity it is allocated.
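A direct transcription of the entropy formula in Python with NumPy, assuming 8-bit gray levels:

import numpy as np

def instance_entropy(gray):
    """E[I] = -sum_i p_i log(p_i) over the gray-level histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty levels: 0*log(0) := 0
    return float(-(p * np.log(p)).sum())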
2. For a given instance image pair {I_1(x, y) : I_2(x, y)}, its mutual information is:

MI(I_1, I_2) = \sum_{x \in I_1} \sum_{y \in I_2} p(x, y) \log\left(\frac{p(x, y)}{p_1(x) p_2(y)}\right)

where p(x, y) is the joint probability distribution of I_1 and I_2, and p_1(x), p_2(y) are the marginal probability distributions of I_1 and I_2 respectively. The computation can be converted into that of joint entropy and conditional entropy. The larger the mutual information between the instance images, the more likely they are of the same person, and the more equity the pair is allocated.
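A sketch of the mutual information computation via a joint gray-level histogram, in Python with NumPy; the two instance images are assumed to be the same size (true after normalization), and the 64-bin quantization is illustrative:

import numpy as np

def mutual_information(img1, img2, bins=64):
    """MI(I1, I2) from the joint histogram of co-located gray levels."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal of I1
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal of I2
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())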
3. For a given instance image pair {I_1(x, y) : I_2(x, y)}, its confidence is:

C(I_1, I_2) = \frac{2|s - \varphi|}{\varphi}

where s is the similarity of the instance image pair and φ is the decision threshold of the instance. The larger the confidence of the instance image pair, the more equity it is allocated.
According to the above static and dynamic equity, the final equity of each instance is:

w_i = \Omega_i + (E_i^2 \times MI_i + C_i), \quad i \in [1, 2, \ldots, 30]
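Putting the pieces together, a sketch of the per-instance confidence and final equity computation in Python with NumPy, where Omega, E, MI and C are the quantities defined above with one entry per instance pair:

import numpy as np

def pair_confidence(s, phi):
    """C = 2|s - phi| / phi for one instance pair."""
    return 2.0 * abs(s - phi) / phi

def final_equity(Omega, E, MI, C):
    """w_i = Omega_i + (E_i^2 * MI_i + C_i), one value per instance pair."""
    Omega, E, MI, C = map(np.asarray, (Omega, E, MI, C))
    return Omega + (E ** 2 * MI + C)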
Step S4, ID card fraud verification
First, the decision result of each instance is obtained from the similarity s of its image pair and the decision threshold φ: a pair judged to be the same person is denoted +1 and a pair judged to be different people is denoted -1.
Then, equity voting is performed according to the decision result of each instance, and face consistency verification is carried out on the fused vote count, where w_i is the equity value of each instance and ω is an adjustable parameter controlling the pass criterion.
The multi-instance fusion criterion is flexible in application. For example, in an extremely loose application scenario a one-vote-pass scheme can be adopted: as long as one instance judges the pair to be the same person, verification passes; in this case ω = 0. In an extremely strict application scenario majority voting can be adopted: verification passes only when more than half of the instances judge the pair to be the same person; in this case ω = 1/2. In a general application scenario verification passes when more than one third of the instances judge the pair to be the same person; in this case ω = 1/3. In this embodiment ω = 1/3.
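The vote-count formula itself is not reproduced in this text. A plausible reading, consistent with the ω values quoted above, is that verification passes when the total equity voting for "same person" exceeds the fraction ω of the total equity; the sketch below (Python/NumPy) implements that assumed rule:

import numpy as np

def equity_vote(s, phi, w, omega=1/3):
    """Assumed rule: pass if equity voting for 'same' exceeds omega of total."""
    s, phi, w = map(np.asarray, (s, phi, w))
    d = np.where(s >= phi, 1, -1)          # per-instance decision (+1 / -1)
    votes_same = w[d == 1].sum()           # equity voting for "same person"
    if omega == 0:                         # one-vote-pass scheme
        return bool((d == 1).any())
    return votes_same > omega * w.sum()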
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited to it; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.

Claims (10)

1. A face comparison verification method based on multi-instance learning, characterized in that it comprises the following steps:
a face image preprocessing step: performing face detection and key point location on the two compared images, normalizing them to the same size, and applying illumination processing;
a face multi-instance learning training step: performing multi-instance definition, multi-instance feature extraction and multi-instance feature fusion training and learning on the face images, computing the confidence of each instance feature vector, and from it computing the weight of the instance in the overall decision, i.e. its equity;
a face verification step: weighting the decision result of each instance by its equity in a vote, and setting a flexible vote-counting criterion to perform the final face verification.
2. The face verification method based on multi-instance learning according to claim 1, characterized in that the face image preprocessing step specifically comprises:
performing face detection with an AdaBoost algorithm or a deep learning algorithm and extracting clean face images; locating the face key point coordinates with a face key point location algorithm and performing face alignment and normalization; and performing face image illumination processing with a DoG filter.
3. The face verification method based on multi-instance learning according to claim 1, characterized in that the face multi-instance learning training step specifically comprises:
a face image multi-instance definition sub-step: adopting a suitable face multi-instance definition scheme for the hair style, skin color, makeup and minor plastic surgery changes that may exist in the comparison;
a multi-instance feature extraction sub-step: extracting the texture features of each face instance with LBP and the orientation and scale features with the SIFT or SURF algorithm, so that the extracted face instance image features are robust and complementary;
a multi-instance feature fusion sub-step: computing the equity of each face instance image feature vector to provide the basis for the final comprehensive decision.
4. The face verification method based on multi-instance learning according to claim 3, characterized in that the face multi-instance definition scheme is specifically as follows:
the face image is divided into a first layer corresponding to the global face and a second layer corresponding to local regions, wherein 3 global face instance images are defined under the first layer and 12 local face instance images are defined under the second layer.
5. The face verification method based on multi-instance learning according to claim 4, characterized in that multi-instance feature extraction is performed with the LBP method and the SIFT (or SURF) algorithm respectively on the 15 face instance images defined in the face image multi-instance definition sub-step, wherein the LBP method extracts face texture features and the SIFT (or SURF) algorithm extracts face orientation and scale features.
6. The face verification method based on multi-instance learning according to claim 3, characterized in that the equity of the face image feature vector comprises static equity and dynamic equity, wherein the static equity is obtained by off-line training on a large number of samples and is fixed original equity, the dynamic equity is computed on-line from the characteristics of the paired images themselves and is dynamically changing additional equity, and the equity of each face instance image under the two different features is computed with an equity allocation scheme combining static and dynamic equity.
7. The face verification method based on multi-instance learning according to claim 6, characterized in that the multi-instance feature fusion sub-step specifically comprises:
computing the static equity and dynamic equity of each face instance image feature vector respectively, where the static equity is computed as follows:
collecting a number of matched face image pairs as training samples, extracting 30 instance feature vectors from the training samples by the multi-instance feature extraction sub-step, and computing the similarity s between corresponding instances for all image pairs;
determining the decision threshold φ and the resolution of each instance from the inter-instance similarities s, and from the resolution computing the confidence of each instance as its static equity, denoted Ω.
8. The face verification method based on multi-instance learning according to claim 7, characterized in that the multi-instance feature fusion sub-step specifically comprises:
computing the static equity and dynamic equity of each face instance image feature vector respectively, where the dynamic equity is determined by three factors: the entropy of the face image instance, the mutual information of the matched instance face image pair, and the confidence of the matched face image pair;
wherein, for a given face image instance I(x, y), its entropy is:

E[I(x, y)] = -\sum_{i=1}^{N_g} p_i \log(p_i)

where p_i is the probability of the i-th gray level and N_g is the total number of gray levels;

for a given matched instance face image pair {I_1(x, y) : I_2(x, y)}, its mutual information is:

MI(I_1, I_2) = \sum_{x \in I_1} \sum_{y \in I_2} p(x, y) \log\left(\frac{p(x, y)}{p_1(x) p_2(y)}\right)

where p(x, y) is the joint probability distribution of I_1 and I_2, and p_1(x), p_2(y) are the marginal probability distributions of I_1 and I_2 respectively;

for a given matched instance face image pair {I_1(x, y) : I_2(x, y)}, its confidence is:

C(I_1, I_2) = \frac{2|s - \varphi|}{\varphi}

where s is the similarity of the instance image pair and φ is the decision threshold of the instance.
9. The person-certificate face verification method based on multi-instance learning according to claim 8, characterized in that the equity of the face image instance feature vector is: w_i = Ω_i + (E_i^2 × MI_i + C_i), i ∈ [1, 2, …, 30].
10. The face verification method based on multi-instance learning according to claim 1, characterized in that the multi-instance fusion criterion is:
performing equity voting on the decision result of each instance, and adopting a flexible vote-counting scheme for the final verification.
CN201511020705.0A 2015-12-29 2015-12-29 Face comparing verification method based on multi-instance learning Active CN105469076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511020705.0A CN105469076B (en) 2015-12-29 2015-12-29 Face comparing verification method based on multi-instance learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511020705.0A CN105469076B (en) 2015-12-29 2015-12-29 Face comparing verification method based on multi-instance learning

Publications (2)

Publication Number Publication Date
CN105469076A true CN105469076A (en) 2016-04-06
CN105469076B CN105469076B (en) 2019-05-03

Family

ID=55606747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511020705.0A Active CN105469076B (en) Face comparing verification method based on multi-instance learning

Country Status (1)

Country Link
CN (1) CN105469076B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886573A (en) * 2017-01-19 2017-06-23 博康智能信息技术有限公司 A kind of image search method and device
CN107066969A (en) * 2017-04-12 2017-08-18 南京维睛视空信息科技有限公司 A kind of face identification method
CN107625527A (en) * 2016-07-19 2018-01-26 杭州海康威视数字技术股份有限公司 A kind of lie detecting method and device
CN107766774A (en) * 2016-08-17 2018-03-06 鸿富锦精密电子(天津)有限公司 Face identification system and method
CN108022260A (en) * 2016-11-04 2018-05-11 株式会社理光 A kind of face alignment method, device and electronic equipment
CN108875542A (en) * 2018-04-04 2018-11-23 北京旷视科技有限公司 A kind of face identification method, device, system and computer storage medium
CN108932758A (en) * 2018-06-29 2018-12-04 北京金山安全软件有限公司 Sign-in method and device based on face recognition, computer equipment and storage medium
CN110516649A (en) * 2019-09-02 2019-11-29 南京微小宝信息技术有限公司 Alumnus's authentication method and system based on recognition of face
CN110956095A (en) * 2019-11-12 2020-04-03 湖南大学 Multi-scale face detection method based on corner skin color detection
CN111128178A (en) * 2019-12-31 2020-05-08 上海赫千电子科技有限公司 Voice recognition method based on facial expression analysis
WO2020228694A1 (en) * 2019-05-13 2020-11-19 长沙智能驾驶研究院有限公司 Camera pose information detection method and apparatus, and corresponding intelligent driving device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102479320A (en) * 2010-11-25 2012-05-30 康佳集团股份有限公司 Face recognition method and device as well as mobile terminal
CN103745207A (en) * 2014-01-27 2014-04-23 中国科学院深圳先进技术研究院 Feature extraction method and device for human face identification
CN104778457A (en) * 2015-04-18 2015-07-15 吉林大学 Video face identification algorithm on basis of multi-instance learning
CN105138968A (en) * 2015-08-05 2015-12-09 北京天诚盛业科技有限公司 Face authentication method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102479320A (en) * 2010-11-25 2012-05-30 康佳集团股份有限公司 Face recognition method and device as well as mobile terminal
CN103745207A (en) * 2014-01-27 2014-04-23 中国科学院深圳先进技术研究院 Feature extraction method and device for human face identification
CN104778457A (en) * 2015-04-18 2015-07-15 吉林大学 Video face identification algorithm on basis of multi-instance learning
CN105138968A (en) * 2015-08-05 2015-12-09 北京天诚盛业科技有限公司 Face authentication method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邓剑勋 (Deng Jianxun), "Research on multi-instance image retrieval algorithms and their application in face recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107625527B (en) * 2016-07-19 2021-04-20 杭州海康威视数字技术股份有限公司 Lie detection method and device
CN107625527A (en) * 2016-07-19 2018-01-26 杭州海康威视数字技术股份有限公司 A kind of lie detecting method and device
CN107766774A (en) * 2016-08-17 2018-03-06 鸿富锦精密电子(天津)有限公司 Face identification system and method
CN108022260A (en) * 2016-11-04 2018-05-11 株式会社理光 A kind of face alignment method, device and electronic equipment
CN108022260B (en) * 2016-11-04 2021-10-12 株式会社理光 Face alignment method and device and electronic equipment
CN106886573A (en) * 2017-01-19 2017-06-23 博康智能信息技术有限公司 A kind of image search method and device
CN107066969A (en) * 2017-04-12 2017-08-18 南京维睛视空信息科技有限公司 A kind of face identification method
CN108875542A (en) * 2018-04-04 2018-11-23 北京旷视科技有限公司 A kind of face identification method, device, system and computer storage medium
CN108875542B (en) * 2018-04-04 2021-06-25 北京旷视科技有限公司 Face recognition method, device and system and computer storage medium
CN108932758A (en) * 2018-06-29 2018-12-04 北京金山安全软件有限公司 Sign-in method and device based on face recognition, computer equipment and storage medium
WO2020228694A1 (en) * 2019-05-13 2020-11-19 长沙智能驾驶研究院有限公司 Camera pose information detection method and apparatus, and corresponding intelligent driving device
CN110516649A (en) * 2019-09-02 2019-11-29 南京微小宝信息技术有限公司 Alumnus's authentication method and system based on recognition of face
CN110516649B (en) * 2019-09-02 2023-08-22 南京微小宝信息技术有限公司 Face recognition-based alumni authentication method and system
CN110956095A (en) * 2019-11-12 2020-04-03 湖南大学 Multi-scale face detection method based on corner skin color detection
CN111128178A (en) * 2019-12-31 2020-05-08 上海赫千电子科技有限公司 Voice recognition method based on facial expression analysis

Also Published As

Publication number Publication date
CN105469076B (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN105469076A (en) Face comparing verification method based on multi-instance learning
Galdámez et al. A brief review of the ear recognition process using deep neural networks
CN104517104B (en) A kind of face identification method and system based under monitoring scene
CN100395770C (en) Hand-characteristic mix-together identifying method based on characteristic relation measure
CN101763503B (en) Face recognition method of attitude robust
CN106971174A (en) A kind of CNN models, CNN training methods and the vein identification method based on CNN
Chen et al. Human ear detection from 3D side face range images
CN103942577A (en) Identity identification method based on self-established sample library and composite characters in video monitoring
CN102270308B (en) Facial feature location method based on five sense organs related AAM (Active Appearance Model)
Medioni et al. Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models
CN101540000B (en) Iris classification method based on texture primitive statistical characteristic analysis
CN101251894A (en) Gait recognizing method and gait feature abstracting method based on infrared thermal imaging
Rouhi et al. A review on feature extraction techniques in face recognition
CN106599785B (en) Method and equipment for establishing human body 3D characteristic identity information base
CN101246543A (en) Examiner identity appraising system based on bionic and biological characteristic recognition
CN103268497A (en) Gesture detecting method for human face and application of gesture detecting method in human face identification
CN101261677A (en) New method-feature extraction layer amalgamation for face and iris
CN103218609A (en) Multi-pose face recognition method based on hidden least square regression and device thereof
CN104794441B (en) Human face characteristic positioning method based on active shape model and POEM texture models under complex background
Bouchaffra et al. Structural hidden Markov models for biometrics: Fusion of face and fingerprint
CN105809113B (en) Three-dimensional face identification method and the data processing equipment for applying it
Bagchi et al. Robust 3D face recognition in presence of pose and partial occlusions or missing parts
CN107480586A (en) Bio-identification photo bogus attack detection method based on human face characteristic point displacement
Gawali et al. 3d face recognition using geodesic facial curves to handle expression, occlusion and pose variations
Soltana et al. Comparison of 2D/3D features and their adaptive score level fusion for 3D face recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
CB02 Change of applicant information

Address after: 523000, building 6, 310, 311 and 3, 312 South Industrial Road, Songshan hi tech Industrial Development Zone, Guangdong, Dongguan

Applicant after: GUANGDONG MICROPATTERN SOFTWARE CO., LTD.

Address before: 6, building 310, 312, 311 and 3, South Industrial Road, Songshan hi tech Industrial Development Zone, Guangdong, Dongguan, 523000

Applicant before: Dongguan MicroPattern Corporation

COR Change of bibliographic data
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant