CN109033938A - Face recognition method based on discriminative feature fusion - Google Patents
- Publication number: CN109033938A (application number CN201810557864.1A)
- Authority: CN (China)
- Prior art keywords: loss, image, training, loss function, training sample
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/168—Feature extraction; Face representation
Abstract
This application discloses a face recognition method based on discriminative feature fusion, comprising: A. cropping one global image and at least two local images from each training sample image; B. performing model training on each cropped image with a multi-loss function to obtain the corresponding model, wherein the multi-loss function is obtained by combining an angular softmax classification loss (a-softmax loss) function and a center loss (center loss) function; C. fusing and reducing the dimensionality of the models obtained by training with a triplet loss function, and obtaining the final deep feature of the training sample image. The disclosed technical solution solves the problems of data, face pose and model fusion that arise in CNN-based face recognition, and achieves good face recognition performance.
Description
Technical field
This application relates to the technical field of face recognition, and in particular to a face recognition method based on discriminative feature fusion.
Background art
Existing face recognition based on deep learning methods has achieved a series of breakthroughs. Several prior-art approaches are briefly introduced below:
One existing method maps the difference between a pair of face images to a distance; the training criterion requires the similarity distance between image pairs of the same class to be small and the similarity distance between image pairs of different classes to be large.
Another existing method applies a nonlinear transformation so that there is a discriminable margin between the distances of same-identity and different-identity image pairs; this method requires image pairs as input.
Another existing method proposes the angular softmax classification loss (angular-softmax loss, abbreviated a-softmax loss), which improves the softmax loss: assuming the weight vectors in the softmax loss are unit vectors and setting the bias to zero, it converts the distance problem into an angle problem and further introduces an angular margin, so that the decision boundary between classes becomes larger and the classes are better separated.
Another existing method combines identification and verification supervision signals to learn more discriminative features. The effectiveness of the triplet loss has also been demonstrated in one approach: through deep learning, the distance between a positive sample and a reference sample is minimized while the distance between a negative sample and the reference sample is maximized. This method achieves good results on the LFW (Labeled Faces in the Wild, an unconstrained face recognition database) and YTF (YouTube Faces) data sets.
One method proposes the concept of center loss: a center is computed for each class, and by controlling the distance of each class's images to the class center, the images within a class are made compact, thereby achieving discriminability; combined with the softmax loss, the learned deep features satisfy both separability and discriminability.
Through analysis of the prior art, the inventors of the present application believe that most existing face recognition methods directly apply the softmax loss to classify the face recognition problem. In this case the classification decision, i.e. the last fully connected layer, acts like a linear classifier that maps the deep features describing an object to separable feature vectors. Because the training set cannot contain all categories appearing in the test set of a face recognition problem, the deep features learned during training must be discriminative to the greatest possible extent; that is, not only must features of different classes be separated as far as possible, but the features of each class must also be as compact as possible. This requires constructing a more efficient loss function for learning discriminative features, because stochastic gradient descent (SGD), which operates on mini-batches, cannot reflect the global distribution of the deep features well. Furthermore, since the training data are very large, feeding all training samples in a single iteration is impractical. As alternatives, the contrastive loss and the triplet loss define loss functions over image pairs and image triplets respectively; however, the number of image pairs or triplets grows dramatically with the number of samples, which inevitably brings slow and unstable convergence. By carefully selecting pairs and triplets this problem can be partially solved, but doing so greatly increases the computational complexity, and the training process also becomes very inconvenient.
In addition, in unconstrained face recognition, faces are affected by illumination, pose, and accessories (e.g., whether a hat or glasses are worn). Judging from the images alone, the difference between two images of the same person may exceed the difference between images of different people; pose in particular has a large influence on the face. A single, simply trained model can hardly adapt to scenes with such large pose variation.
In existing multi-model fusion approaches, the deep features of multiple models are directly concatenated as the final feature of an image; not only is the feature dimension high, but there is also redundancy among the features, which can actually harm classification. Some methods apply PCA (Principal Component Analysis) after concatenating the features; such methods have some effect but are more troublesome to implement, and the PCA-processed features may again suffer from poor discriminability.
In existing face recognition methods, in order to stabilize the result, the final classification decision may combine the original image and its mirrored image: some methods directly concatenate the features of the two images, while others average the features. These methods bring some improvement, but the accuracy of the classification decision still leaves room for improvement.
Summary of the invention
This application provides a face recognition method based on discriminative feature fusion, which can achieve both separability and discriminability of face features and reach good face recognition performance.
This application discloses a face recognition method based on discriminative feature fusion, comprising:
A. cropping one global image and at least two local images from each training sample image;
B. performing model training on each cropped image with a multi-loss function to obtain the corresponding model, wherein the multi-loss function is obtained by combining the angular softmax classification loss (angular-softmax loss) function and the center loss (center loss) function;
C. fusing and reducing the dimensionality of the models obtained by training with the triplet loss (triplet loss) function, and obtaining the final deep feature of the training sample image.
Preferably, the multi-loss function is:
L = Ls + γ·Lc
Wherein: L denotes the multi-loss function;
Ls denotes the a-softmax loss function;
Lc denotes the center loss function;
γ is a weight coefficient.
Preferably, the a-softmax loss function is:
Ls = -(1/M) Σi log( e^(||xi||·cos(m·θi,yi)) / ( e^(||xi||·cos(m·θi,yi)) + Σj≠yi e^(||xi||·cos θi,j) ) )
Wherein: xi ∈ R^d denotes the i-th deep feature, and d denotes the dimension of the deep feature;
yi denotes the class to which the i-th deep feature belongs;
Wj ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×n) of the last fully connected layer; W is a two-dimensional matrix, one dimension being d and the other being n, where n is the number of classes;
b ∈ R^n is the bias term;
M is the number of training samples;
θi,j is the angle between the feature vector xi and the j-th weight column Wj;
m is the angular margin.
Preferably, the center loss function is:
Lc = (1/2) Σi ||xi - c_yi||^2
Wherein: xi ∈ R^d denotes the i-th deep feature, and d denotes the dimension of the deep feature;
yi denotes the class to which the i-th deep feature belongs;
m is the number of samples in a mini-batch;
c_yi is the center of the yi-th class.
Preferably, the gradient of the Lc with respect to xi and the update rule for the centers are as follows:
∂Lc/∂xi = xi - c_yi
Δcj = ( Σi δ(yi = j)·(cj - xi) ) / ( 1 + Σi δ(yi = j) )
Wherein: δ(·) = 1 if the condition in its parentheses holds, and δ(·) = 0 otherwise.
Preferably, performing model training in the B comprises performing the following loop separately on each cropped image:
initializing the convolutional layer parameters θc and the loss layer parameters W and {cj | j = 1, 2, ..., n}, initializing α, γ and the learning rate μ, and setting the iteration number t to 0;
performing model training on the input training data {xi} with the multi-loss function to obtain the model parameters θc:
while training has not converged:
t ← t + 1
compute the joint loss L^t = Ls^t + γ·Lc^t
for each training sample, compute the back-propagated error ∂L^t/∂xi^t = ∂Ls^t/∂xi^t + γ·∂Lc^t/∂xi^t
update the parameters W: W^(t+1) = W^t - μ^t·∂Ls^t/∂W^t
update the centers cj: cj^(t+1) = cj^t - α·Δcj^t
update the parameters θc: θc^(t+1) = θc^t - μ^t·Σi (∂L^t/∂xi^t)·(∂xi^t/∂θc^t)
until convergence, ending the loop.
Preferably, the C comprises:
extracting the deep feature of each training sample image from each model obtained by the training, concatenating the extracted deep features, and using the concatenation as the input of the triplet loss function; after fusion and dimensionality reduction by the triplet loss function, obtaining the final deep feature of the training sample image.
Preferably, the method further comprises:
performing a left-right mirroring operation on the training sample image to obtain its mirrored image;
extracting the final deep features of the training sample image and its corresponding mirrored image respectively according to B and C;
for each feature dimension, comparing the corresponding feature values of the original training sample image and the mirrored image, and selecting the larger value as that dimension of the final feature; obtaining the final feature vector by performing this comparison on every dimension; then computing distances between different pictures with the final feature vectors to judge whether two images show the same person.
Preferably, before the A, the method further comprises: precisely aligning the training sample image so that fixed key-point information of the face is stored at fixed positions of the training sample image;
and the A comprises: cropping the global image and the local images from each training sample image according to the key-point positions.
Preferably, the method further comprises:
applying perturbations to the training sample images using perturbation methods including, but not limited to, random image mirroring, illumination changes, and color changes.
The present application addresses the problem of feature discriminability in face recognition by proposing a new loss function, the multi-loss function, which simultaneously achieves separability and discriminability of features. The application trains the CNN under the joint supervision of the a-softmax loss function and the center loss function, and balances the two losses by setting a weight. Intuitively, the a-softmax loss separates the deep features of different classes as much as possible, while the center loss gathers the features of the same class around the class center as much as possible. Under the joint supervision of the two loss functions, inter-class differences increase while intra-class variation decreases; the discriminability of the deep features is therefore greatly enhanced, and the features learned under the joint supervision of the center loss and the a-softmax loss enable stable face recognition.
Meanwhile, the application combines multiple models, i.e., a full-face model and local models, to learn coarse-to-fine global and local features of each class, describing each class more finely and further increasing the discriminability of the features. To deal with the increase in feature dimensionality brought by multiple models, the application uses the triplet loss to reduce the dimensionality of the features while performing a further discriminative mapping of the features of the multiple models, thereby achieving better discrimination.
The present invention outperforms other existing methods for the following reasons: 1) when training each model, the multi-loss combining a-softmax loss and center loss is used, so that the learned features are both separable and discriminative while training remains convenient; 2) using multiple models makes the face recognition algorithm more robust to pose variation; 3) the features of the multiple models are reduced in dimensionality and further mapped with the triplet loss, achieving better discriminability.
Brief description of the drawings
Fig. 1 is a schematic diagram of a conventional CNN training process;
Fig. 2 is a schematic diagram of the processing flow of the face recognition method of the present application;
Fig. 3 is a schematic flow diagram of the multi-scale CNN model training of the present application;
Fig. 4 is a schematic diagram of multi-scale feature fusion using the triplet loss in the present application.
Detailed description of the embodiments
Hereinafter, the present application is described in further detail with reference to the drawings and embodiments, so that the objects, technical solutions and advantages of the application can be more clearly understood.
At present, convolutional neural networks (CNN: Convolutional Neural Network) have been widely applied in the vision field and have greatly improved performance on classification problems, including object detection, scene recognition, and action recognition. CNNs are mainly based on large amounts of data and an end-to-end learning framework: using a large amount of data, raw data are mapped to deep features through feature learning and prediction-based classification. However, for the face recognition problem, the lack of public data sets containing large numbers of faces has to a large extent limited the performance of CNN networks.
In conventional object classification problems, such as scene or action recognition, the classes of the objects to be recognized are contained in the training data; that is, it is a closed-set recognition problem. For face recognition, however, we cannot collect everyone's face information in advance; face recognition is therefore an open-set recognition problem, which requires the learned deep features to be not only separable but also discriminative. Furthermore, since the amount of data in CNN training is very large, the convenience of training, the training time, and the speed of convergence must also be considered.
Besides the data-volume issues, the characteristics of faces themselves bring peculiar problems to face recognition. For example, faces are very sensitive to illumination, and pose variation causes facial differences; such differences between images of the same person are sometimes even larger than those between different people. That is, the difference between a frontal and a profile image of the same person can, intuitively, be larger than the difference between two somewhat similar people who are both in frontal pose, which increases the difficulty of face recognition.
Due to the particularity of the face recognition problem, multiple different models need to be trained for a comprehensive decision. Against this background, how to better combine the results of multiple models is a problem requiring further study.
To this end, the application proposes a face recognition method based on discriminative feature fusion, which mainly solves the problems of data, face pose, and model fusion in CNN-based face recognition. Specifically, the present invention proposes a new loss function, the multi-loss function, which improves existing loss functions by combining the a-softmax loss function and the center loss (center loss) function to achieve facial discriminability; it combines the global face and local faces, training multiple models to achieve robustness to face pose; and, for the fusion of the multiple models, it uses the triplet loss (triplet loss) function to fuse the features and reduce their dimensionality, finally achieving good face recognition performance.
Fig. 1 is a schematic diagram of a conventional CNN training process. Referring to Fig. 1, the conventional CNN training process includes: first, cropping the global face from the face training data according to the located key points and feeding it into a pre-designed convolutional neural network to compute deep features and predict the face class; then, feeding the predicted class and the true face class into a loss function to compute the difference; then further updating the convolutional neural network weights according to the prediction difference, iterating and optimizing until the model converges; and finally outputting the learned model parameters.
The face recognition method based on discriminative feature fusion proposed by the present invention mainly comprises the following steps:
first, cropping one global image and at least two local images from each training sample image;
then, performing model training on each cropped image with the multi-loss function to obtain the corresponding model, wherein the multi-loss function is obtained by combining the a-softmax loss function and the center loss function;
finally, fusing the models obtained by training with the triplet loss function while achieving dimensionality reduction, and obtaining the final deep feature of the training sample image.
Specifically, the present invention is mainly divided into two processing stages, as shown in Fig. 2:
In the first stage, multi-scale model training is performed: the individual global and local models are each trained with the combination of a-softmax loss and center loss, i.e., the global model training and the local model 1, local model 2, etc. training shown in Fig. 2.
More specifically, referring to the schematic flow diagram of the multi-scale CNN model training of the present application shown in Fig. 3: during the global and local model training of the first stage, the global and local images are first cropped from the input image, and then the global or local images of all training samples are trained to obtain the global and the local classification models.
In the second stage, multi-model feature fusion is performed: the features of the multiple models are fused and reduced in dimensionality using the triplet loss.
More specifically, referring to the schematic diagram of multi-scale feature fusion with the triplet loss shown in Fig. 4: the second stage fuses the multiple models; that is, for each training sample, the deep features learned by the global model and each local model are extracted and concatenated, the concatenation serves as the training input and is supervised with the triplet loss, and the fused feature after the final dimensionality reduction is obtained as the final deep feature of the sample.
Each of the above stages follows the basic procedure of CNN training.
The technical details of the application are described below through specific embodiments:
Step 1: multi-scale model training
1.1 Preprocessing of the training data
In order to achieve better training results, we need to preprocess the training data (also referred to as training images, training samples, or training sample images): the images are precisely aligned so that the face information is stored at fixed positions of the training image; specifically, face alignment is required. First, the key-point information of each face image is obtained, which may include five key points: the left-eye center, the right-eye center, the nose, the left mouth corner, and the right mouth corner; then the face image is cropped according to the key-point positions.
This preprocessing step is optional. In addition, although the above example locates five key points on the eyes, nose, and mouth, in practical applications other points, or more points, may be located to achieve precise face alignment.
1.2 Cropping global and local images from the training data
In the present embodiment, each training sample is trained with one global image and three local images, wherein the global image refers to an image containing the complete face information including the left eye, right eye, nose, and mouth, and the three local images are cropped around the left eye, the nose, and the left mouth corner respectively. In this step, the global image and the local images are cropped from the face image of each training sample, and the images of each model are normalized to the same size.
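As a rough illustration, the cropping of step 1.2 can be sketched as follows; the key-point names, the patch sizes, and the use of the key-point centroid as the global-crop center are illustrative assumptions, not parameters stated in the embodiment.

```python
import numpy as np

def crop_patches(img, keypoints, local_size=24, global_size=48):
    """Sketch of step 1.2: one global crop covering the face plus local
    crops centered on the left eye, nose, and left mouth corner."""
    def crop(center, half):
        cx, cy = int(center[0]), int(center[1])
        h, w = img.shape[:2]
        x0, x1 = max(cx - half, 0), min(cx + half, w)
        y0, y1 = max(cy - half, 0), min(cy + half, h)
        return img[y0:y1, x0:x1]

    pts = np.array(list(keypoints.values()), dtype=float)
    # global crop centered on the centroid of all key points (assumption)
    global_img = crop(pts.mean(axis=0), global_size // 2)
    locals_ = [crop(keypoints[k], local_size // 2)
               for k in ("left_eye", "nose", "left_mouth")]
    return global_img, locals_
```

In a real pipeline each crop would then be resized to the input resolution of its model, matching the normalization to the same size described above.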
1.3 Adding perturbations to the training data
In order to increase the stability of the model, some perturbations need to be added to the training data; in the present embodiment the added perturbations may include random image mirroring, etc.
This step is optional. Besides image mirroring, other perturbation methods may also be used, such as illumination or color changes, selected according to the practical problem.
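The perturbation of step 1.3 can be sketched as below; the 0.8–1.2 illumination scaling range is an assumed value chosen only for illustration.

```python
import numpy as np

def perturb(img, rng):
    """Sketch of step 1.3: random horizontal mirroring plus a mild random
    illumination (brightness) scaling."""
    out = img.astype(float)
    if rng.random() < 0.5:
        out = out[:, ::-1]                 # random left-right mirror
    out = out * rng.uniform(0.8, 1.2)      # illumination perturbation (assumed range)
    return np.clip(out, 0.0, 255.0)
```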
1.4 Training the global and local models separately
For the face recognition problem, the deeply learned features need to be not only separable but also discriminative. Since we cannot collect all face information in advance, the face classes to be judged are likely not included in the training set. The learned deep features therefore need to be discriminative and sufficiently generalizable, so that they can classify new, unseen categories. Discriminative features must be both separable between classes and sufficiently compact within classes.
The softmax loss function is defined as shown in formula (1):
L = -(1/M) Σi log( e^(W_yi^T·xi + b_yi) / Σj e^(Wj^T·xi + bj) )    (1)
Wherein: xi ∈ R^d denotes the i-th deep feature, and d denotes the dimension of the deep feature;
yi denotes the class to which the i-th deep feature belongs;
Wj ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×n) of the last fully connected layer; W is a two-dimensional matrix, one dimension being d and the other being n, where n is the number of classes;
b ∈ R^n is the bias term;
M is the number of training samples.
For simplicity, the bias term can usually be omitted.
Assuming ||Wj|| = 1 and bj = 0, formula (1) becomes formula (2):
L = -(1/M) Σi log( e^(||xi||·cos θi,yi) / Σj e^(||xi||·cos θi,j) )    (2)
After an angular margin m is added, i.e., for the a-softmax loss, formula (2) becomes formula (3):
Ls = -(1/M) Σi log( e^(||xi||·cos(m·θi,yi)) / ( e^(||xi||·cos(m·θi,yi)) + Σj≠yi e^(||xi||·cos θi,j) ) )    (3)
Wherein θi,j is the angle between the feature vector xi and the j-th weight column Wj.
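A minimal numerical sketch of formula (3): the weight columns are normalized to unit length, the bias is dropped, and the simplified margin form cos(m·θ) is used for the target class (valid for small angles); no convolutional backbone is involved.

```python
import numpy as np

def a_softmax_loss(X, W, y, m=2):
    """Sketch of the a-softmax loss of formula (3) for features X (rows),
    last-layer weights W (columns are classes), and labels y."""
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)   # ||W_j|| = 1
    norms = np.linalg.norm(X, axis=1)                   # ||x_i||
    cos = np.clip((X @ Wn) / norms[:, None], -1.0, 1.0)
    theta = np.arccos(cos)
    M = len(y)
    loss = 0.0
    for i in range(M):
        target = norms[i] * np.cos(m * theta[i, y[i]])  # margined target logit
        others = norms[i] * cos[i, np.arange(W.shape[1]) != y[i]]
        z = np.concatenate(([target], others))
        z -= z.max()                                    # numerical stability
        loss += -np.log(np.exp(z[0]) / np.exp(z).sum())
    return loss / M
```

With m = 1 this reduces to formula (2); increasing m shrinks the target logit and enforces the larger angular decision boundary described above.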
Since the a-softmax loss function only guarantees the separability of the features, features learned with the a-softmax loss alone cannot be fully and effectively used for face recognition; therefore, the present invention uses a loss function combining the center loss and the a-softmax loss.
The center loss function is defined as shown in formula (4):
Lc = (1/2) Σi ||xi - c_yi||^2    (4)
Wherein c_yi is the center of the yi-th class, and the physical meanings of xi and yi are as described above. It can be seen from the definition that the center loss penalizes intra-class variation significantly.
Ideally, the center of each class should be computed over the entire training sample set, but since training proceeds on mini-batches, the application makes some improvements to formula (4): each center is updated based on the mini-batch. Meanwhile, to prevent a few mislabeled samples from causing large disturbances of the center points, the application introduces a parameter α to control the learning rate of the centers; α is fixed to 0.5 in the experiments and therefore does not appear in the formulas. The gradient of Lc with respect to xi and the update rule for cj are thus given by formulas (5) and (6):
∂Lc/∂xi = xi - c_yi    (5)
Δcj = ( Σi δ(yi = j)·(cj - xi) ) / ( 1 + Σi δ(yi = j) )    (6)
Wherein: δ(·) = 1 if the condition in its parentheses holds, and δ(·) = 0 otherwise.
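Formulas (4)–(6) can be sketched directly in code; `alpha` plays the role of the center learning rate α described above (0.5 in the embodiment).

```python
import numpy as np

def center_loss(X, y, centers):
    """Center loss of formula (4): half the squared distance of each deep
    feature to its class center, summed over the mini-batch."""
    diff = X - centers[y]
    return 0.5 * np.sum(diff ** 2)

def update_centers(X, y, centers, alpha=0.5):
    """Mini-batch center update of formula (6), damped by alpha."""
    new_centers = centers.copy()
    for j in range(len(centers)):
        mask = (y == j)                  # delta(y_i = j)
        if mask.any():
            delta = (centers[j] - X[mask]).sum(axis=0) / (1 + mask.sum())
            new_centers[j] = centers[j] - alpha * delta
    return new_centers
```

Repeated application of `update_centers` moves each center geometrically toward the mean of its class's mini-batch features, which is the behavior the damping term 1 + Σ δ(yi = j) is meant to stabilize.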
The application introduces a weight to balance the two loss functions; the final loss function is shown in formula (7):
L = Ls + γ·Lc    (7)
It can be seen from formula (7) that when γ = 0 the loss function degenerates to the a-softmax loss alone.
The specific training procedure is as follows:
Input: training data {xi}; initialize the convolutional layer parameters θc and the loss layer parameters W and {cj | j = 1, 2, ..., n}; initialize α, γ and the learning rate μ; set the iteration number t = 0.
Output: θc.
While training has not converged:
t ← t + 1
compute the joint loss L^t = Ls^t + γ·Lc^t
for each training sample, compute the back-propagated error ∂L^t/∂xi^t = ∂Ls^t/∂xi^t + γ·∂Lc^t/∂xi^t
update the parameters W: W^(t+1) = W^t - μ^t·∂Ls^t/∂W^t
update the centers cj: cj^(t+1) = cj^t - α·Δcj^t
update the parameters θc: θc^(t+1) = θc^t - μ^t·Σi (∂L^t/∂xi^t)·(∂xi^t/∂θc^t)
until convergence, ending the loop.
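The loop above can be sketched on toy data. This is a deliberately simplified stand-in: a single linear layer replaces the convolutional network, and a plain softmax replaces the angular-margin version, so only the multi-loss bookkeeping (joint loss, W update, damped center update) is illustrated.

```python
import numpy as np

def train_multiloss(X, y, n_classes, gamma=0.1, alpha=0.5, mu=0.1, iters=200):
    """Toy sketch of the joint training loop with L = softmax-CE + gamma*Lc."""
    rng = np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(d, n_classes))
    centers = np.zeros((n_classes, d))
    losses = []
    for t in range(iters):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        m = len(y)
        ce = -np.log(p[np.arange(m), y]).mean()
        lc = 0.5 * np.sum((X - centers[y]) ** 2)
        losses.append(ce + gamma * lc)
        # update W with the softmax-loss gradient
        g = p.copy()
        g[np.arange(m), y] -= 1.0
        W -= mu * (X.T @ g) / m
        # update the centers with formula (6), damped by alpha
        for j in range(n_classes):
            mask = (y == j)
            if mask.any():
                delta = (centers[j] - X[mask]).sum(axis=0) / (1 + mask.sum())
                centers[j] -= alpha * delta
    return W, centers, losses
```

Because the features X are fixed here (no backbone), the center term converges to the within-class scatter while the classification term keeps decreasing; in the real method the backbone parameters θc are updated as well, pulling the features toward their centers.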
In the present embodiment, the global model and the three local models are all trained according to the above procedure, yielding four multi-scale models.
Step 2: fusion of the multiple models
Through the processing of step 1 of the present embodiment, four models (one global and three local) are obtained for each sample image; in step 2 the features of these four models need to be fused and reduced in dimensionality.
2.1 Extracting and concatenating the features of the four models
The deep features of each sample image are extracted with the four models obtained by the training in the first part, and the four groups of deep features are concatenated as the input of the training in this step.
2.2 Using the concatenated features as the CNN input and training dimensionality-reduced features with the triplet loss
The features of the training samples extracted in step 2.1 are used as the input for CNN training, with the triplet loss as the loss function, yielding the features after the final dimensionality reduction. The triplet loss function is computed as shown in formula (8):
L = Σ(a,p,n) max(0, ||f(a) - f(p)||^2 - ||f(a) - f(n)||^2 + α)    (8)
Wherein: α is a number greater than 0, and (a, p, n) is a triplet comprising an anchor image a, a positive sample image p and a negative sample image n, with p ≠ a and n ≠ a.
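Formula (8) can be sketched as follows for a batch of already-computed embeddings; the margin value 0.2 is an illustrative assumption, not a value stated in the embodiment.

```python
import numpy as np

def triplet_loss(fa, fp, fn, alpha=0.2):
    """Triplet loss of formula (8): anchors fa, positives fp, negatives fn
    (one embedding per row); alpha is the margin."""
    d_pos = np.sum((fa - fp) ** 2, axis=1)   # ||f(a) - f(p)||^2
    d_neg = np.sum((fa - fn) ** 2, axis=1)   # ||f(a) - f(n)||^2
    return np.sum(np.maximum(0.0, d_pos - d_neg + alpha))
```

A triplet contributes zero loss once the negative is farther from the anchor than the positive by more than the margin, which is what drives the discriminative mapping of the fused features.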
Step 3: discriminative classification of test images
At test time, in order to increase the stability of the features, the present embodiment considers both the image and its mirrored image. A left-right mirroring operation is first applied to the image; then the final deep features of the image and of the mirrored image are extracted according to steps 1 and 2. For each feature dimension, the corresponding feature values of the original image and the mirrored image are compared and the larger one is selected as that dimension of the final feature; comparing every dimension yields the final feature vector. Distances between different pictures are then computed with the final feature vectors to judge whether two images show the same person, completing the face recognition task.
In the test-image classification of step 3, the present embodiment selects, for each feature dimension, the larger of the original-image and mirrored-image values; alternatively, the two features may be weighted and/or directly concatenated for the classification of test images, and the distance may be computed in various ways, which are not detailed here.
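The element-wise max fusion of step 3 can be sketched as below, together with one possible distance; the embodiment deliberately leaves the distance measure open, so the use of cosine distance here is an assumption.

```python
import numpy as np

def fuse_mirror_features(f_orig, f_mirr):
    """Element-wise max fusion of step 3: for each dimension keep the
    larger of the original-image and mirrored-image feature values."""
    return np.maximum(f_orig, f_mirr)

def cosine_distance(f1, f2):
    """One possible distance for the final same-person decision
    (cosine distance; an assumed choice, not specified in the text)."""
    return 1.0 - np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2))
```

Two images would be judged to show the same person when the distance between their fused final feature vectors falls below a threshold tuned on a validation set.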
The foregoing are merely preferred embodiments of the application and are not intended to limit the application; any modification, equivalent substitution, or improvement made within the spirit and principles of the application shall be included within the protection scope of the application.
Claims (10)
1. A face recognition method based on discriminative feature fusion, characterized by comprising:
A. cropping one global image and at least two local images from each training sample image;
B. performing model training on each cropped image with a multi-loss function to obtain a corresponding model, wherein the multi-loss function is obtained by combining an angular softmax classification loss (angular-softmax loss) function and a center loss (center loss) function;
C. fusing and reducing the dimensionality of the models obtained by training with a triplet loss (triplet loss) function, and obtaining the final deep feature of the training sample image.
2. The method according to claim 1, characterized in that the multi-loss function is:
L = Ls + γ·Lc
Wherein: L denotes the multi-loss function;
Ls denotes the a-softmax loss function;
Lc denotes the center loss function;
γ is a weight coefficient.
3. The method according to claim 2, characterized in that:
Ls = -(1/M) Σi log( e^(||xi||·cos(m·θi,yi)) / ( e^(||xi||·cos(m·θi,yi)) + Σj≠yi e^(||xi||·cos θi,j) ) )
Wherein: xi ∈ R^d denotes the i-th deep feature, and d denotes the dimension of the deep feature;
yi denotes the class to which the i-th deep feature belongs;
Wj ∈ R^d is the j-th column of the weight matrix W ∈ R^(d×n) of the last fully connected layer; W is a two-dimensional matrix, one dimension being d and the other being n, where n is the number of classes;
b ∈ R^n is the bias term;
M is the number of training samples;
θi,j is the angle between the feature vector xi and the j-th weight column Wj;
m is the angular margin.
4. The method according to claim 3, characterized in that:
Lc = (1/2) Σi ||xi - c_yi||^2
Wherein: xi ∈ R^d denotes the i-th deep feature, and d denotes the dimension of the deep feature;
yi denotes the class to which the i-th deep feature belongs;
m is the number of samples in a mini-batch;
c_yi is the center of the yi-th class.
5. The method according to claim 4, wherein the gradient of Lc with respect to xi and the update rule for c_yi are as follows:
wherein δ(·) = 1 if the condition inside the parentheses holds, and δ(·) = 0 otherwise.
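The formula images of claim 5 are not reproduced in this text. For illustration only, the following sketch assumes the standard center-loss rule (Wen et al.): the gradient of Lc with respect to xi is (xi − c_yi), and each center cj moves by Δcj = Σi δ(yi = j)(cj − xi) / (1 + Σi δ(yi = j)).

```python
def center_gradients(features, labels, centers):
    # Gradient of Lc w.r.t. each x_i, and the delta-weighted update for each
    # class center c_j (assumed standard center-loss rule, not the patent's
    # exact formula, which is omitted from this text).
    grads = [[xi - ci for xi, ci in zip(x, centers[y])]
             for x, y in zip(features, labels)]
    updates = {}
    for j in centers:
        num = [0.0] * len(centers[j])
        cnt = 0
        for x, y in zip(features, labels):
            if y == j:  # delta(y_i == j) indicator
                cnt += 1
                num = [n + (cj - xi) for n, cj, xi in zip(num, centers[j], x)]
        updates[j] = [n / (1 + cnt) for n in num]
    return grads, updates
```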
6. The method according to claim 5, wherein the model training in step B comprises performing the following loop for each cropped image:
initializing the convolutional-layer parameters θc and the loss-layer parameters W and {cj | j = 1, 2, ..., n}, initializing α, γ and the learning rate μ, and setting the iteration counter t to 0;
performing model training on the input training data {xi} with the multi-loss function to obtain the model parameters θc:
while training has not converged:
t ← t + 1
computing the joint loss;
for each training sample, computing the back-propagated error;
updating the parameters W;
updating the parameters cj;
updating the parameters θc;
until convergence, whereupon the loop ends.
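For illustration only, the loop of claim 6 can be sketched as a generic iterate-until-convergence descent. The `loss_fn`/`grad_fn` callables stand in for the joint multi-loss and its gradients, and the separate updates of W and the centers cj are folded into one parameter update here; all of these simplifications are assumptions.

```python
def train_loop(x0, grad_fn, loss_fn, mu=0.1, max_iters=1000, tol=1e-8):
    # Sketch of claim 6's loop: t <- t + 1, compute the joint loss, update the
    # parameters with learning rate mu, and stop when the loss has converged.
    theta = list(x0)
    t = 0
    prev = loss_fn(theta)
    while t < max_iters:
        t += 1
        g = grad_fn(theta)
        theta = [p - mu * gi for p, gi in zip(theta, g)]  # parameter update
        cur = loss_fn(theta)
        if abs(prev - cur) < tol:  # convergence test
            break
        prev = cur
    return theta, t
```

A toy use: minimizing (x − 3)² from x = 0 converges to x ≈ 3 in well under the iteration cap.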
7. The method according to claim 1, wherein step C comprises:
extracting the deep feature of each training sample image from each model obtained by the training, concatenating the extracted deep features as the input of the triplet loss function, and obtaining the final deep feature of the training sample image after fusion and dimensionality reduction by the triplet loss function.
8. The method according to any one of claims 1 to 7, further comprising:
performing a horizontal mirroring operation on the training sample image to obtain its mirrored image;
extracting final deep features from the training sample image and its mirrored image according to steps B and C, respectively;
for each feature dimension, comparing the corresponding feature values of the original training sample image and the mirrored image and selecting the larger value as that dimension of the final feature, so that the final feature vector is obtained by comparing every dimension; and then computing distances between different images with their final feature vectors to determine whether two images show the same person.
9. The method according to any one of claims 1 to 7, wherein:
before step A, the method further comprises: precisely aligning the training sample images so that fixed positions of each training sample image hold fixed key-point information of the face;
and step A comprises: cropping the global image and the local images from each training sample image according to the key-point positions.
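For illustration only: once the face key points sit at fixed positions, the global and local crops of step A reduce to fixed rectangular boxes. The box coordinates below are purely illustrative.

```python
def crop(image, box):
    # Crop the region (x0, y0, x1, y1) from an image stored as a 2-D list of
    # rows; with aligned key points, each patch in step A is such a fixed box.
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]
```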
10. The method according to any one of claims 1 to 7, further comprising:
augmenting the training sample images with perturbation methods, the perturbation methods including but not limited to random image mirroring, illumination changes, and color changes.
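For illustration only: a toy augmentation in the spirit of claim 10 on a grey-scale image stored as a 2-D list, combining a random horizontal mirror with a global illumination (brightness) shift; color jitter would be analogous per channel. The shift range is an assumption.

```python
import random

def perturb(image, rng=random.Random(0)):
    # Random-symmetry plus illumination perturbation; returns a new image and
    # leaves the input untouched.
    out = [row[:] for row in image]
    if rng.random() < 0.5:
        out = [row[::-1] for row in out]  # random horizontal mirror
    shift = rng.uniform(-10, 10)          # illumination (brightness) shift
    out = [[min(255.0, max(0.0, p + shift)) for p in row] for row in out]
    return out
```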
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810557864.1A CN109033938A (en) | 2018-06-01 | 2018-06-01 | A face recognition method based on multi-loss feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109033938A true CN109033938A (en) | 2018-12-18 |
Family
ID=64611950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810557864.1A Pending CN109033938A (en) | A face recognition method based on multi-loss feature fusion | 2018-06-01 | 2018-06-01 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109033938A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109801636A (en) * | 2019-01-29 | 2019-05-24 | 北京猎户星空科技有限公司 | Training method, device, electronic equipment and the storage medium of Application on Voiceprint Recognition model |
CN109816001A (en) * | 2019-01-10 | 2019-05-28 | 高新兴科技集团股份有限公司 | A kind of more attribute recognition approaches of vehicle based on deep learning, device and equipment |
CN109902757A (en) * | 2019-03-08 | 2019-06-18 | 山东领能电子科技有限公司 | One kind being based on the improved faceform's training method of Center Loss |
CN109934197A (en) * | 2019-03-21 | 2019-06-25 | 深圳力维智联技术有限公司 | Training method, device and the computer readable storage medium of human face recognition model |
CN110009013A (en) * | 2019-03-21 | 2019-07-12 | 腾讯科技(深圳)有限公司 | Encoder training and characterization information extracting method and device |
CN110348320A (en) * | 2019-06-18 | 2019-10-18 | 武汉大学 | A kind of face method for anti-counterfeit based on the fusion of more Damage degrees |
CN110569809A (en) * | 2019-09-11 | 2019-12-13 | 淄博矿业集团有限责任公司 | coal mine dynamic face recognition attendance checking method and system based on deep learning |
CN110569826A (en) * | 2019-09-18 | 2019-12-13 | 深圳市捷顺科技实业股份有限公司 | Face recognition method, device, equipment and medium |
CN110705689A (en) * | 2019-09-11 | 2020-01-17 | 清华大学 | Continuous learning method and device capable of distinguishing features |
CN110765933A (en) * | 2019-10-22 | 2020-02-07 | 山西省信息产业技术研究院有限公司 | Dynamic portrait sensing comparison method applied to driver identity authentication system |
CN110929099A (en) * | 2019-11-28 | 2020-03-27 | 杭州趣维科技有限公司 | Short video frame semantic extraction method and system based on multitask learning |
CN111126307A (en) * | 2019-12-26 | 2020-05-08 | 东南大学 | Small sample face recognition method of joint sparse representation neural network |
CN111177469A (en) * | 2019-12-20 | 2020-05-19 | 国久大数据有限公司 | Face retrieval method and face retrieval device |
CN111209839A (en) * | 2019-12-31 | 2020-05-29 | 上海涛润医疗科技有限公司 | Face recognition method |
CN111259738A (en) * | 2020-01-08 | 2020-06-09 | 科大讯飞股份有限公司 | Face recognition model construction method, face recognition method and related device |
CN111325094A (en) * | 2020-01-16 | 2020-06-23 | 中国人民解放军海军航空大学 | High-resolution range profile-based ship type identification method and system |
CN111488933A (en) * | 2020-04-13 | 2020-08-04 | 上海联影智能医疗科技有限公司 | Image classification method, network, computer device and storage medium |
CN111582008A (en) * | 2019-02-19 | 2020-08-25 | 富士通株式会社 | Device and method for training classification model and device for classification by using classification model |
CN111582009A (en) * | 2019-02-19 | 2020-08-25 | 富士通株式会社 | Device and method for training classification model and device for classification by using classification model |
CN111898465A (en) * | 2020-07-08 | 2020-11-06 | 北京捷通华声科技股份有限公司 | Method and device for acquiring face recognition model |
CN113239876A (en) * | 2021-06-01 | 2021-08-10 | 平安科技(深圳)有限公司 | Large-angle face recognition model training method |
CN113610071A (en) * | 2021-10-11 | 2021-11-05 | 深圳市一心视觉科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
WO2022188697A1 (en) * | 2021-03-08 | 2022-09-15 | 腾讯科技(深圳)有限公司 | Biological feature extraction method and apparatus, device, medium, and program product |
CN116453201A (en) * | 2023-06-19 | 2023-07-18 | 南昌大学 | Face recognition method and system based on adjacent edge loss |
CN117274266A (en) * | 2023-11-22 | 2023-12-22 | 深圳市宗匠科技有限公司 | Method, device, equipment and storage medium for grading acne severity |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850825A (en) * | 2015-04-18 | 2015-08-19 | 中国计量学院 | Facial image face score calculating method based on convolutional neural network |
CN106548165A (en) * | 2016-11-28 | 2017-03-29 | 中通服公众信息产业股份有限公司 | A kind of face identification method of the convolutional neural networks weighted based on image block |
CN106599830A (en) * | 2016-12-09 | 2017-04-26 | 中国科学院自动化研究所 | Method and apparatus for positioning face key points |
CN106709418A (en) * | 2016-11-18 | 2017-05-24 | 北京智慧眼科技股份有限公司 | Face identification method based on scene photo and identification photo and identification apparatus thereof |
CN107103281A (en) * | 2017-03-10 | 2017-08-29 | 中山大学 | Face identification method based on aggregation Damage degree metric learning |
CN107330383A (en) * | 2017-06-18 | 2017-11-07 | 天津大学 | A kind of face identification method based on depth convolutional neural networks |
CN107506717A (en) * | 2017-08-17 | 2017-12-22 | 南京东方网信网络科技有限公司 | Without the face identification method based on depth conversion study in constraint scene |
CN107766850A (en) * | 2017-11-30 | 2018-03-06 | 电子科技大学 | Based on the face identification method for combining face character information |
CN107832700A (en) * | 2017-11-03 | 2018-03-23 | 全悉科技(北京)有限公司 | A kind of face identification method and system |
Non-Patent Citations (1)
Title |
---|
WEIYANG LIU ETC.: "SphereFace: Deep Hypersphere Embedding for Face Recognition", 《2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109033938A (en) | A face recognition method based on multi-loss feature fusion | |
CN107766850B (en) | Face recognition method based on combination of face attribute information | |
CN108537136B (en) | Pedestrian re-identification method based on attitude normalization image generation | |
CN107563279B (en) | Model training method for adaptive weight adjustment aiming at human body attribute classification | |
CN112418095B (en) | Facial expression recognition method and system combined with attention mechanism | |
CN109359541A (en) | A kind of sketch face identification method based on depth migration study | |
CN109409297B (en) | Identity recognition method based on dual-channel convolutional neural network | |
CN105138998B (en) | Pedestrian based on the adaptive sub-space learning algorithm in visual angle recognition methods and system again | |
CN108427921A (en) | A kind of face identification method based on convolutional neural networks | |
CN112446423B (en) | Fast hybrid high-order attention domain confrontation network method based on transfer learning | |
CN107463920A (en) | A kind of face identification method for eliminating partial occlusion thing and influenceing | |
CN107194341A (en) | The many convolution neural network fusion face identification methods of Maxout and system | |
CN109583322A (en) | A kind of recognition of face depth network training method and system | |
CN108921051A (en) | Pedestrian's Attribute Recognition network and technology based on Recognition with Recurrent Neural Network attention model | |
CN110532920A (en) | Smallest number data set face identification method based on FaceNet method | |
CN109190561B (en) | Face recognition method and system in video playing | |
CN112016464A (en) | Method and device for detecting face shielding, electronic equipment and storage medium | |
CN110781829A (en) | Light-weight deep learning intelligent business hall face recognition method | |
CN109255289B (en) | Cross-aging face recognition method based on unified generation model | |
CN108960184A (en) | A kind of recognition methods again of the pedestrian based on heterogeneous components deep neural network | |
CN110348331A (en) | Face identification method and electronic equipment | |
CN107871107A (en) | Face authentication method and device | |
CN108108760A (en) | A kind of fast human face recognition | |
CN109377429A (en) | A kind of recognition of face quality-oriented education wisdom evaluation system | |
CN112200176B (en) | Method and system for detecting quality of face image and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20181218 |