CN110443162A - Two-stage training method for disguised face recognition - Google Patents

Two-stage training method for disguised face recognition

Info

Publication number
CN110443162A
Authority
CN
China
Prior art keywords
training
network
disguised
face identification
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910654611.0A
Other languages
Chinese (zh)
Other versions
CN110443162B (en)
Inventor
吴晓富
项阳
赵师亮
张索非
颜俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910654611.0A
Publication of CN110443162A
Application granted
Publication of CN110443162B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A two-stage training method for disguised face recognition, comprising the following steps: step S1, preprocessing the datasets required for training to obtain the sets Set_F and Set_S; step S2, the first stage, using Set_F as the training set and training the network with the ArcFace loss function; step S3, removing the last fully connected layer of the network; step S4, the second stage, using Set_S as the training set and training the network with the ArcFace loss function. Using only a small amount of disguised-face data, the invention shifts the model's applicable domain from generic face recognition to disguised face recognition, and shows a good recognition effect on the DFW test benchmark.

Description

Two-stage training method for disguised face recognition
Technical field
The invention belongs to the technical field of face recognition, and in particular relates to a two-stage training method for disguised face recognition.
Background art
Face recognition based on convolutional neural networks has achieved great success in recent years. As one of the more outstanding results of this research, performing face recognition with the face feature vectors obtained by mapping through a network works well and is generally considered the current state-of-the-art approach. With the continual introduction of advanced network structures, high-quality datasets and more refined loss functions, the discriminative power of the resulting feature vectors keeps growing: the difference between feature vectors of different people is gradually increasing, while the difference between feature vectors of the same person is shrinking.
Although the achievements of face recognition are remarkable, disguised face recognition remains a challenging problem. Once a face is made up, or covered by objects such as hats and masks, the difficulty of recognition increases greatly. Moreover, in this harder setting, the overall quality of the datasets that deep learning depends on is unsatisfactory, which raises the difficulty further. Compared with the generic face recognition field, where high-quality results such as FaceNet, SphereFace and ArcFace keep appearing, the disguised face recognition field has far fewer results; a relatively recent one is MiRA-Face, obtained on the DFW disguised-face dataset. It uses a two-stage training scheme: it first obtains a network with a generic face recognition training method, and then applies PCA dimensionality reduction to the feature vectors on the training set provided by DFW, thereby capturing some disguise-related information. The disadvantages of MiRA-Face are: (1) the first stage is trained with the method proposed by CosFace, which is no longer the best choice; (2) the information extracted by PCA is limited. Both leave room for improving the algorithm's performance.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and provide a two-stage training method for disguised face recognition: a basic convolutional neural network is first trained with ArcFace, and a Joint loss function is then used to minimize the intra-class distance of the DFW training samples and enlarge their inter-class distance, yielding a good disguised face recognition effect.
The present invention provides a two-stage training method for disguised face recognition, comprising the following steps:
Step S1: preprocess the datasets required for training to obtain the sets Set_F and Set_S.
Step S2 (first stage): use Set_F as the training set and train the network with the ArcFace loss function.
Step S3: remove the last fully connected layer of the network.
Step S4 (second stage): use Set_S as the training set and train the network with the ArcFace loss function.
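For concreteness, a minimal PyTorch sketch of this two-stage procedure is given below. The patent provides no code, so build_backbone, the loss classes and the data loaders are hypothetical stand-ins; stage 2 follows the detailed description below in pairing DFW triplets with the Joint loss.

    import torch

    # Hypothetical stand-ins for the components described in this patent.
    from model import build_backbone             # ResNet50IR + output module -> 512-d feature v
    from losses import ArcFaceLoss, JointLoss    # stage-1 and stage-2 loss functions
    from data import set_f_loader, set_s_loader  # Set_F (e.g. MS-Celeb-1M), Set_S (DFW triplets)

    backbone = build_backbone()

    # Stage 1 (step S2): train on Set_F with ArcFace. The ArcFace head owns the
    # last fully connected layer (the class-weight matrix W).
    arcface = ArcFaceLoss(dim=512, num_classes=85000, s=64.0, m=0.5)
    opt = torch.optim.SGD(list(backbone.parameters()) + list(arcface.parameters()), lr=0.1)
    for images, labels in set_f_loader:
        loss = arcface(backbone(images), labels)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step S3: the last fully connected layer lives in the ArcFace head and is
    # simply discarded; only the backbone (the feature extractor) is kept.

    # Stage 2 (step S4): fine-tune the backbone on Set_S triplets.
    joint = JointLoss(alpha=0.3, lam=0.3)
    opt = torch.optim.SGD(backbone.parameters(), lr=0.01)
    for x_a, x_p, x_n in set_s_loader:
        loss = joint(backbone(x_a), backbone(x_p), backbone(x_n))
        opt.zero_grad(); loss.backward(); opt.step()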
As a further technical solution of the present invention, the model used in the first stage consists of a ResNet50IR residual network; an output module composed of a BatchNorm layer, a Dropout layer, a fully connected layer and another BatchNorm layer; a classification module composed of a fully connected layer and a Softmax classification layer; and the ArcFace loss function. The ResNet50IR residual network together with the output module serves as the backbone network for extracting features.
Further, the ResNet50IR residual network uses residual units on the basis of a 50-layer ResNet. Each residual unit is a 6-layer composite structure of BatchNorm-Conv-BatchNorm-PReLU-Conv-BatchNorm, whose output size is determined by the stride of the fifth layer (a convolutional layer): with stride 1 the output has the same size as the input; with stride 2 the output is half the size of the input. The ResNet50IR residual network consists of an Input part and 4 convolution modules having 3, 4, 14 and 3 residual units respectively; the first residual unit of each convolution module is responsible for reducing the output dimension. The Dropout parameter of the output module is 0.5, the fully connected layer outputs a 512-dimensional vector, and the final feature vector v is obtained after one more BatchNorm layer.
Further, the feature vector v must be normalized before being input to the fully connected layer, so that ||v|| = 1. The dimension of the fully connected layer's weight depends on the number of label classes of the training set: when the number of classes is P, the weight matrix W has dimension D*P. When the MS-Celeb-1M dataset is used as the training set, P is 85K and D is the length of the feature vector v, namely 512. The bias b of the fully connected layer is set to zero and each column of W is normalized; the i-th element of the output vector of the fully connected layer is then v·w_i = cos θ_i, where w_i is the i-th column of the weight matrix W.
Further, the network is trained with the ArcFace loss function:

L = -(1/N) Σ_{i=1}^{N} log( e^{s·cos(θ_{y_i,i}+m)} / ( e^{s·cos(θ_{y_i,i}+m)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) )

where the hyperparameters s and m are 64 and 0.5 respectively, θ_{j,i} is the angle between the feature vector v_i produced by the i-th input and the weight vector w_j, and y_i is the correct label value corresponding to v_i.
Further, the model of the second stage consists of a feature extraction network and the Joint loss function, the feature extraction network being the backbone network of the first stage. The Joint loss function formula is

L_Joint = Σ_i max(0, ⟨f(x_i^a), f(x_i^n)⟩ - ⟨f(x_i^a), f(x_i^p)⟩ + α) + λ Σ_i (1 - ⟨f(x_i^a), f(x_i^p)⟩)

where the front part is the Triplet loss and the rear part is the Pair loss; f(x_i) is the feature vector v_i output by the feature extraction network after normalization; ⟨f(x_1), f(x_2)⟩ is the inner product of feature vectors, i.e. the cosine of the angle between the vectors v_1 and v_2; and the parameters α and λ are positive.
Further, the training set of the second stage is the training set of the DFW dataset. Before training, the triplets (x_i^a, x_i^p, x_i^n) must be paired: first choose a Normal image as x_i^a; then choose a Validation or Disguised image under the same directory as the Normal image as x_i^p; finally choose an Impersonator image under the same directory as x_i^n.
Further, the pictures under each directory of the DFW dataset are divided into Normal, Validation, Disguised and Impersonator, where Normal, Validation and Disguised are the same person, and an Impersonator is a different person with a similar appearance.
Using only a small amount of disguised-face data, the present invention shifts the model's applicable domain from generic face recognition to disguised face recognition, and achieves a good recognition effect on the DFW test benchmark.
Detailed description of the invention
Fig. 1 is the training flow diagram of the invention;
Fig. 2 is the backbone network structure diagram of the invention;
Fig. 3 is the residual unit structure diagram of the invention;
Fig. 4 compares different loss functions in stage 2 of the invention;
Fig. 5 shows examples from the DFW dataset;
Fig. 6 compares the DFW test results.
Specific embodiment
Referring to Fig. 1, the overall flow of this embodiment is divided into two stages. The network model used in stage 1 consists of the following 4 parts: (1) the ResNet50IR residual network; (2) an output module composed of a BatchNorm layer, a Dropout layer, a fully connected layer and another BatchNorm layer; (3) a classification module composed of a fully connected layer and a Softmax classification layer; (4) the ArcFace loss function. Parts (1) and (2) together serve as the backbone network for feature extraction; Fig. 2 gives their specific network structure and the dimension of each individual output. Stage 1 is analyzed in detail as follows:
(1) ResNet50IR uses the improved residual unit shown in Fig. 3 on the basis of the conventional 50-layer ResNet. This residual unit is a 6-layer composite structure of BatchNorm-Conv-BatchNorm-PReLU-Conv-BatchNorm. The output size of the whole residual unit is controlled by the stride of the fifth layer (the second convolution): when the stride is 1 the output has the same size as the input, and when the stride is 2 the output is half the size of the input.
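A minimal PyTorch sketch of such an IR residual unit follows; the patent fixes only the BN-Conv-BN-PReLU-Conv-BN order and the stride rule, so the kernel sizes and the 1x1-convolution shortcut are illustrative assumptions.

    import torch
    from torch import nn

    class IRBlock(nn.Module):
        """6-layer BatchNorm-Conv-BatchNorm-PReLU-Conv-BatchNorm residual unit."""
        def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
            super().__init__()
            self.body = nn.Sequential(
                nn.BatchNorm2d(in_ch),
                nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.PReLU(out_ch),
                # 5th layer: its stride controls the output size (1 keeps it, 2 halves it).
                nn.Conv2d(out_ch, out_ch, 3, stride=stride, padding=1, bias=False),
                nn.BatchNorm2d(out_ch),
            )
            # Shortcut branch: identity when shapes match, projection otherwise (an assumption).
            if stride == 1 and in_ch == out_ch:
                self.shortcut = nn.Identity()
            else:
                self.shortcut = nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                    nn.BatchNorm2d(out_ch),
                )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.body(x) + self.shortcut(x)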
(2) ResNet50IR consists of 5 parts: an Input part and 4 convolution modules, with 3, 4, 14 and 3 residual units respectively. The first residual unit of each module is responsible for reducing the output dimension (by setting the stride of the second convolutional layer in that unit to 2). Fig. 2 lists the output dimension of each module for an input dimension of [112*112*3].
(3) The parameter of the Dropout layer in the output module is set to 0.5, that is, a random half of the units have their outputs zeroed at this layer, which increases the robustness of the network. The fully connected layer outputs a 512-dimensional vector, and the final feature vector v is obtained after one more BatchNorm layer.
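A sketch of this output module, assuming the backbone leaves a 512-channel 7*7 map for a [112*112*3] input (an assumption consistent with Fig. 2's halving rule):

    import torch
    from torch import nn

    class OutputModule(nn.Module):
        """BatchNorm -> Dropout(0.5) -> FC -> BatchNorm, producing the 512-d feature v."""
        def __init__(self, in_ch: int = 512, spatial: int = 7, dim: int = 512):
            super().__init__()
            self.bn1 = nn.BatchNorm2d(in_ch)
            self.dropout = nn.Dropout(p=0.5)   # zeroes a random half of the units
            self.fc = nn.Linear(in_ch * spatial * spatial, dim)
            self.bn2 = nn.BatchNorm1d(dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.dropout(self.bn1(x)).flatten(1)
            return self.bn2(self.fc(x))        # final feature vector v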
(4) Before the feature vector v is input to the fully connected layer, it is normalized so that ||v|| = 1. The dimension of the fully connected layer's weight depends on the number of label classes of the training set: when the number of classes is P, the weight matrix W has dimension D*P (D rows, P columns); when the MS-Celeb-1M dataset is used as the training set, P is 85k and D is the length of the feature vector v, here 512. The bias b of the fully connected layer is set to zero and each column of W is normalized. Writing the i-th column of W as w_i, the i-th element of the output vector of the fully connected layer is:

v·w_i = ||v|| ||w_i|| cos θ = cos θ   (1.1)

where θ is the angle between the two vectors v and w_i.
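In code, formula (1.1) is just a linear layer applied to unit-normalized inputs and weights; a minimal expression (with 1,000 classes for illustration; P is 85k for MS-Celeb-1M):

    import torch
    import torch.nn.functional as F

    v = torch.randn(32, 512)      # a batch of feature vectors
    W = torch.randn(1000, 512)    # one weight vector w_i per class (stored as rows here)
    # With both sides normalized, every output element is v·w_i = cos(theta_i), as in (1.1).
    cos_logits = F.linear(F.normalize(v), F.normalize(W))   # shape [32, 1000]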
(5) The whole network is trained with the loss function proposed by ArcFace:

L = -(1/N) Σ_{i=1}^{N} log( e^{s·cos(θ_{y_i,i}+m)} / ( e^{s·cos(θ_{y_i,i}+m)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) )   (1.2)

The hyperparameters s and m in the formula are set to 64 and 0.5 respectively; θ_{j,i} refers to the angle between the feature vector v_i produced by the i-th input and the weight vector w_j, and y_i denotes the correct label value corresponding to v_i. Relative to the commonly adopted Softmax loss function shown in formula (1.3),

L = -(1/N) Σ_{i=1}^{N} log( e^{v_i·w_{y_i} + b_{y_i}} / Σ_{j=1}^{P} e^{v_i·w_j + b_j} )   (1.3)

formula (1.2) makes the following improvements:
The bias b of the fully connected layer is set to 0, and the feature vector v and the weight vectors w_i are normalized; the inner product of v and w_i can then be regarded as the cosine of the angle between the two vectors, see formula (1.1). Regarding formula (1.2) as a function of the angles θ_{j,i} and taking the gradient, the direction of fastest descent of the loss function is the one in which θ_{y_i,i} decreases and θ_{j,i}, j ≠ y_i, increases. Training therefore lets each feature vector v_i move as close as possible to the weight vector w_{y_i} representing its label class, while moving away from the weight vectors w_j, j ≠ y_i, that do not represent that class. Thus, as training proceeds, feature vectors with the same label gather in the same region, and feature vectors with different labels are pulled apart in angle, i.e. the intra-class distance shrinks and the inter-class distance grows.
cos(θ_{y_i,i} + m) is used instead of cos(θ_{y_i,i}); this further reduces the intra-class distance of the feature vectors. Consider that even when θ_{y_i,i} is already small, cos(θ_{y_i,i} + m) remains noticeably below cos(θ_{y_i,i}), so the loss function (1.2) still keeps a relatively large value; to make the loss continue to decline, θ_{y_i,i} must decrease further.
Besides normalizing the feature vector v and the weight vectors w_i, a hyperparameter s is also introduced. Using a larger value of s makes training easier to carry out and the network easier to converge; it is usually set to 64. Without s, i.e. using only cos(θ_{y_i,i} + m) in place of cos(θ_{y_i,i}), the model is often hard to converge in practice. When s is set larger, the loss on a classification error is comparatively larger than without s, forcing the iteration in the correct direction, while the loss on a correct classification is comparatively smaller than the original, so that training converges easily.
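Putting (4) and (5) together, a compact sketch of an ArcFace head in PyTorch might read as follows (a reconstruction from formulas (1.1) and (1.2), not the patent's own code):

    import torch
    import torch.nn.functional as F
    from torch import nn

    class ArcFaceLoss(nn.Module):
        """Normalized FC layer plus additive angular margin, as in formula (1.2)."""
        def __init__(self, dim: int = 512, num_classes: int = 85000,
                     s: float = 64.0, m: float = 0.5):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_classes, dim))
            self.s, self.m = s, m

        def forward(self, v: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
            # cos(theta_{j,i}) for every class j, per formula (1.1); clamp guards acos.
            cos = F.linear(F.normalize(v), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
            theta = torch.acos(cos)
            # Add the margin m only to the target-class angle theta_{y_i,i}.
            target = F.one_hot(labels, cos.size(1)).bool()
            logits = torch.where(target, torch.cos(theta + self.m), cos)
            # Scale by s, then apply the usual softmax cross-entropy.
            return F.cross_entropy(self.s * logits, labels)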
The first stage of training generally uses a large face dataset such as VGG2 or MS-Celeb-1M. The purpose of this stage is to obtain a feature extraction network that performs well on non-disguised faces (the network with the last fully connected layer removed).
The network model used in stage 2 consists of (1) the feature extraction network and (2) the Joint loss function, where the feature extraction network is the backbone network obtained after stage 1. Stage 2 is analyzed in detail as follows:
(1) The Joint loss function is shown in the following formula:

L_Joint = Σ_i max(0, ⟨f(x_i^a), f(x_i^n)⟩ - ⟨f(x_i^a), f(x_i^p)⟩ + α) + λ Σ_i (1 - ⟨f(x_i^a), f(x_i^p)⟩)   (1.5)

Obviously, formula (1.5) consists of two parts: the front part is known as the Triplet loss and the rear part as the Pair loss. In the formula, f(x_i) denotes the feature vector v_i output by the feature extraction network after normalization, and ⟨f(x_1), f(x_2)⟩ denotes the inner product of two feature vectors, i.e. the cosine of the angle between the vectors v_1 and v_2. The parameters α and λ take positive values. When training with this loss function, the photos must be organized into groups of three, i.e. the triplets (x_i^a, x_i^p, x_i^n) in the formula. The pair (x_i^a, x_i^p) within a triplet is called a positive sample pair and requires identical labels, while (x_i^a, x_i^n) requires different labels and is called a negative sample pair. The Triplet loss controls the distance between the positive pair to be smaller than the distance between the negative pair; the specific gap is controlled by the parameter α, which generally takes a value around 0.3. The Pair loss controls the distance within the positive pair, further limiting the intra-class distance; it exists because using only the Triplet loss may control the inter-class spacing without reducing the intra-class distance. Fig. 4 compares the angle distributions between positive samples obtained after training with only the Triplet loss and with the Joint loss; after the Pair loss is added, the angle distribution between positive pairs is clearly better. The parameter λ generally takes 0.3 or 0.4.
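A sketch of this Joint loss, written against the reconstructed formula (1.5) above (the exact form of the Pair term is an assumption):

    import torch
    import torch.nn.functional as F
    from torch import nn

    class JointLoss(nn.Module):
        """Triplet loss on cosine similarities plus a Pair term tightening positive pairs."""
        def __init__(self, alpha: float = 0.3, lam: float = 0.3):
            super().__init__()
            self.alpha, self.lam = alpha, lam

        def forward(self, a, p, n):
            a, p, n = F.normalize(a), F.normalize(p), F.normalize(n)
            sim_ap = (a * p).sum(dim=1)   # cosine between anchor and positive
            sim_an = (a * n).sum(dim=1)   # cosine between anchor and negative
            # Triplet term: the positive pair must be closer than the negative pair by alpha.
            triplet = F.relu(sim_an - sim_ap + self.alpha)
            # Pair term: cosine distance within the positive pair itself.
            pair = 1.0 - sim_ap
            return (triplet + self.lam * pair).mean()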
(2) The training set used in stage 2 is the training set of the DFW dataset. Before training, the triplets (x_i^a, x_i^p, x_i^n) must be paired: 1) choose a Normal image as x_i^a; 2) choose a Validation or Disguised image under the same directory as the Normal image as x_i^p; 3) choose an Impersonator image under the same directory as x_i^n. (Note: the pictures under each directory of the DFW dataset fall into 4 kinds: Normal, Validation, Disguised and Impersonator. Normal, Validation and Disguised are the same person, while an Impersonator is a different person with a similar appearance; examples are shown in Fig. 5.)
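A minimal sketch of this pairing; the directory layout and file naming are hypothetical readings of the DFW structure described above:

    import random
    from pathlib import Path

    def make_triplets(dfw_root: str):
        """Yield one (x_a, x_p, x_n) file-path triplet per DFW identity directory."""
        for ident in Path(dfw_root).iterdir():
            normals = list(ident.glob("*normal*"))          # hypothetical naming
            positives = list(ident.glob("*validation*")) + list(ident.glob("*disguised*"))
            negatives = list(ident.glob("*impersonator*"))
            if normals and positives and negatives:
                yield (random.choice(normals),     # x_a: Normal
                       random.choice(positives),   # x_p: Validation or Disguised (same person)
                       random.choice(negatives))   # x_n: Impersonator (similar-looking other person)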
Through the training of stage 1, the present invention obtains a feature extraction network for generic faces. Stage 2 then addresses the generally small size of today's disguised-face datasets: it uses the idea of the Triplet loss function to train on paired triplets, and uses the Pair loss to make up for the shortcomings of the Triplet loss alone, completing the migration of the network's scope of application to disguised faces. Joining the two kinds of processing together gives the present invention a considerable improvement in results.
The model of the invention (stage 1 trained on the MS-Celeb-1M dataset, stage 2 trained on the DFW training set) was evaluated on the disguised face recognition test set provided by DFW. The GAR at FAR = 1% and FAR = 0.1% is respectively: 1) protocol-1: 97.98% and 60.23%; 2) protocol-2: 90.37% and 82.84%; 3) protocol-3: 90.4% and 81.18%. GAR and FAR are explained below:
The DFW test dataset provides a batch of face images in pairs. In some of these pairs the two images belong to the same person; these pairs serve as positive samples. The remaining pairs belong to different people. What measures the similarity of two images is the distance between their feature vectors, but a distance alone obviously cannot decide whether the two images show the same person. The more common method at present is to add a threshold: a pair is judged positive when the distance is below this threshold, and negative otherwise.
Given a threshold, the values of TP, TN, FP and FN can be computed:
TP: the number of positive samples correctly identified by the algorithm;
TN: the number of negative samples correctly identified by the algorithm;
FP: the number of negative samples identified as positive;
FN: the number of positive samples identified as negative.
From these values, GAR and FAR are obtained:

GAR = TP / (TP + FN),  FAR = FP / (FP + TN)
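These are standard definitions; a small sketch of the computation (scoring by similarity above a threshold, equivalent to distance below one):

    import numpy as np

    def gar_far(scores: np.ndarray, is_same: np.ndarray, threshold: float):
        """scores: pairwise similarities; is_same: True for positive (same-person) pairs."""
        accept = scores >= threshold        # pairs the algorithm declares the same person
        tp = np.sum(accept & is_same)
        tn = np.sum(~accept & ~is_same)
        fp = np.sum(accept & ~is_same)
        fn = np.sum(~accept & is_same)
        gar = tp / (tp + fn)                # genuine accept rate
        far = fp / (fp + tn)                # false accept rate
        return gar, far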
Fig. 6 compares the model of the present invention with other models; all in all, the performance of the invention is better than most existing algorithms. (Note: the DFW dataset provides three different groups of positive and negative sample pairs, protocol-1, protocol-2 and protocol-3, where protocol-3 is the synthesis of the first two groups of pairs.)
The basic principles, main features and advantages of the invention have been shown and described above. Those skilled in the art should understand that the invention is not limited by the above specific embodiments; the descriptions in the above embodiments and in the specification merely illustrate the principles of the invention. Without departing from the spirit and scope of the invention, various changes and improvements are possible, and these changes and improvements all fall within the scope of the claimed invention. The scope of protection of the invention is defined by the claims and their equivalents.

Claims (8)

1. A two-stage training method for disguised face recognition, characterized by comprising the following steps:
step S1: preprocessing the datasets required for training to obtain the sets Set_F and Set_S;
step S2 (first stage): using Set_F as the training set and training the network with the ArcFace loss function;
step S3: removing the last fully connected layer of the network;
step S4 (second stage): using Set_S as the training set and training the network with the ArcFace loss function.
2. The two-stage training method for disguised face recognition according to claim 1, characterized in that the model used in the first stage consists of a ResNet50IR residual network; an output module composed of a BatchNorm layer, a Dropout layer, a fully connected layer and another BatchNorm layer; a classification module composed of a fully connected layer and a Softmax classification layer; and the ArcFace loss function; the ResNet50IR residual network and the output module serve as the backbone network for extracting features.
3. The two-stage training method for disguised face recognition according to claim 2, characterized in that the ResNet50IR residual network uses residual units on the basis of a 50-layer ResNet; each residual unit is a 6-layer composite structure of BatchNorm-Conv-BatchNorm-PReLU-Conv-BatchNorm whose output size is determined by the stride of the fifth convolutional layer: with stride 1 the output has the same size as the input, and with stride 2 the output is half the size of the input; the ResNet50IR residual network consists of an Input part and 4 convolution modules having 3, 4, 14 and 3 residual units respectively, the first residual unit of each convolution module being responsible for reducing the output dimension; the Dropout parameter of the output module is 0.5, the fully connected layer outputs a 512-dimensional vector, and the final feature vector v is obtained after one more BatchNorm layer.
4. The two-stage training method for disguised face recognition according to claim 3, characterized in that the feature vector v is normalized before being input to the fully connected layer so that ||v|| = 1; the dimension of the fully connected layer's weight depends on the number of label classes of the training set: when the number of classes is P, the weight matrix W has dimension D*P; when the MS-Celeb-1M dataset is used as the training set, P is 85K and D is the length of the feature vector v, namely 512; the bias b of the fully connected layer is set to zero and each column of W is normalized; the i-th element of the output vector of the fully connected layer is then v·w_i = cos θ_i, where w_i is the i-th column of the weight matrix W.
5. The two-stage training method for disguised face recognition according to claim 1 or 2, characterized in that the network is trained with the ArcFace loss function

L = -(1/N) Σ_{i=1}^{N} log( e^{s·cos(θ_{y_i,i}+m)} / ( e^{s·cos(θ_{y_i,i}+m)} + Σ_{j≠y_i} e^{s·cos θ_{j,i}} ) )

where the hyperparameters s and m are 64 and 0.5 respectively, θ_{j,i} is the angle between the feature vector v_i produced by the i-th input and the weight vector w_j, and y_i is the correct label value corresponding to v_i.
6. The two-stage training method for disguised face recognition according to claim 1, characterized in that the model of the second stage consists of a feature extraction network and the Joint loss function, the feature extraction network being the backbone network of the first stage, and the Joint loss function being

L_Joint = Σ_i max(0, ⟨f(x_i^a), f(x_i^n)⟩ - ⟨f(x_i^a), f(x_i^p)⟩ + α) + λ Σ_i (1 - ⟨f(x_i^a), f(x_i^p)⟩)

where the front part is the Triplet loss and the rear part is the Pair loss; f(x_i) is the feature vector v_i output by the feature extraction network after normalization; ⟨f(x_1), f(x_2)⟩ is the inner product of feature vectors, i.e. the cosine of the angle between the vectors v_1 and v_2; and the parameters α and λ are positive.
7. The two-stage training method for disguised face recognition according to claim 1, characterized in that the training set of the second stage is the training set of the DFW dataset, and the triplets (x_i^a, x_i^p, x_i^n) are paired before training: first a Normal image is chosen as x_i^a; then a Validation or Disguised image under the same directory as the Normal image is chosen as x_i^p; finally an Impersonator image under the same directory is chosen as x_i^n.
8. The two-stage training method for disguised face recognition according to claim 7, characterized in that the pictures under each directory of the DFW dataset are divided into Normal, Validation, Disguised and Impersonator, where Normal, Validation and Disguised are the same person, and an Impersonator is a different person with a similar appearance.
CN201910654611.0A 2019-07-19 2019-07-19 Two-stage training method for disguised face recognition Active CN110443162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910654611.0A CN110443162B (en) 2019-07-19 2019-07-19 Two-stage training method for disguised face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910654611.0A CN110443162B (en) 2019-07-19 2019-07-19 Two-stage training method for disguised face recognition

Publications (2)

Publication Number Publication Date
CN110443162A 2019-11-12
CN110443162B 2022-08-30

Family

ID=68430896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910654611.0A Active CN110443162B (en) 2019-07-19 2019-07-19 Two-stage training method for disguised face recognition

Country Status (1)

Country Link
CN (1) CN110443162B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401193A (en) * 2020-03-10 2020-07-10 海尔优家智能科技(北京)有限公司 Method and device for obtaining expression recognition model and expression recognition method and device
CN111860266A (en) * 2020-07-13 2020-10-30 南京理工大学 Disguised face recognition method based on depth features
CN112101192A (en) * 2020-09-11 2020-12-18 中国平安人寿保险股份有限公司 Artificial intelligence-based camouflage detection method, device, equipment and medium
CN113205058A (en) * 2021-05-18 2021-08-03 中国科学院计算技术研究所厦门数据智能研究院 Face recognition method for preventing non-living attack
CN113361346A (en) * 2021-05-25 2021-09-07 天津大学 Scale parameter self-adaptive face recognition method for replacing adjustment parameters
CN113780461A (en) * 2021-09-23 2021-12-10 中国人民解放军国防科技大学 Robust neural network training method based on feature matching

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117744A (en) * 2018-07-20 2019-01-01 杭州电子科技大学 A kind of twin neural network training method for face verification
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
CN109359541A (en) * 2018-09-17 2019-02-19 南京邮电大学 A kind of sketch face identification method based on depth migration study
CN109815801A (en) * 2018-12-18 2019-05-28 北京英索科技发展有限公司 Face identification method and device based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117744A (en) * 2018-07-20 2019-01-01 杭州电子科技大学 A kind of twin neural network training method for face verification
CN109359541A (en) * 2018-09-17 2019-02-19 南京邮电大学 A kind of sketch face identification method based on depth migration study
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
CN109815801A (en) * 2018-12-18 2019-05-28 北京英索科技发展有限公司 Face identification method and device based on deep learning

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401193A (en) * 2020-03-10 2020-07-10 海尔优家智能科技(北京)有限公司 Method and device for obtaining expression recognition model and expression recognition method and device
CN111401193B (en) * 2020-03-10 2023-11-28 海尔优家智能科技(北京)有限公司 Method and device for acquiring expression recognition model, and expression recognition method and device
CN111860266A (en) * 2020-07-13 2020-10-30 南京理工大学 Disguised face recognition method based on depth features
CN111860266B (en) * 2020-07-13 2022-09-30 南京理工大学 Disguised face recognition method based on depth features
CN112101192A (en) * 2020-09-11 2020-12-18 中国平安人寿保险股份有限公司 Artificial intelligence-based camouflage detection method, device, equipment and medium
CN113205058A (en) * 2021-05-18 2021-08-03 中国科学院计算技术研究所厦门数据智能研究院 Face recognition method for preventing non-living attack
CN113361346A (en) * 2021-05-25 2021-09-07 天津大学 Scale parameter self-adaptive face recognition method for replacing adjustment parameters
CN113361346B (en) * 2021-05-25 2022-12-23 天津大学 Scale parameter self-adaptive face recognition method for replacing adjustment parameters
CN113780461A (en) * 2021-09-23 2021-12-10 中国人民解放军国防科技大学 Robust neural network training method based on feature matching
CN113780461B (en) * 2021-09-23 2022-08-05 中国人民解放军国防科技大学 Robust neural network training method based on feature matching

Also Published As

Publication number Publication date
CN110443162B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN110443162A Two-stage training method for disguised face recognition
CN108596039B (en) Bimodal emotion recognition method and system based on 3D convolutional neural network
Wen et al. Latent factor guided convolutional neural networks for age-invariant face recognition
CN109063565B (en) Low-resolution face recognition method and device
CN103984948B (en) A kind of soft double-deck age estimation method based on facial image fusion feature
CN109359541A (en) A kind of sketch face identification method based on depth migration study
CN109325443A (en) A kind of face character recognition methods based on the study of more example multi-tag depth migrations
Xia et al. Toward kinship verification using visual attributes
CN107609572A (en) Multi-modal emotion identification method, system based on neutral net and transfer learning
Li et al. Deep cost-sensitive and order-preserving feature learning for cross-population age estimation
CN105205449A (en) Sign language recognition method based on deep learning
Zhang et al. Picking neural activations for fine-grained recognition
CN110555060A (en) Transfer learning method based on paired sample matching
CN107545536A (en) The image processing method and image processing system of a kind of intelligent terminal
CN110059593B (en) Facial expression recognition method based on feedback convolutional neural network
CN108154156B (en) Image set classification method and device based on neural topic model
CN103617609B (en) Based on k-means non-linearity manifold cluster and the representative point choosing method of graph theory
CN109117795B (en) Neural network expression recognition method based on graph structure
CN110414587A (en) Depth convolutional neural networks training method and system based on progressive learning
CN111881716A (en) Pedestrian re-identification method based on multi-view-angle generation countermeasure network
CN110516533A (en) A kind of pedestrian based on depth measure discrimination method again
CN106960185A (en) The Pose-varied face recognition method of linear discriminant depth belief network
CN111401116B (en) Bimodal emotion recognition method based on enhanced convolution and space-time LSTM network
CN103714340A (en) Self-adaptation feature extracting method based on image partitioning
CN105893941A (en) Facial expression identifying method based on regional images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant