CN108319943A - Method for improving face recognition model performance under glasses-wearing conditions - Google Patents

Method for improving face recognition model performance under glasses-wearing conditions

Info

Publication number
CN108319943A
CN108319943A (application CN201810377373.9A)
Authority
CN
China
Prior art keywords
glasses
human face
recognition model
face
face recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810377373.9A
Other languages
Chinese (zh)
Other versions
CN108319943B (en)
Inventor
Li Jikai (李继凯)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Polytron Technologies Inc Xingang Polytron Technologies Inc
Original Assignee
Beijing Polytron Technologies Inc Xingang Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Polytron Technologies Inc Xingang Polytron Technologies Inc filed Critical Beijing Polytron Technologies Inc Xingang Polytron Technologies Inc
Priority to CN201810377373.9A priority Critical patent/CN108319943B/en
Publication of CN108319943A publication Critical patent/CN108319943A/en
Application granted granted Critical
Publication of CN108319943B publication Critical patent/CN108319943B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Geometry (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A method for improving face recognition model performance under glasses-wearing conditions includes the following steps: for existing face training data without glasses, automatically add glasses to the face images so as to expand the data into glasses-wearing face training data; then train on the expanded glasses-wearing face training data to obtain the face recognition model. By adding glasses to face images of people not wearing glasses, the invention quickly and conveniently expands existing non-glasses training data into glasses-wearing training data and increases the scale and diversity of the training data. Compared with building a new glasses-wearing face training set, the method is low-cost, simple, efficient and effective, and saves substantial labor and financial cost. Meanwhile, training the face recognition model on the expanded samples gives the model better robustness to and recognition of faces wearing glasses, greatly improving overall recognition accuracy.

Description

Method for improving face recognition model performance under glasses-wearing conditions
Technical field
The invention belongs to the field of computer vision and relates to a face detection and recognition method, and more particularly to a method for improving face recognition model performance under glasses-wearing conditions.
Background technology
To improve the recognition performance of face recognition algorithms, training usually has to be carried out on a large amount of training data before a well-performing face recognition model can be obtained. With the same model structure, the scale and diversity of the training data have a decisive influence on the final performance of the model. In practical applications, face recognition technology often faces the following problems. First, most existing training data consist of faces without glasses; a face recognition model trained on such data recognizes glasses-wearing faces poorly, while building a new large-scale glasses-wearing face training set would not only consume substantial manpower and financial resources but also take a long time. Second, it is common for a user not to wear glasses when registering for face recognition but to wear glasses at detection time; in this case the face recognition model built at registration usually cannot associate the same face with and without glasses, so recognition performance also becomes very poor.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a low-cost, concise method that can significantly improve face recognition model performance under glasses-wearing conditions.
To achieve the above object, the present invention adopts the following technical solution:
A method for improving face recognition model performance under glasses-wearing conditions includes the following steps: (1) for existing face training data without glasses, automatically add glasses to the face images so as to expand the data into glasses-wearing face training data; (2) train on the expanded glasses-wearing face training data to obtain the face recognition model.
Further, the automatic glasses-adding algorithm for face images in step (1) includes the following steps: (1.1) use a face detection algorithm based on a cascaded convolutional network to locate the face position and facial key point positions of each face image in the training data; (1.2) estimate the face pose angle from the facial key point positions; (1.3) apply an angular transformation to the glasses material image according to the face pose angle; (1.4) perform a pixel-level locally weighted sum of the transformed glasses material image and the face image to obtain a glasses-wearing face image.
Further, when the pixel-level locally weighted sum of the transformed glasses material image and the face image is performed in step (1.4), the position of the glasses material image is perturbed in the vertical direction to obtain multiple glasses-wearing face images with different glasses positions.
Further, in step (1.4), weighted sums are computed with multiple different weights to obtain multiple glasses-wearing face images with different lens reflection effects.
Further, step (2) includes the following steps: (2.1) label the expanded glasses-wearing face training data by class; (2.2) build a face recognition model based on a residual deep convolutional network to extract deeper image features; (2.3) establish a classification loss function based on a softmax function with an enlarged inter-class margin to evaluate the classification error of the network; (2.4) optimize the classification loss function using error back-propagation and stochastic gradient descent; (2.5) after multiple iterations of computation, the classification loss function decreases and converges, yielding the face recognition model.
Further, the labelling method in step (2.1) is: face images of the same person share the same label, which differs from the labels of other face images.
Further, the classification cost function in step (2.3) is:
L = -(1/n) · Σ_{i=1}^{n} log [ e^{s(cos θ_{y_i} - m)} / ( e^{s(cos θ_{y_i} - m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) ]
where n is the total number of training samples, s is the L2 norm of the feature (the scale to which features are rescaled), m is the shift term, y_i is the class of sample i, θ_{y_i} is the angle between the feature vector x and the network weight vector W, the L2 norm of each weight vector W is normalized to 1, and x̂ is the normalized feature vector rescaled to length s.
A face recognition method: using the face recognition model obtained by the above method for improving face recognition model performance under glasses-wearing conditions, compute features for the face image to be recognized and evaluate the similarity between these features and known face features; the known face with the highest similarity that also exceeds a threshold is taken as the recognition result; if the similarities to all known faces are below the threshold, the face is judged to be an unknown face.
By adding glasses to face images of people not wearing glasses, the method of the present invention for improving face recognition model performance under glasses-wearing conditions can quickly and conveniently expand existing non-glasses training data into glasses-wearing training data, increasing the scale and diversity of the training data. Compared with building a new glasses-wearing face training set, the method is low-cost, simple, efficient and effective, and saves substantial labor and financial cost. Meanwhile, training the face recognition model on the expanded samples gives the deep-convolutional-network model better robustness to and recognition of faces wearing glasses, greatly improving overall recognition accuracy.
Description of the drawings
Fig. 1 is a flow chart of the method for improving face recognition model performance under glasses-wearing conditions in the embodiment;
Fig. 2 is the structure of the first network of the cascaded convolutional network in the embodiment;
Fig. 3 is the structure of the fourth network of the cascaded convolutional network in the embodiment;
Fig. 4 is the structure of the residual unit in the embodiment.
Detailed description of the embodiments
A specific embodiment of the method of the present invention for improving face recognition model performance under glasses-wearing conditions is further described below with reference to Figs. 1 to 4. The method of the present invention is not limited to the following description.
As shown in Fig. 1, a method for improving face recognition model performance under glasses-wearing conditions mainly includes the following two steps:
(1) For the existing face training data without glasses, use the automatic glasses-adding algorithm for face images to add glasses, one by one, to every face in the training set, thereby expanding the data into glasses-wearing face training data;
(2) Train on the expanded glasses-wearing face training data to obtain the face recognition model.
The basic processing idea of step (1) is as follows. First, a cascaded convolutional network performs key point detection on the existing face database, and the tilt angle of each face is obtained from the positions of the eyes. Then the glasses material image is given an affine transformation with the same tilt angle, and a pixel-wise weighted sum of the transformed glasses image and the face image is computed; adjusting the weights yields glasses-wearing face images with different reflection effects. Finally, the glasses-added face images are merged with the original face database; face images of the same person are given the same label, and the labels of different people differ. Step (1) is implemented as follows:
(1.1) Use the face detection algorithm based on a cascaded convolutional network to locate the face position and facial key point positions of the face images in the training data. The cascaded convolutional network used in this step consists of four convolutional neural networks; the first three networks are connected in series and use basic operation layers such as conventional convolution layers and pooling layers.
Taking the first convolutional neural network as an example, its structure is shown in Fig. 2, where data denotes the input image, conv denotes a convolution operation, PRelu denotes the activation function operation, pool denotes a pooling operation, and prob represents the output confidence. The conv4-2 layer outputs the coordinate position of the target, prob1 outputs the confidence that the target is a face, and the activation function PReLU is:
PReLU(x_i) = x_i for x_i > 0, and PReLU(x_i) = α_i · x_i for x_i ≤ 0,
where x_i is the input of the activation function and α_i is a positive coefficient.
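As an illustration, PyTorch's built-in nn.PReLU implements exactly this activation with one learnable coefficient α_i per channel; the channel count and feature-map size below are assumptions made only for the example.

```python
# Sketch: the PReLU activation used inside each cascade network, via PyTorch's
# built-in module (one coefficient alpha_i is learned per channel).
import torch
import torch.nn as nn

prelu = nn.PReLU(num_parameters=32, init=0.25)  # 32 channels assumed for illustration
x = torch.randn(1, 32, 12, 12)                  # example feature map
y = prelu(x)                                    # y = x where x > 0, alpha_i * x elsewhere
```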
The fourth convolutional neural network estimates the positions of five key points on the basis of the face location output by the third convolutional neural network; its structure is shown in Fig. 3, where data denotes the whole face image, slice_data splits the input data into 5 branches, conv denotes a convolutional layer, PRelu denotes a parametric ReLU activation layer, pool denotes a pooling layer, fc denotes a fully connected layer, and concat concatenates the 5 branches. The slice operation divides the whole face into five local patches according to the expected locations of the five key points in the face region of an average face; each branch extracts local features through convolution, pooling and related operations. After local feature extraction, the five branches are joined by the concat layer, and a fully connected layer further mines the correlations among the five local features. Finally, the positions of the five key points are estimated by combining the local and global features.
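The patent trains its own four-network cascade; as a stand-in for illustration, the sketch below uses MTCNN from the facenet_pytorch package, a widely used cascaded convolutional detector that also returns a face box plus five key points (eyes, nose tip, mouth corners). This is an illustrative substitute, not the patented network, and the landmark ordering noted in the comment should be checked against the library documentation.

```python
# Sketch: obtain a face box and five facial key points with an off-the-shelf
# cascaded convolutional detector (MTCNN), standing in for the patent's own
# four-network cascade. Assumes the facenet_pytorch package is installed.
from facenet_pytorch import MTCNN
from PIL import Image

def detect_landmarks(image_path):
    mtcnn = MTCNN()                                   # cascaded P-Net / R-Net / O-Net detector
    img = Image.open(image_path).convert("RGB")
    boxes, probs, landmarks = mtcnn.detect(img, landmarks=True)
    if landmarks is None:
        return None                                   # no face found
    # landmarks[0]: (5, 2) array of key points for the first detected face
    # (left eye, right eye, nose, mouth corners by convention).
    return boxes[0], landmarks[0]
```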
(1.2) Estimate the face pose angle from the eye key point positions. Assuming the coordinate of the left-eye key point is (x_1, y_1) and the coordinate of the right-eye key point is (x_2, y_2), the tilt angle θ of the face in the horizontal direction is:
θ = arctan( (y_2 - y_1) / (x_2 - x_1) )
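A minimal sketch of this computation; using atan2 instead of a bare arctangent so the sign is handled for any eye ordering is an implementation choice, not something the patent specifies.

```python
# Sketch: in-plane face tilt angle from the two eye key points.
import math

def eye_tilt_angle(left_eye, right_eye):
    """left_eye, right_eye: (x, y) pixel coordinates. Returns the angle in degrees."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))  # positive when the right eye sits lower in the image
```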
(1.3) Apply an angular transformation to the glasses material image according to the face pose angle. Assuming a pixel coordinate in the original image is (x, y) and the corresponding pixel coordinate in the image after the angular transformation is (x′, y′), the two satisfy the following relation:
x′ = x · cos θ - y · sin θ
y′ = x · sin θ + y · cos θ
i.e., a planar rotation of the glasses image by the face tilt angle θ (the sign convention depends on the image coordinate system).
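A minimal sketch of this rotation with OpenCV; rotating about the centre of the glasses image is an assumption made for illustration, since the patent only requires that the glasses image be rotated by the face tilt angle.

```python
# Sketch: rotate the glasses material image by the face tilt angle theta (degrees).
import cv2

def rotate_glasses(glasses_img, theta_deg):
    h, w = glasses_img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta_deg, 1.0)  # rotate about the centre
    return cv2.warpAffine(glasses_img, M, (w, h),
                          flags=cv2.INTER_LINEAR,
                          borderValue=(0, 0, 0))                     # fill exposed corners with black
```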
(1.4) Perform a pixel-level locally weighted sum of the transformed glasses material image and the face image to obtain a glasses-wearing face image. Varying the weights of the weighted sum and slightly perturbing the position of the glasses in the vertical direction helps increase the diversity of the samples; adjusting the weight of the lens region produces reflection effects of different strengths, which helps increase the model's robustness to illumination.
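The following is a minimal sketch of the local weighted blend with vertical jitter and a lens-region weight for simulating reflections. Placing the glasses so that their centre sits at the midpoint of the eyes, and treating near-black pixels of the material image as background, are assumptions made for illustration; the patent does not fix these details.

```python
# Sketch: pixel-level locally weighted sum of the rotated glasses material image
# and the face image, with vertical jitter (dy) and an adjustable lens weight.
import numpy as np

def blend_glasses(face, glasses, eye_center, dy=0, lens_weight=0.6):
    """face, glasses: uint8 BGR images; eye_center: (x, y) midpoint between the eyes;
    dy: vertical jitter in pixels; lens_weight: weight given to the glasses pixels
    (a larger value looks like a stronger lens reflection)."""
    gh, gw = glasses.shape[:2]
    x0 = int(eye_center[0] - gw / 2)
    y0 = int(eye_center[1] - gh / 2) + dy
    out = face.copy()
    if x0 < 0 or y0 < 0 or y0 + gh > out.shape[0] or x0 + gw > out.shape[1]:
        return out                                    # glasses would fall outside the image
    roi = out[y0:y0 + gh, x0:x0 + gw]
    mask = glasses.sum(axis=2, keepdims=True) > 30    # non-background pixels of the material image
    blended = lens_weight * glasses.astype(np.float32) + (1.0 - lens_weight) * roi.astype(np.float32)
    roi[:] = np.where(mask, blended, roi).astype(np.uint8)
    return out
```

Calling blend_glasses several times with different dy and lens_weight values yields multiple glasses-wearing variants of one face, which is the sample-diversity effect described above.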
The basic processing idea of step (2) is as follows. First, data enhancement operations such as illumination variation, left-right mirroring and chromaticity variation are applied to the labelled face data. The data are then fed into a residual network whose loss function is a softmax loss with an enlarged inter-class margin and trained. Finally the face recognition model is obtained. Step (2) is implemented as follows:
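A minimal sketch of the data-enhancement step using torchvision transforms; ColorJitter and RandomHorizontalFlip are standard operations standing in for the illumination, mirroring and chromaticity variations mentioned above, and the specific parameter values and input size are illustrative assumptions.

```python
# Sketch: illumination / mirror / chromaticity augmentation for the labelled
# glasses-wearing training data, using standard torchvision transforms.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3,   # illumination variation
                           saturation=0.2, hue=0.05),      # chromaticity variation
    transforms.RandomHorizontalFlip(p=0.5),                # left-right mirroring
    transforms.Resize((112, 112)),                         # assumed input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```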
(2.1) Label the expanded glasses-wearing face training data by class. Face images of the same person share the same label, which differs from the labels of other face images.
(2.2) Build the face recognition model based on a residual deep convolutional network. Using a residual network lets the convolutional network add more layers without suffering from vanishing gradients, which helps extract deeper image features.
The structure of the residual unit is shown in Fig. 4, where conv denotes a convolutional layer, relu denotes an activation layer, and res denotes the residual layer. The res layer combines its two input paths pixel by pixel; its purpose is to deepen the network and introduce more parameters while avoiding vanishing gradients, thereby improving the performance of the network.
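As an illustration, a residual block is sketched below in PyTorch. It uses the conventional additive skip connection rather than reproducing the exact res layer of the patent's Fig. 4, and the channel count and kernel size are illustrative assumptions.

```python
# Sketch: a conventional residual unit (conv -> relu -> conv plus skip connection).
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)   # skip connection keeps gradients flowing in deep nets
```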
(2.3) Establish the classification cost function based on a softmax function with an enlarged inter-class margin to evaluate the classification error of the network. Enlarging the inter-class margin helps improve the separability of the features of different faces. The classification cost function is:
L = -(1/n) · Σ_{i=1}^{n} log [ e^{s(cos θ_{y_i} - m)} / ( e^{s(cos θ_{y_i} - m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) ]
where n is the total number of training samples, s is the L2 norm of the feature (the scale to which features are rescaled), m is the shift term used to enlarge the inter-class margin, y_i is the class of sample i, θ_{y_i} is the angle between the feature vector and the corresponding network weight vector, and x̂ is the normalized feature vector rescaled to length s.
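A minimal PyTorch sketch of this margin-enhanced softmax loss in its additive-margin formulation, consistent with the cited Additive Margin Softmax work; the values s=30 and m=0.35 are illustrative defaults, not taken from the patent.

```python
# Sketch: additive-margin softmax classification loss (enlarged inter-class margin).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmaxLoss(nn.Module):
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_classes, feat_dim))  # class weight vectors
        self.s, self.m = s, m

    def forward(self, features, labels):
        # Normalize features and weights so the logits become cosines of the angles theta_j.
        x = F.normalize(features, dim=1)
        w = F.normalize(self.W, dim=1)
        cos_theta = F.linear(x, w)                          # shape (batch, num_classes)
        # Subtract the margin m from the target-class cosine only, then rescale by s.
        onehot = F.one_hot(labels, cos_theta.size(1)).float()
        logits = self.s * (cos_theta - self.m * onehot)
        return F.cross_entropy(logits, labels)
```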
(2.4) Optimize the objective function using error back-propagation and stochastic gradient descent.
(2.5) After multiple iterations of computation, the loss function decreases and converges, yielding a face recognition model optimized for recognizing faces that wear glasses.
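A compact sketch of steps (2.4) and (2.5): stochastic gradient descent with back-propagation, iterating until the loss converges. The learning rate, momentum and epoch count are illustrative assumptions, and `backbone` stands for any embedding network built from residual units such as the one sketched above.

```python
# Sketch: optimise the margin-softmax loss with back-propagation and SGD.
import torch

def train(backbone, criterion, train_loader, epochs=20, lr=0.1, device="cuda"):
    backbone.to(device)
    criterion.to(device)                        # the loss module holds the class weight matrix W
    params = list(backbone.parameters()) + list(criterion.parameters())
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=5e-4)

    for epoch in range(epochs):
        running = 0.0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            features = backbone(images)         # deep features of the face images
            loss = criterion(features, labels)  # margin-enhanced softmax loss
            optimizer.zero_grad()
            loss.backward()                     # error back-propagation
            optimizer.step()                    # stochastic gradient descent update
            running += loss.item()
        print(f"epoch {epoch}: mean loss {running / max(1, len(train_loader)):.4f}")
    return backbone
```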
With the face recognition model obtained by the above method, a face image is judged to be a particular known face or an unknown face as follows: first compute the features of the face image to be recognized and evaluate its cosine similarity against the known face features; then decide whether the image belongs to a known face or is an unknown face. The specific decision rule is: the known face with the highest similarity that also exceeds a threshold is taken as the recognition result; if the similarities to all known faces are below the threshold, the face is judged to be an unknown face. When face images are recognized with the face recognition model obtained by the above method, the recognition performance is clearly better than that of a face recognition model obtained with conventional methods.
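A minimal sketch of this decision rule; the gallery layout (a dict mapping a person's name to a stored feature vector) and the threshold value are illustrative assumptions.

```python
# Sketch: identify a face by cosine similarity against known (gallery) features,
# falling back to "unknown" when no similarity exceeds the threshold.
import numpy as np

def identify(query_feat, gallery, threshold=0.5):
    """query_feat: 1-D feature vector; gallery: dict mapping name -> feature vector."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    best_name, best_sim = None, -1.0
    for name, feat in gallery.items():
        f = feat / (np.linalg.norm(feat) + 1e-12)
        sim = float(np.dot(q, f))               # cosine similarity of the two features
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_sim >= threshold:
        return best_name, best_sim              # recognised as a known face
    return "unknown", best_sim                  # all similarities below the threshold
```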
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it cannot be assumed that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions can also be made without departing from the concept of the invention, and all of these shall be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A method for improving face recognition model performance under glasses-wearing conditions, characterized by comprising the following steps:
(1) for existing face training data without glasses, automatically adding glasses to the face images so as to expand the data into glasses-wearing face training data;
(2) training on the expanded glasses-wearing face training data to obtain a face recognition model.
2. The method for improving face recognition model performance under glasses-wearing conditions according to claim 1, characterized in that the automatic glasses-adding algorithm for face images in step (1) comprises the following steps:
(1.1) using a face detection algorithm based on a cascaded convolutional network to locate the face position and facial key point positions of the face images in the training data;
(1.2) estimating the face pose angle from the facial key point positions;
(1.3) applying an angular transformation to the glasses material image according to the face pose angle;
(1.4) performing a pixel-level locally weighted sum of the transformed glasses material image and the face image to obtain a glasses-wearing face image.
3. The method for improving face recognition model performance under glasses-wearing conditions according to claim 2, characterized in that, when the pixel-level locally weighted sum of the transformed glasses material image and the face image is performed in step (1.4), the position of the glasses material image is perturbed in the vertical direction to obtain a plurality of glasses-wearing face images with different glasses positions.
4. The method for improving face recognition model performance under glasses-wearing conditions according to claim 3, characterized in that, in step (1.4), weighted sums are computed with a plurality of different weights to obtain a plurality of glasses-wearing face images with different lens reflection effects.
5. The method for improving face recognition model performance under glasses-wearing conditions according to claim 4, characterized in that step (2) comprises the following steps:
(2.1) labelling the expanded glasses-wearing face training data by class;
(2.2) building a face recognition model based on a residual deep convolutional network to extract deeper image features;
(2.3) establishing a classification loss function based on a softmax function with an enlarged inter-class margin to evaluate the classification error of the network;
(2.4) optimizing the classification loss function using error back-propagation and stochastic gradient descent;
(2.5) after multiple iterations of computation, the classification loss function decreases and converges, yielding the face recognition model.
6. The method for improving face recognition model performance under glasses-wearing conditions according to claim 5, characterized in that the labelling method in step (2.1) is: face images of the same person share the same label, which differs from the labels of other face images.
7. The method for improving face recognition model performance under glasses-wearing conditions according to claim 6, characterized in that the classification cost function in step (2.3) is:
L = -(1/n) · Σ_{i=1}^{n} log [ e^{s(cos θ_{y_i} - m)} / ( e^{s(cos θ_{y_i} - m)} + Σ_{j≠y_i} e^{s·cos θ_j} ) ]
where n is the total number of training samples, s is the L2 norm of the feature (the scale to which features are rescaled), m is the shift term, y_i is the class of sample i, θ_{y_i} is the angle between the feature vector x and the network weight vector W, the L2 norm of each weight vector W is normalized to 1, and x̂ is the normalized feature vector rescaled to length s.
8. A face recognition method, characterized in that: a face recognition model obtained by the method of any one of claims 1 to 7 is used to compute features of the face image to be recognized, and the similarity between these features and known face features is evaluated;
the known face with the highest similarity that also exceeds a threshold is taken as the recognition result;
if the similarities to all known faces are below the threshold, the face is judged to be an unknown face.
CN201810377373.9A 2018-04-25 2018-04-25 Method for improving face recognition model performance under glasses-wearing conditions Active CN108319943B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810377373.9A CN108319943B (en) 2018-04-25 2018-04-25 Method for improving face recognition model performance under glasses-wearing conditions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810377373.9A CN108319943B (en) 2018-04-25 2018-04-25 Method for improving face recognition model performance under glasses-wearing conditions

Publications (2)

Publication Number Publication Date
CN108319943A 2018-07-24
CN108319943B (en) 2021-10-12

Family

ID=62895232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810377373.9A Active CN108319943B (en) 2018-04-25 2018-04-25 Method for improving face recognition model performance under glasses-wearing conditions

Country Status (1)

Country Link
CN (1) CN108319943B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969139A (en) * 2019-12-11 2020-04-07 深圳市捷顺科技实业股份有限公司 Face recognition model training method and related device, face recognition method and related device
CN111008569A (en) * 2019-11-08 2020-04-14 浙江工业大学 Glasses detection method based on face semantic feature constraint convolutional network
CN111062328A (en) * 2019-12-18 2020-04-24 中新智擎科技有限公司 Image processing method and device and intelligent robot
CN111339810A (en) * 2019-04-25 2020-06-26 南京特沃斯高科技有限公司 Low-resolution large-angle face recognition method based on Gaussian distribution
EP3699813A1 (en) * 2019-02-19 2020-08-26 Fujitsu Limited Apparatus and method for training classification model and apparatus for classifying with classification model
CN111639545A (en) * 2020-05-08 2020-09-08 浙江大华技术股份有限公司 Face recognition method, device, equipment and medium
CN111695431A (en) * 2020-05-19 2020-09-22 深圳禾思众成科技有限公司 Face recognition method, face recognition device, terminal equipment and storage medium
CN111723755A (en) * 2020-07-19 2020-09-29 南京甄视智能科技有限公司 Optimization method and system of face recognition base
CN112819758A (en) * 2021-01-19 2021-05-18 武汉精测电子集团股份有限公司 Training data set generation method and device
CN113435226A (en) * 2020-03-23 2021-09-24 北京百度网讯科技有限公司 Information processing method and device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7227976B1 (en) * 2002-07-08 2007-06-05 Videomining Corporation Method and system for real-time facial image enhancement
WO2004040502A1 (en) * 2002-10-31 2004-05-13 Korea Institute Of Science And Technology Image processing method for removing glasses from color facial images
CN102332095A (en) * 2011-10-28 2012-01-25 中国科学院计算技术研究所 Face motion tracking method, face motion tracking system and method for enhancing reality
CN103400119A (en) * 2013-07-31 2013-11-20 南京融图创斯信息科技有限公司 Face recognition technology-based mixed reality spectacle interactive display method
CN104408402A (en) * 2014-10-29 2015-03-11 小米科技有限责任公司 Face identification method and apparatus
CN104809638A (en) * 2015-05-20 2015-07-29 成都通甲优博科技有限责任公司 Virtual glasses trying method and system based on mobile terminal
CN105184253A (en) * 2015-09-01 2015-12-23 北京旷视科技有限公司 Face identification method and face identification system
CN105354792A (en) * 2015-10-27 2016-02-24 深圳市朗形网络科技有限公司 Method for trying virtual glasses and mobile terminal
CN105825490A (en) * 2016-03-16 2016-08-03 北京小米移动软件有限公司 Gaussian blur method and device of image
CN107203752A (en) * 2017-05-25 2017-09-26 四川云图睿视科技有限公司 A kind of combined depth study and the face identification method of the norm constraint of feature two

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FENG WANG et al., "Additive Margin Softmax for Face Verification", IEEE Signal Processing Letters *
YI SUN et al., "Deep Convolutional Network Cascade for Facial Point Detection", 2013 IEEE Conference on Computer Vision and Pattern Recognition *
SONG Caifang (宋彩芳) et al., "Glasses-wearing face recognition based on two-dimensional supervised geodesic discriminant projection (基于二维有监督测地线判别投影的戴眼镜人脸识别)", Journal of Natural Science of Heilongjiang University (黑龙江大学自然科学学报) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3699813A1 (en) * 2019-02-19 2020-08-26 Fujitsu Limited Apparatus and method for training classification model and apparatus for classifying with classification model
US11113513B2 (en) 2019-02-19 2021-09-07 Fujitsu Limited Apparatus and method for training classification model and apparatus for classifying with classification model
CN111339810A (en) * 2019-04-25 2020-06-26 南京特沃斯高科技有限公司 Low-resolution large-angle face recognition method based on Gaussian distribution
CN111008569A (en) * 2019-11-08 2020-04-14 浙江工业大学 Glasses detection method based on face semantic feature constraint convolutional network
CN110969139A (en) * 2019-12-11 2020-04-07 深圳市捷顺科技实业股份有限公司 Face recognition model training method and related device, face recognition method and related device
CN111062328A (en) * 2019-12-18 2020-04-24 中新智擎科技有限公司 Image processing method and device and intelligent robot
CN111062328B (en) * 2019-12-18 2023-10-03 中新智擎科技有限公司 Image processing method and device and intelligent robot
CN113435226A (en) * 2020-03-23 2021-09-24 北京百度网讯科技有限公司 Information processing method and device
CN113435226B (en) * 2020-03-23 2022-09-16 北京百度网讯科技有限公司 Information processing method and device
CN111639545A (en) * 2020-05-08 2020-09-08 浙江大华技术股份有限公司 Face recognition method, device, equipment and medium
CN111639545B (en) * 2020-05-08 2023-08-08 浙江大华技术股份有限公司 Face recognition method, device, equipment and medium
CN111695431A (en) * 2020-05-19 2020-09-22 深圳禾思众成科技有限公司 Face recognition method, face recognition device, terminal equipment and storage medium
CN111723755A (en) * 2020-07-19 2020-09-29 南京甄视智能科技有限公司 Optimization method and system of face recognition base
CN112819758A (en) * 2021-01-19 2021-05-18 武汉精测电子集团股份有限公司 Training data set generation method and device

Also Published As

Publication number Publication date
CN108319943B (en) 2021-10-12

Similar Documents

Publication Publication Date Title
CN108319943A (en) A method of human face recognition model performance under the conditions of raising is worn glasses
CN109800648A (en) Face datection recognition methods and device based on the correction of face key point
CN109165566A (en) A kind of recognition of face convolutional neural networks training method based on novel loss function
CN101558431B (en) Face authentication device
CN107103281A (en) Face identification method based on aggregation Damage degree metric learning
CN101388080B (en) Passerby gender classification method based on multi-angle information fusion
CN108154159B (en) A kind of method for tracking target with automatic recovery ability based on Multistage Detector
CN107292339A (en) The unmanned plane low altitude remote sensing image high score Geomorphological Classification method of feature based fusion
CN104850825A (en) Facial image face score calculating method based on convolutional neural network
CN104077605A (en) Pedestrian search and recognition method based on color topological structure
CN108932479A (en) A kind of human body anomaly detection method
CN110298297A (en) Flame identification method and device
CN106407911A (en) Image-based eyeglass recognition method and device
CN102096823A (en) Face detection method based on Gaussian model and minimum mean-square deviation
CN110288555A (en) A kind of low-light (level) Enhancement Method based on improved capsule network
CN101814144A (en) Water-free bridge target identification method in remote sensing image
CN106778474A (en) 3D human body recognition methods and equipment
CN111611874A (en) Face mask wearing detection method based on ResNet and Canny
CN109558825A (en) A kind of pupil center's localization method based on digital video image processing
CN108537143B (en) A kind of face identification method and system based on key area aspect ratio pair
CN105138974B (en) A kind of multi-modal Feature fusion of finger based on Gabor coding
CN109190475A (en) A kind of recognition of face network and pedestrian identify network cooperating training method again
CN106599785A (en) Method and device for building human body 3D feature identity information database
CN106611158A (en) Method and equipment for obtaining human body 3D characteristic information
CN112669343A (en) Zhuang minority nationality clothing segmentation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant