CN110427888A - Face quality evaluation method based on feature clustering - Google Patents

Face quality evaluation method based on feature clustering

Info

Publication number
CN110427888A
CN110427888A (application CN201910715538.3A)
Authority
CN
China
Prior art keywords
face
data
feature
training
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910715538.3A
Other languages
Chinese (zh)
Inventor
袁培江
田波
王轶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenxing Technology Co Ltd
Original Assignee
Beijing Shenxing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenxing Technology Co Ltd filed Critical Beijing Shenxing Technology Co Ltd
Priority to CN201910715538.3A
Publication of CN110427888A
Legal status: Pending

Classifications

    • G06F 18/214 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 — Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06F 18/23 — Pattern recognition; Analysing; Clustering techniques
    • G06V 40/168 — Image or video recognition or understanding; Human faces, e.g. facial parts, sketches or expressions; Feature extraction; Face representation
    • G06V 40/172 — Image or video recognition or understanding; Human faces, e.g. facial parts, sketches or expressions; Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the technical field of computer vision and discloses a face quality evaluation method based on feature clustering, comprising the following steps: A. data preparation: the training data of the present invention uses the CMU Multi-PIE face database; other databases containing face photos of different face IDs at different angles, under different illumination and with different levels of sharpness may also be used; the Multi-PIE data contain more than 750,000 face photos of 337 IDs, covering different sessions, angles and illumination; B. data preprocessing: before the data are fed into training, they need to be preprocessed, one purpose of preprocessing being to enrich the number and variety of samples. The face quality evaluation of the present invention is applied to the screening performed before face comparison, and clustering by face features better matches the data form that face comparison requires; meanwhile, the method converts the similarity between the computed face feature and the ID-photo face feature into training labels for neural network training, without manual labeling; the process is simple and fast and can meet real-time requirements.

Description

Face quality evaluation method based on feature clustering
Technical field
The present invention relates to the technical field of computer vision, and in particular to a face quality evaluation method based on feature clustering.
Background art
With the development of computer technology, fields such as machine vision and neural networks have made great progress. Among them, face recognition technology has achieved great success and has been widely applied in many living scenes such as banks, mobile phones, shopping, campuses, subways, residential communities and public places. In a surveillance scene, a face system comprises a face capture module, a face quality assessment module, a face comparison module and a result output module. Because the scenes are complex, the quality of the face images captured in video surveillance is uneven, and blur, occlusion and pose variations of the face reduce the accuracy of face recognition. Therefore, in order to improve recognition accuracy, a method is generally adopted to judge the quality of the face image, and only faces whose quality meets the requirements are sent to the system for matching and recognition; in this way the false recognition rate of face recognition can be effectively reduced.
Existing face quality evaluation methods fall roughly into two classes: one class is based on conventional methods, and the other is based on deep learning. Conventional methods usually judge face quality from conditions such as face size, gradient and occlusion; the other class has emerged with the development of deep learning in recent years, and deep learning has begun to be applied to face quality assessment. Conventional methods require hand-designed features that must be verified repeatedly on large amounts of data, and face quality is judged by combining multiple criteria, so the accuracy of the quality judgment is limited by the judgment of each local module. Deep-learning-based face quality judgment methods require a large amount of annotated data; the indices for face quality annotation are numerous and the boundaries between them are blurry, so manual annotation is extremely difficult.
Therefore, a face quality evaluation method based on feature clustering is proposed.
Summary of the invention
The present invention provides a face quality evaluation method based on feature clustering, which solves the problems raised in the background art above.
To achieve the above object, the present invention provides the following technical solution: a face quality evaluation method based on feature clustering, comprising the following steps:
A. Data preparation: the training data of the present invention uses the CMU Multi-PIE face database; other databases containing face photos of different face IDs at different angles, under different illumination and with different levels of sharpness may also be used. The Multi-PIE data contain more than 750,000 face photos of 337 IDs, covering different sessions, different angles and different illumination.
B. Data preprocessing: before the data are fed into training, they need to be preprocessed. The purposes of preprocessing are, first, to enrich the number and variety of samples and, second, to avoid over-fitting.
C. Feature extraction: the face feature extraction algorithm uses the open-source face recognition algorithm; other face recognition methods may also be used. The ID photos of the 337 identities in the Multi-PIE data set (each person has 3-5 photos from different sessions) are passed through the face recognition algorithm to extract face features, which form the gallery. The gallery data are sent to feature extraction without passing through the image preprocessing module; all other data go through image preprocessing, and face features are extracted from the preprocessed images.
D. Training label generation: the similarity between the currently extracted face feature and the ID-photo feature of the same ID is computed, where similarity = 1 - distance between the two features. The resulting similarity between the two faces is divided into 10 classes, which serve as label 1 of the face image; the similarity value itself is label 2 of the image.
E. Face quality model training: the preprocessed data and the two generated labels are fed together into the deep network shown in Fig. 3. In Fig. 3, after the last convolution module 1, the network splits into two branches. One branch is used to regress the face quality score: this branch passes through a convolution module 2, then a global average pooling layer, and finally a sigmoid activation layer; its output and label 2 are fed into a Euclidean-distance loss. The other branch is used to cluster the features: it likewise passes through a convolution module 2, then a fully connected layer, and finally a softmax activation layer; its output and label 1 are fed into a cross-entropy loss.
F. Face quality evaluation: during face quality evaluation, two values output by the deep neural network are obtained: one is the quality score regressed by the network, and the other is the quality-class probability. The final face quality is determined by combining the two: score of the face = a*b*class score + (1-a*b)*face quality score, where a is a coefficient with value range [0,1], b is the probability of the most probable quality class of the face picture, and the class score is the value in [0,1] corresponding to each of the 10 classes.
Preferably, the data preprocessing methods used in step B mainly include brightness change, noise, Gaussian blur, motion blur, hue change, offset, occlusion and rotation.
Preferably, the similarity in step D is calculated using cosine distance.
The present invention has the following beneficial effects:
The face quality evaluation of the present invention is applied to the screening performed before face comparison, and clustering by face features better matches the data form that face comparison requires. Meanwhile, the method requires no manual labeling: the similarity between the computed face feature and the ID-photo face feature is converted into training labels for neural network training. The process is simple and fast and can meet real-time requirements.
Brief description of the drawings
Fig. 1 is a face recognition flow chart in the prior art;
Fig. 2 is a training flow chart of the face quality evaluation model of the present invention;
Fig. 3 is the network structure diagram of the present invention;
Fig. 4 is the flow chart of convolution module 1 of the present invention;
Fig. 5 is the flow chart of convolution module 2 of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Figs. 1-5, the present invention provides a technical solution: a face quality evaluation method based on feature clustering, comprising the following steps:
A. Data preparation: the training data of the present invention uses the CMU Multi-PIE face database; other databases containing face photos of different face IDs at different angles, under different illumination and with different levels of sharpness may also be used. The Multi-PIE data contain more than 750,000 face photos of 337 IDs, covering different sessions, different angles and different illumination.
B. Data preprocessing: before the data are fed into training, they need to be preprocessed. The purposes of preprocessing are, first, to enrich the number and variety of samples and, second, to avoid over-fitting.
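For illustration only, the following Python sketch shows how the perturbations named in step B (brightness change, noise, Gaussian blur, motion blur, rotation) could be implemented with OpenCV and NumPy; the parameter ranges and the random combination policy are assumptions and are not specified by the patent.

    # Illustrative preprocessing/augmentation sketch (assumed parameters).
    import cv2
    import numpy as np

    def random_brightness(img, max_delta=40):
        # Shift brightness by a random offset.
        delta = np.random.uniform(-max_delta, max_delta)
        return np.clip(img.astype(np.float32) + delta, 0, 255).astype(np.uint8)

    def gaussian_noise(img, sigma=10.0):
        # Add zero-mean Gaussian noise.
        noise = np.random.normal(0.0, sigma, img.shape)
        return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    def gaussian_blur(img):
        k = int(np.random.choice([3, 5, 7]))
        return cv2.GaussianBlur(img, (k, k), 0)

    def motion_blur(img, ksize=9):
        # Horizontal motion-blur kernel.
        kernel = np.zeros((ksize, ksize), dtype=np.float32)
        kernel[ksize // 2, :] = 1.0 / ksize
        return cv2.filter2D(img, -1, kernel)

    def random_rotation(img, max_angle=20):
        h, w = img.shape[:2]
        angle = float(np.random.uniform(-max_angle, max_angle))
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(img, M, (w, h))

    def augment(img):
        # Apply a random subset of the perturbations listed in step B.
        ops = [random_brightness, gaussian_noise, gaussian_blur, motion_blur, random_rotation]
        for op in np.random.choice(ops, size=2, replace=False):
            img = op(img)
        return img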
C. Feature extraction: the face feature extraction algorithm uses the open-source face recognition algorithm; other face recognition methods may also be used. The ID photos of the 337 identities in the Multi-PIE data set (each person has 3-5 photos from different sessions) are passed through the face recognition algorithm to extract face features, which form the gallery. The gallery data are sent to feature extraction without passing through the image preprocessing module; all other data go through image preprocessing, and face features are extracted from the preprocessed images.
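The construction of the gallery described in step C could look like the following sketch, using the open-source face_recognition Python package; the directory layout (one sub-folder of ID photos per identity) is an assumption made for illustration.

    # Sketch of gallery construction from each identity's ID photos (no preprocessing applied).
    import os
    import numpy as np
    import face_recognition

    def build_gallery(id_photo_root):
        # Returns {person_id: array of 128-d face descriptors from that person's 3-5 ID photos}.
        gallery = {}
        for person_id in sorted(os.listdir(id_photo_root)):
            person_dir = os.path.join(id_photo_root, person_id)
            feats = []
            for fname in sorted(os.listdir(person_dir)):
                image = face_recognition.load_image_file(os.path.join(person_dir, fname))
                encodings = face_recognition.face_encodings(image)
                if encodings:
                    feats.append(encodings[0])
            if feats:
                gallery[person_id] = np.stack(feats)
        return gallery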
D. Training label generation: the similarity between the currently extracted face feature and the ID-photo feature of the same ID is computed, where similarity = 1 - distance between the two features. The resulting similarity between the two faces is divided into 10 classes, which serve as label 1 of the face image; the similarity value itself is label 2 of the image.
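A sketch of the label generation of step D is given below, using the cosine distance of the preferred embodiment; how the 3-5 ID-photo features of one person are aggregated (here, the best match) is an assumption.

    # Sketch of step D: quantized similarity class (label 1) and raw similarity (label 2).
    import numpy as np

    def cosine_similarity(f1, f2):
        return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

    def make_labels(probe_feat, id_photo_feats, num_classes=10):
        # similarity = 1 - distance; with cosine distance this is the cosine similarity itself.
        sims = [cosine_similarity(probe_feat, g) for g in id_photo_feats]
        similarity = max(sims)                        # best match against the person's ID photos
        similarity = min(max(similarity, 0.0), 1.0)   # clamp to [0, 1]
        label1 = min(int(similarity * num_classes), num_classes - 1)  # class index 0..9
        label2 = similarity                           # regression target
        return label1, label2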
E. Face quality model training: the preprocessed data and the two generated labels are fed together into the deep network shown in Fig. 3. In Fig. 3, after the last convolution module 1, the network splits into two branches. One branch is used to regress the face quality score: this branch passes through a convolution module 2, then a global average pooling layer, and finally a sigmoid activation layer; its output and label 2 are fed into a Euclidean-distance loss. The other branch is used to cluster the features: it likewise passes through a convolution module 2, then a fully connected layer, and finally a softmax activation layer; its output and label 1 are fed into a cross-entropy loss.
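A minimal PyTorch sketch of the two-branch network described in step E is given below; the backbone layers, channel widths and the 1-unit projection in the regression branch are assumptions, since the patent only names the modules and activations.

    # Two-branch face quality network sketch (assumed layer sizes).
    import torch
    import torch.nn as nn

    class FaceQualityNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # stand-in for the shared stack of "convolution module 1" blocks
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # regression branch: convolution module 2 -> global average pooling -> sigmoid
            self.reg_conv = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
            self.reg_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                          nn.Linear(64, 1), nn.Sigmoid())
            # clustering branch: convolution module 2 -> fully connected layer (softmax is in the loss)
            self.cls_conv = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
            self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                          nn.Linear(64, num_classes))

        def forward(self, x):
            feat = self.backbone(x)
            quality = self.reg_head(self.reg_conv(feat)).squeeze(1)  # compared with label 2
            logits = self.cls_head(self.cls_conv(feat))              # compared with label 1
            return quality, logits

    # Losses as described in step E: Euclidean (MSE) loss for the score,
    # cross-entropy loss for the class (nn.CrossEntropyLoss applies softmax internally).
    mse_loss = nn.MSELoss()
    ce_loss = nn.CrossEntropyLoss()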
F. Face quality evaluation: during face quality evaluation, two values output by the deep neural network are obtained: one is the quality score regressed by the network, and the other is the quality-class probability. The final face quality is determined by combining the two: score of the face = a*b*class score + (1-a*b)*face quality score, where a is a coefficient with value range [0,1], b is the probability of the most probable quality class of the face picture, and the class score is the value in [0,1] corresponding to each of the 10 classes.
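The fusion of the two network outputs in step F can be written as the short sketch below; the example value of the coefficient a and the mapping of the class index onto a class score in [0,1] are assumptions.

    # Sketch of step F: score = a*b*class_score + (1 - a*b)*quality_score.
    import numpy as np

    def final_quality(quality_score, class_probs, a=0.5, num_classes=10):
        b = float(np.max(class_probs))          # probability of the most likely quality class
        k = int(np.argmax(class_probs))         # winning class index (0..9)
        class_score = k / (num_classes - 1)     # map the 10 classes onto [0, 1]
        return a * b * class_score + (1.0 - a * b) * quality_score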
Further, the data preprocessing methods used in step B mainly include brightness change, noise, Gaussian blur, motion blur, hue change, offset, occlusion and rotation.
Further, the similarity in step D is calculated using cosine distance.
It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or device.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, replacements and variations can be made to these embodiments without departing from the principles and spirit of the present invention, and the scope of the present invention is defined by the appended claims and their equivalents.

Claims (3)

1. A face quality evaluation method based on feature clustering, characterized by comprising the following steps:
A. Data preparation: the training data of the present invention uses the CMU Multi-PIE face database; other databases containing face photos of different face IDs at different angles, under different illumination and with different levels of sharpness may also be used; the Multi-PIE data contain more than 750,000 face photos of 337 IDs, covering different sessions, different angles and different illumination;
B. Data preprocessing: before the data are fed into training, they need to be preprocessed; the purposes of preprocessing are, first, to enrich the number and variety of samples and, second, to avoid over-fitting;
C. Feature extraction: the face feature extraction algorithm uses the open-source face recognition algorithm; other face recognition methods may also be used; the ID photos of the 337 identities in the Multi-PIE data set (each person has 3-5 photos from different sessions) are passed through the face recognition algorithm to extract face features, which form the gallery; the gallery data are sent to feature extraction without passing through the image preprocessing module; all other data go through image preprocessing, and face features are extracted from the preprocessed images;
D. Training label generation: the similarity between the currently extracted face feature and the ID-photo feature of the same ID is computed, where similarity = 1 - distance between the two features; the resulting similarity between the two faces is divided into 10 classes, which serve as label 1 of the face image; the similarity value itself is label 2 of the image;
E. Face quality model training: the preprocessed data and the two generated labels are fed together into the deep network shown in Fig. 3; in Fig. 3, after the last convolution module 1, the network splits into two branches; one branch is used to regress the face quality score: this branch passes through a convolution module 2, then a global average pooling layer, and finally a sigmoid activation layer, and its output and label 2 are fed into a Euclidean-distance loss; the other branch is used to cluster the features: it likewise passes through a convolution module 2, then a fully connected layer, and finally a softmax activation layer, and its output and label 1 are fed into a cross-entropy loss;
F. Face quality evaluation: during face quality evaluation, two values output by the deep neural network are obtained: one is the quality score regressed by the network, and the other is the quality-class probability; the final face quality is determined by combining the two: score of the face = a*b*class score + (1-a*b)*face quality score, where a is a coefficient with value range [0,1], b is the probability of the most probable quality class of the face picture, and the class score is the value in [0,1] corresponding to each of the 10 classes.
2. The face quality evaluation method based on feature clustering according to claim 1, characterized in that: the data preprocessing methods used in step B mainly include brightness change, noise, Gaussian blur, motion blur, hue change, offset, occlusion, rotation, etc.
3. The face quality evaluation method based on feature clustering according to claim 1, characterized in that: the similarity in step D is calculated using cosine distance.
CN201910715538.3A 2019-08-05 2019-08-05 Face quality evaluation method based on feature clustering Pending CN110427888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910715538.3A CN110427888A (en) 2019-08-05 2019-08-05 Face quality evaluation method based on feature clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910715538.3A CN110427888A (en) 2019-08-05 2019-08-05 Face quality evaluation method based on feature clustering

Publications (1)

Publication Number Publication Date
CN110427888A true CN110427888A (en) 2019-11-08

Family

ID=68412520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910715538.3A Pending CN110427888A (en) Face quality evaluation method based on feature clustering

Country Status (1)

Country Link
CN (1) CN110427888A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160086015A1 (en) * 2007-01-09 2016-03-24 Si Corporation Method and system for automated face detection and recognition
US20140044348A1 (en) * 2011-02-18 2014-02-13 National Ict Australia Limited Image quality assessment
CN104408736A (en) * 2014-12-12 2015-03-11 西安电子科技大学 Characteristic-similarity-based synthetic face image quality evaluation method
CN106897748A (en) * 2017-03-02 2017-06-27 上海极链网络科技有限公司 Face method for evaluating quality and system based on deep layer convolutional neural networks
CN107133601A (en) * 2017-05-13 2017-09-05 五邑大学 A kind of pedestrian's recognition methods again that network image super-resolution technique is resisted based on production
CN107292813A (en) * 2017-05-17 2017-10-24 浙江大学 A kind of multi-pose Face generation method based on generation confrontation network
CN107273510A (en) * 2017-06-20 2017-10-20 广东欧珀移动通信有限公司 Photo recommends method and Related product
CN107609493A (en) * 2017-08-25 2018-01-19 广州视源电子科技股份有限公司 Method and device for optimizing human face image quality evaluation model
CN107704806A (en) * 2017-09-01 2018-02-16 深圳市唯特视科技有限公司 A kind of method that quality of human face image prediction is carried out based on depth convolutional neural networks
CN107832802A (en) * 2017-11-23 2018-03-23 北京智芯原动科技有限公司 Quality of human face image evaluation method and device based on face alignment
CN108765407A (en) * 2018-05-31 2018-11-06 上海依图网络科技有限公司 A kind of portrait picture quality determination method and device
CN108960087A (en) * 2018-06-20 2018-12-07 中国科学院重庆绿色智能技术研究院 A kind of quality of human face image appraisal procedure and system based on various dimensions evaluation criteria
CN109117797A (en) * 2018-08-17 2019-01-01 浙江捷尚视觉科技股份有限公司 A kind of face snapshot recognition method based on face quality evaluation
CN109544523A (en) * 2018-11-14 2019-03-29 北京智芯原动科技有限公司 Quality of human face image evaluation method and device based on more attribute face alignments

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
JAVIER HERNANDEZ-ORTEGA et al.: "FaceQnet: Quality Assessment for Face Recognition based on Deep Learning", arXiv *
LACEY BEST-ROWDEN et al.: "Automatic Face Image Quality Prediction", arXiv *
RALPH GROSS et al.: "Multi-PIE", preprint submitted to Image and Vision Computing *
VISHAL AGARWAL: "Deep Face Quality Assessment", arXiv *
YEZHOU LI et al.: "Image quality assessment using deep convolutional networks", AIP Advances *
李秋珍 et al.: "Face image quality evaluation based on convolutional neural networks" (基于卷积神经网络的人脸图像质量评价), 计算机应用 (Journal of Computer Applications) *
程换新 et al.: "Face image quality assessment based on transfer learning" (基于迁移学习的人脸图像质量评估), 电子测量技术 (Electronic Measurement Technology) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401344A (en) * 2020-06-04 2020-07-10 腾讯科技(深圳)有限公司 Face recognition method and device and training method and device of face recognition system
CN111401344B (en) * 2020-06-04 2020-09-29 腾讯科技(深圳)有限公司 Face recognition method and device and training method and device of face recognition system
CN111696090A (en) * 2020-06-08 2020-09-22 电子科技大学 Method for evaluating quality of face image in unconstrained environment
CN112215822A (en) * 2020-10-13 2021-01-12 北京中电兴发科技有限公司 Face image quality evaluation method based on lightweight regression network
CN112948612A (en) * 2021-03-16 2021-06-11 杭州海康威视数字技术股份有限公司 Human body cover generation method and device, electronic equipment and storage medium
CN112948612B (en) * 2021-03-16 2024-02-06 杭州海康威视数字技术股份有限公司 Human body cover generation method and device, electronic equipment and storage medium
CN114155589A (en) * 2021-11-30 2022-03-08 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114155589B (en) * 2021-11-30 2023-08-08 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN115512427A (en) * 2022-11-04 2022-12-23 北京城建设计发展集团股份有限公司 User face registration method and system combined with matched biopsy
CN115512427B (en) * 2022-11-04 2023-04-25 北京城建设计发展集团股份有限公司 User face registration method and system combined with matched biopsy
CN118197609A (en) * 2024-05-17 2024-06-14 大连百首企家科技有限公司 Anesthesia and analgesia effect evaluation method based on facial expression analysis

Similar Documents

Publication Publication Date Title
CN110427888A (en) Face quality evaluation method based on feature clustering
CN108520216B (en) Gait image-based identity recognition method
CN106875381A (en) A kind of phone housing defect inspection method based on deep learning
CN108537136A (en) The pedestrian's recognition methods again generated based on posture normalized image
CN107506702A (en) Human face recognition model training and test system and method based on multi-angle
CN107967695A (en) A kind of moving target detecting method based on depth light stream and morphological method
CN103295009B (en) Based on the license plate character recognition method of Stroke decomposition
CN112766218B (en) Cross-domain pedestrian re-recognition method and device based on asymmetric combined teaching network
CN110472652A (en) A small amount of sample classification method based on semanteme guidance
CN108960201A (en) A kind of expression recognition method extracted based on face key point and sparse expression is classified
CN113298018A (en) False face video detection method and device based on optical flow field and facial muscle movement
CN113205107A (en) Vehicle type recognition method based on improved high-efficiency network
CN106295532A (en) A kind of human motion recognition method in video image
CN107220598A (en) Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
CN110503078A (en) A kind of remote face identification method and system based on deep learning
CN111639580A (en) Gait recognition method combining feature separation model and visual angle conversion model
CN111738178A (en) Wearing mask facial expression recognition method based on deep learning
CN114842524A (en) Face false distinguishing method based on irregular significant pixel cluster
CN116030396A (en) Accurate segmentation method for video structured extraction
CN109165542A (en) Pedestrian detection method based on simplified convolutional neural network
Tong et al. Research on face recognition method based on deep neural network
CN112488165A (en) Infrared pedestrian identification method and system based on deep learning model
CN111950452A (en) Face recognition method
CN117036412A (en) Twin network infrared pedestrian target tracking method integrating deformable convolution
CN109145744B (en) LSTM network pedestrian re-identification method based on self-adaptive prediction mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191108)