WO2016145940A1 - Face authentication method and device - Google Patents

Face authentication method and device

Info

Publication number
WO2016145940A1
Authority
WO
WIPO (PCT)
Prior art keywords
local
face
vector
face image
image sample
Prior art date
Application number
PCT/CN2016/070828
Other languages
English (en)
Chinese (zh)
Inventor
郇淑雯
朱和贵
张祥德
Original Assignee
北京天诚盛业科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京天诚盛业科技有限公司
Publication of WO2016145940A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features

Definitions

  • The present invention relates to the field of image processing and pattern recognition, and in particular to a face authentication method and apparatus.
  • As people pay more attention to the security and confidentiality of information, identity authentication technology has become increasingly important.
  • Traditional identity authentication methods, such as ID cards and passwords, are cumbersome to use; their biggest disadvantage is that they cannot distinguish between the owner and an impostor.
  • Biometrics are inherent to human beings, unique, and difficult to replicate, so the study of biometrics is of great significance.
  • The advantages of face recognition include: no need for user cooperation; non-contact acquisition; simple device implementation; and traceability. Because of these advantages, face recognition has a wide range of application scenarios, with important application value in social life, e-commerce, financial institutions, household registration management, and other fields.
  • Face authentication is a form of recognition: by effectively characterizing the face, the features of two face images are obtained, and a classification algorithm is used to determine whether the two photos show the same person.
  • The face authentication process is mainly divided into three parts: face detection, feature extraction and authentication. Since the face is a three-dimensional deformable model and face authentication operates on photos taken under a camera imaging model, the authentication result is easily affected by external factors such as illumination, posture, expression, and occlusion.
  • Face recognition technology involves many interdisciplinary subjects such as pattern recognition, statistical learning, machine vision, applied mathematics and information science, and its wide application prospects have received increasing attention.
  • The more mature face authentication methods in the prior art include: face authentication based on geometric features, face authentication based on holistic features, and methods based on local features. All three are easily affected by factors such as illumination, posture, and expression, and their authentication accuracy is low.
  • The technical problem to be solved by the present invention is to provide a face authentication method and apparatus with strong anti-interference ability and high accuracy.
  • To this end, the present invention provides the following technical solutions:
  • A face authentication method includes: acquiring a pair of face image samples; blocking the acquired face image sample pair; calculating a local gray feature vector and a local histogram of oriented gradients (HOG) vector for the obtained blocks; and
  • determining, according to the calculated local gray feature vector and the local HOG vector, whether the face image sample pair belongs to the same person.
  • A face authentication device includes:
  • an acquisition module, configured to acquire a pair of face image samples;
  • a blocking module, configured to block the acquired face image sample pair;
  • a calculation module, configured to calculate a local gray feature vector and a local histogram of oriented gradients (HOG) vector for the obtained blocks; and
  • a judging module, configured to determine, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belongs to the same person.
  • The face authentication method of the present invention first acquires a pair of face image samples; the acquired face image sample pair is then blocked, and this blocking captures local information of the face image and reduces the computational difficulty of image data processing. A local gray feature vector and a local histogram of oriented gradients (HOG) vector are then calculated for the obtained blocks: the gray feature of an image is a main parameter for image feature recognition, the local features of a face are stable, and HOG operates on local cells of the image and maintains good invariance to geometric and photometric deformation, so the method effectively avoids interference from illumination, expression, age and similar factors.
  • Finally, whether the face image sample pair belongs to the same person is determined on the basis of the calculated local gray feature vector and local HOG vector.
  • The face authentication method of the invention therefore effectively avoids interference from factors such as illumination, expression and age, and also significantly improves the accuracy of face authentication.
  • FIG. 1 is a first schematic flowchart of a face authentication method according to the present invention.
  • FIG. 2 is a schematic flowchart of image gray-scale difference feature extraction according to the present invention.
  • FIG. 3 is a schematic flowchart of face authentication performed by the softmax regression model of the present invention.
  • FIG. 4 is a second schematic flowchart of a face authentication method according to the present invention.
  • FIG. 5 is a first schematic structural diagram of a face authentication device according to the present invention.
  • FIG. 6 is a second schematic structural diagram of a face authentication device according to the present invention.
  • The present invention provides a face authentication method, as shown in FIG. 1, comprising:
  • Step S101: acquiring a pair of face image samples;
  • Face image sample pair: in the process of face authentication, a face image detected in real time is compared 1:1 with a face image in an existing database. This process is called authentication, and the two face images processed in the comparison (i.e., a real-time image and a library image) are called a face sample pair.
  • The image sample pair may be composed of a face image collected in real time and a face image stored in a database, of two face images stored in a database, or even of two face images collected in real time.
  • Step S102: blocking the acquired face image sample pair;
  • The blocking of the face image may adopt various methods known to those skilled in the art, such as a discrete non-overlapping blocking method, a sliding-window traversal blocking method, and the like.
  • Step S103: calculating a local gray feature vector and a local histogram of oriented gradients (HOG) vector for the obtained blocks;
  • Calculating the local gray feature vector and the HOG vector effectively reduces the influence of illumination, expression, age and other factors on the result and improves the accuracy of face authentication.
  • Step S104: determining, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belongs to the same person.
  • The method first acquires a pair of face image samples; the obtained face image sample pair is then blocked, and this blocking captures local information of the face image, thereby reducing the computational difficulty of image data processing.
  • A local gray feature vector and a local histogram of oriented gradients (HOG) vector are then calculated for the obtained blocks. The gray feature of an image is a main parameter for image feature recognition, the local features of a face are stable, and HOG operates on local cells of the image and maintains invariance to geometric and photometric deformation, so the method effectively avoids interference from illumination, expression and age.
  • The accuracy of the face authentication result is thereby effectively improved. Finally, according to the calculated local gray feature vector and the local HOG vector, whether the face image sample pair belongs to the same person is determined.
  • The face authentication method of the invention effectively avoids interference from factors such as illumination, expression and age, and also significantly improves the accuracy of face authentication.
  • Further, step S102 may be: performing multi-scale, overlapping blocking on the acquired pair of face image samples.
  • The multi-scale blocking strategy extracts local information at different scales, thereby compensating for the information loss caused by single-scale features.
  • The method also uses an overlapping blocking strategy to segment the face image. This further reduces the influence of illumination, expression and other factors on the authentication result, and effectively enhances the accuracy of face authentication.
  • Different multi-scale and overlapping blocking strategies may be adopted for different images.
  • For example, the step size can be set to 5 at the 10*10 scale, to 10 at the 20*20 scale, and to 15 at the 30*30 scale.
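  • As an illustration only, the following Python/NumPy sketch shows one way such multi-scale, overlapping blocking could be implemented with the example scales and step sizes above; the function names and the test image size are assumptions for this sketch, not part of the claimed method.

```python
import numpy as np

def overlapping_blocks(image, block_size, step):
    """Slide a block_size x block_size window over the image with the given step."""
    h, w = image.shape
    return [image[y:y + block_size, x:x + block_size]
            for y in range(0, h - block_size + 1, step)
            for x in range(0, w - block_size + 1, step)]

def multiscale_blocks(image, scales=((10, 5), (20, 10), (30, 15))):
    """Map each (block_size, step) scale to its list of overlapping blocks."""
    return {size: overlapping_blocks(image, size, step) for size, step in scales}

# Hypothetical usage: both images of a sample pair are blocked with the same
# strategy, so their blocks can later be compared one-to-one.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print({size: len(blocks) for size, blocks in multiscale_blocks(img).items()})
```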
  • Further, step S103 may be: for the obtained blocks, calculating the cosine distance between the local gray vectors as the gray feature, and calculating the cosine distance between the local HOG vectors as the HOG feature.
  • An integrated vector is formed by combining the calculated local gray-vector cosine distances and local HOG-vector cosine distances.
  • The calculation of the local gray-vector cosine distance and the local HOG-vector cosine distance is the same for face images at different scales.
  • The same blocking strategy is applied at the same scale, so that the same number of sub-images is obtained.
  • For example, the image sub-matrices a', b' in the lower-right dashed frame at the 4*4 scale, and likewise the image block in the upper-left frame of image 1, are extracted from image 1 and image 2 at the same position, and the cosine distance between the gray vectors of each pair of corresponding blocks is computed.
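  • As a hedged illustration of this local gray feature, the following sketch compares the flattened gray vectors of two corresponding blocks by cosine distance; the helper names and the numeric block values are made up for this example.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    """Cosine of the angle between two flattened feature vectors."""
    u = u.ravel().astype(np.float64)
    v = v.ravel().astype(np.float64)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

# Hypothetical corresponding blocks a', b' taken from the same position of the
# two images of a sample pair (values invented for the example).
a = np.array([[120, 118, 119, 121],
              [122, 119, 120, 118],
              [121, 120, 119, 122],
              [118, 121, 120, 119]])
b = a + 5  # a slightly brighter version of the same patch
print(cosine_similarity(a, b))  # close to 1.0 for similar gray structure
```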
  • The HOG used in the present invention is a feature defined on the statistics of gradient direction and magnitude in a rectangular region of an image; it can represent the edge or gradient structure of the target in the local region well, and thus characterizes the shape of the target.
  • The specific HOG calculation process can be as follows:
  • A(x, y) represents the gray value of the image at the pixel point (x, y); G_x(x, y) and G_y(x, y) represent the horizontal and vertical gradients of the image at the point (x, y).
  • The gradient magnitude at the point (x, y) is defined as: G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}.
  • The gradient direction at the point (x, y) is defined as: \alpha(x, y) = \arctan\big(G_y(x, y) / G_x(x, y)\big).
  • The gradient directions over [0, \pi] are divided into nine bins, and the gradient magnitude of each pixel is accumulated into the bin corresponding to its direction: B_k = \sum_{(x, y):\, \alpha(x, y) \in \mathrm{bin}_k} G(x, y).
  • B_k denotes the k-th component of the histogram in the region of interest, and B = (B_1, ..., B_9) is the gradient histogram vector of the region of interest, which characterizes the gradient structure within that region.
  • The blocking method extracts local gradient histogram difference features; that is, the gradient histogram vectors of the corresponding blocks of the face sample pair to be authenticated are calculated respectively, and the cosine distance between the two is then computed as the similarity measure of the image gradient distributions.
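  • The following sketch illustrates, under the standard definitions above, how a per-block gradient histogram and the cosine distance between the histograms of two corresponding blocks could be computed; the simple central-difference gradient and the unsigned nine-bin quantization are implementation assumptions that the description does not fix.

```python
import numpy as np

def block_hog(block, num_bins=9):
    """Gradient histogram of one block: central-difference gradients, magnitude
    G(x, y), unsigned direction in [0, pi) quantized into num_bins bins, and
    magnitudes accumulated per bin (the vector B above)."""
    a = block.astype(np.float64)
    gx = np.zeros_like(a)
    gy = np.zeros_like(a)
    gx[:, 1:-1] = a[:, 2:] - a[:, :-2]        # horizontal gradient G_x
    gy[1:-1, :] = a[2:, :] - a[:-2, :]        # vertical gradient G_y
    mag = np.sqrt(gx ** 2 + gy ** 2)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned direction in [0, pi)
    bins = np.minimum((ang / np.pi * num_bins).astype(int), num_bins - 1)
    hist = np.zeros(num_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

def hog_block_similarity(block_a, block_b, eps=1e-12):
    """Cosine distance between the gradient histograms of two corresponding blocks."""
    ha, hb = block_hog(block_a), block_hog(block_b)
    return float(np.dot(ha, hb) / (np.linalg.norm(ha) * np.linalg.norm(hb) + eps))

block1 = np.random.randint(0, 256, (20, 20))
block2 = np.random.randint(0, 256, (20, 20))
print(hog_block_similarity(block1, block2))
```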
  • A method of comparing the pair of face image samples by combining local gray features and local HOG features is thus proposed, which not only mitigates, to some extent, the influence of expression, age, illumination and other factors on the authentication result, but also allows the face authentication process to be implemented accurately.
  • The method is also superior to the prior art in the time and space complexity of the algorithm.
  • Further, step S104 may be: processing the calculated local gray feature vector and the local HOG vector with an artificial neural network classification model based on softmax regression, to determine whether the face image sample pair belongs to the same person.
  • Alternatively, the calculated local gray feature vector and local HOG vector may be processed by an SVM algorithm, a naive Bayes algorithm, a Gaussian-process-based classification algorithm, the AdaBoost algorithm or the k-nearest neighbor (KNN) algorithm to determine whether the pair of face image samples belongs to the same person.
  • The artificial neural network algorithm has good training and learning ability: the extracted face feature vector is used as the input of the artificial neural network, the network parameters are then trained to obtain the face recognition classifier, and face authentication is thereby completed.
  • In the present invention, the softmax model for the two-class case is used.
  • The specific algorithm of the softmax regression model can be described as follows:
  • Softmax regression is a method for multi-class classification problems. Without loss of generality, the value of the label (that is, the expected output) y is taken from {1, ..., K} (K > 2).
  • FIG. 3 is a neural network representation of the softmax regression model, where 1 is the gray difference vector at the first scale, 2 is the gray difference vector at the second scale, and 3 is the HOG difference vector at the last scale;
  • 4 is the probability of being the same person, and 5 is the probability of not being the same person. When 4 is greater than 5, the obtained face image sample pair is judged to belong to the same person; when 4 is less than 5, the obtained face image sample pair is judged not to belong to the same person.
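  • For illustration, the sketch below trains a minimal two-class softmax regression classifier by batch gradient descent on made-up difference-feature vectors and then compares the two output probabilities as described for FIG. 3; the feature dimension, learning rate and toy data are assumptions, and the actual network structure and training procedure of the invention are not reproduced here.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)            # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax(X, y, num_classes=2, lr=0.1, epochs=500):
    """Batch gradient descent for a softmax regression classifier.
    X: (n_samples, n_features) difference-feature vectors; y: integer labels."""
    n, d = X.shape
    W, b = np.zeros((d, num_classes)), np.zeros(num_classes)
    Y = np.eye(num_classes)[y]                      # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)
        W -= lr * (X.T @ (P - Y)) / n
        b -= lr * (P - Y).mean(axis=0)
    return W, b

def same_person(x, W, b):
    """True when the 'same person' probability exceeds the 'not same person' one."""
    p = softmax(x.reshape(1, -1) @ W + b)[0]
    return p[1] > p[0], p

# Toy data: label 1 = same person, label 0 = different person (made-up features).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.1, (50, 6)), rng.normal(0.8, 0.1, (50, 6))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_softmax(X, y)
print(same_person(rng.normal(0.8, 0.1, 6), W, b))
```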
  • Before step S102, the method may further include step S105: performing alignment processing on the face image sample pair, and performing normalization and stretching processing on the aligned face image sample pair.
  • The face image sample pairs are aligned in this step; the face alignment may adopt various methods known to those skilled in the art, such as a human-eye positioning algorithm, an Active Shape Model (ASM) algorithm, or an Active Appearance Model (AAM) algorithm.
  • The aligned face image sample pairs are then normalized and stretched. The normalization and stretching of the face image may be performed by various methods known to those skilled in the art, such as image mean-variance normalization with gray-scale hyperbolic tangent stretching, or maximum-minimum normalization with gray-scale stretching based on a piecewise linear transformation function.
  • Further, step S105 is:
  • aligning the face image sample pair by using a face alignment algorithm based on human-eye positioning; and
  • performing mean-variance normalization and gray-scale hyperbolic tangent stretching on the aligned face image sample pair.
  • The core idea of the alignment algorithm based on human-eye localization is to locate the two eye positions in the face region with a positioning algorithm and then to transform the image to a uniform size by translation, scaling and rotation, so that the eyes of the two images being compared are fixed at the same positions in the image. The specific steps are as follows:
  • Step 1) Locate the two eyes in the face region; the coordinates of the left eye are (x1, y1), and the coordinates of the right eye are (x2, y2);
  • Step 2) Translate, scale and rotate each image so that the two eyes are mapped to fixed positions in an image of uniform size;
  • Step 3) The two images obtained in step 2) constitute a new face sample pair, and the subsequent operations of this method are performed on the new face sample pair.
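  • A minimal sketch of such an eye-based alignment is given below: a similarity transform (rotation, scaling, translation) is built from the detected eye coordinates so that both eyes land on fixed target positions; the target positions, output size and the use of OpenCV for the warp are assumptions of this sketch.

```python
import numpy as np
import cv2  # OpenCV, used here only for the affine warp

def align_by_eyes(image, left_eye, right_eye,
                  target_left=(30, 40), target_right=(70, 40), out_size=(100, 100)):
    """Build a similarity transform (rotation + scale + translation) that maps the
    detected eye coordinates onto fixed target positions, then warp the image."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    (tx1, ty1), (tx2, ty2) = target_left, target_right

    angle = np.arctan2(y2 - y1, x2 - x1)                      # current eye-line angle
    target_angle = np.arctan2(ty2 - ty1, tx2 - tx1)
    scale = np.hypot(tx2 - tx1, ty2 - ty1) / np.hypot(x2 - x1, y2 - y1)
    rot = target_angle - angle

    c, s = scale * np.cos(rot), scale * np.sin(rot)
    # p' = scale * R * p + t, with t chosen so the left eye lands on its target.
    M = np.array([[c, -s, tx1 - (c * x1 - s * y1)],
                  [s,  c, ty1 - (s * x1 + c * y1)]])
    return cv2.warpAffine(image, M, out_size)

# Both images of a sample pair are aligned the same way, so the eyes end up at
# identical positions before blocking and feature extraction (coordinates invented).
img = np.random.randint(0, 256, (120, 120), dtype=np.uint8)
print(align_by_eyes(img, left_eye=(45, 60), right_eye=(82, 58)).shape)
```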
  • The currently obtained face sample images are affected by illumination at acquisition time. The method reduces the influence of illumination to some extent by performing mean-variance normalization and gray-scale hyperbolic tangent stretching on the image.
  • The image mean-variance normalization formula is: g(i, j) = (f(i, j) - mean(f)) / \sqrt{var(f)},
  • where g(i, j) represents the gray value at (i, j) after normalization, f(i, j) represents the gray value at (i, j) before normalization, mean(f) represents the average gray value of the image before normalization, and var(f) represents the variance of the gray values of the image before normalization.
  • Gray-scale hyperbolic tangent stretching is performed after the image mean-variance normalization operation.
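  • The sketch below applies the mean-variance normalization formula given above followed by a hyperbolic tangent stretching; the exact tanh mapping and the steepness parameter alpha are assumptions, since the description does not give the stretching formula explicitly.

```python
import numpy as np

def mean_variance_normalize(f):
    """g(i, j) = (f(i, j) - mean(f)) / sqrt(var(f)): zero-mean, unit-variance image."""
    f = f.astype(np.float64)
    return (f - f.mean()) / (np.sqrt(f.var()) + 1e-12)

def tanh_stretch(g, alpha=0.5):
    """Gray-scale hyperbolic tangent stretching applied after the normalization;
    alpha (assumed) controls the steepness, output mapped into [0, 1]."""
    return 0.5 * (np.tanh(alpha * g) + 1.0)

img = np.random.randint(0, 256, (32, 32)).astype(np.float64)
out = tanh_stretch(mean_variance_normalize(img))
print(round(out.min(), 3), round(out.max(), 3))
```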
  • The face alignment algorithm based on human-eye positioning, the mean-variance normalization algorithm and the gray-scale hyperbolic tangent stretching algorithm adopted in the present invention are all basic algorithms in mathematics, so the present invention can be implemented without complicated and lengthy mathematical derivation.
  • The present invention also provides a face authentication device, as shown in FIG. 5, comprising:
  • an obtaining module 11, configured to acquire a pair of face image samples;
  • a blocking module 12, configured to block the acquired face image sample pair;
  • a calculating module 13, configured to calculate a local gray feature vector and a local histogram of oriented gradients (HOG) vector for the obtained blocks; and
  • a judging module 14, configured to determine, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belongs to the same person.
  • In the device, the obtaining module 11 first acquires a face image sample pair; the blocking module 12 then blocks the acquired face image sample pair, and this blocking captures local information of the face image and reduces the computational difficulty of image data processing; the calculating module 13 then calculates the local gray feature vector and the local histogram of oriented gradients (HOG) vector for the obtained blocks. Since the gray feature of an image is a main parameter for image feature recognition, the local features of a face are stable, and HOG operates on local cells of the image and maintains good invariance to geometric and photometric deformation, the invention effectively avoids interference from factors such as illumination, expression and age, which effectively improves the accuracy of the face authentication result. Finally, the judging module 14 determines, according to the calculated local gray feature vector and the local HOG vector, whether the pair of face image samples belongs to the same person.
  • The face authentication device of the invention effectively avoids interference from factors such as illumination, expression and age, and also significantly improves the accuracy of face authentication.
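  • Purely to show how the four modules could fit together, the toy sketch below composes acquisition, blocking, calculation and judging in one class; it is deliberately simplified to a single scale, gray features only, and a fixed similarity threshold in place of the trained softmax classifier, so it is not the device of the invention.

```python
import numpy as np

class FaceAuthenticator:
    """Toy composition of the four modules (acquisition, blocking, calculation,
    judging); single scale, gray features only, and a fixed threshold instead of
    the trained classifier, purely to show how the modules fit together."""

    def __init__(self, block_size=20, step=10, threshold=0.95):
        self.block_size, self.step, self.threshold = block_size, step, threshold

    def _blocks(self, img):                                   # blocking module
        h, w = img.shape
        return [img[y:y + self.block_size, x:x + self.block_size]
                for y in range(0, h - self.block_size + 1, self.step)
                for x in range(0, w - self.block_size + 1, self.step)]

    def _cosine(self, a, b):                                  # calculation module
        a, b = a.ravel().astype(float), b.ravel().astype(float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def authenticate(self, img1, img2):                       # acquisition + judging
        sims = [self._cosine(a, b)
                for a, b in zip(self._blocks(img1), self._blocks(img2))]
        return float(np.mean(sims)) > self.threshold          # stand-in for the classifier

auth = FaceAuthenticator()
img = np.random.randint(0, 256, (60, 60), dtype=np.uint8)
print(auth.authenticate(img, img))  # identical images -> True
```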
  • Further, the blocking module 12 may be configured to perform multi-scale, overlapping blocking on the acquired pair of face image samples.
  • When external images are received, information at different scales is processed, and local images at different scales may contain different edges and other details. Therefore, a multi-scale blocking strategy is used to extract local information at different scales, compensating for the information loss caused by single-scale features. At the same time, in order to extract sufficient image features, the present invention also uses an overlapping blocking strategy to segment the face image.
  • This not only reduces, to a certain extent, the influence of illumination, expression and other factors on the authentication result, but also effectively enhances the accuracy of the system for face authentication.
  • Further, the calculating module 13 is configured to calculate, for the obtained blocks, the cosine distance between the local gray vectors as the gray feature and the cosine distance between the local HOG vectors as the HOG feature.
  • The invention thus compares face image sample pairs by combining local gray features and local HOG features, which not only mitigates, to some extent, the influence of expression, age, illumination and other factors on the authentication result, but also allows the face authentication process to be implemented accurately.
  • The present invention is also superior to the prior art in the time and space complexity of the algorithm.
  • Further, the judging module 14 is configured to process the calculated local gray feature vector and local HOG vector with an artificial neural network classification model based on softmax regression, to determine whether the face image sample pair belongs to the same person.
  • Alternatively, the judging module 14 may process the calculated local gray feature vector and local HOG vector with an SVM algorithm, naive Bayes, a Gaussian-process-based classification algorithm, the AdaBoost algorithm or the k-nearest neighbor (KNN) algorithm, among others, to determine whether the face image sample pair belongs to the same person.
  • The artificial neural network algorithm based on the softmax regression model in the present invention is a mathematical model that simulates the way the human brain thinks and has learning, memory and error-correction capabilities; in the present invention it serves to improve the accuracy of face authentication. The extracted face feature vector is used as the input of the artificial neural network, the network parameters are then trained to obtain the face recognition classifier, and face authentication is thereby completed.
  • Further, a pre-processing module 15 is connected between the obtaining module 11 and the blocking module 12, as shown in FIG. 6, and is configured to perform alignment processing on the face image sample pair and to normalize and stretch the aligned face image sample pair.
  • The face image sample pairs are aligned by this module; at the same time, in order to strengthen the consistency of the face image sample pair and avoid the influence of factors such as illumination, direction and noise, the aligned face image sample pairs are normalized and stretched.
  • Further, the pre-processing module 15 is configured to perform alignment processing on the face image sample pair by using a face alignment algorithm based on human-eye positioning, and to perform mean-variance normalization and gray-scale hyperbolic tangent stretching on the aligned face image sample pair.
  • The face alignment algorithm based on human-eye positioning, the mean-variance normalization algorithm and the gray-scale hyperbolic tangent stretching algorithm adopted in the present invention are all basic algorithms in mathematics, so the present invention can be implemented without complicated and lengthy mathematical derivation.
  • The invention uses local gray-scale differences and local gradient histogram differences to obtain a difference representation of the face, and uses multi-scale features to describe the face difference more comprehensively.
  • The invention exploits the robustness of local texture information to illumination, posture and expression, and the time and space complexity of the algorithm is also low.
  • On the four sub-libraries Fb, Fc, DupI and DupII, authentication rates of 97.91%, 73.20%, 63.71% and 49.57% were achieved, respectively (at a false match rate of 0.1%).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The present invention belongs to the field of image processing and pattern recognition, and relates to a face authentication method and device. The method comprises the steps of: acquiring a pair of face image samples; blocking the acquired pair of face image samples; calculating, from the obtained blocks, a local gray feature vector and a local histogram of oriented gradients (HOG) vector; and calculating and determining, from the calculated local gray feature vector and local HOG vector, whether the pair of face image samples belongs to the same person. The face authentication method and device of the present invention can effectively eliminate interfering factors such as lighting, expressions and age, and remarkably improve the accuracy of face recognition.
PCT/CN2016/070828 2015-03-19 2016-01-13 Face authentication method and device WO2016145940A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510121748.1 2015-03-19
CN201510121748.1A CN105447441B (zh) 2015-03-19 2015-03-19 人脸认证方法和装置

Publications (1)

Publication Number Publication Date
WO2016145940A1 (fr) 2016-09-22

Family

ID=55557601

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/070828 WO2016145940A1 (fr) 2016-01-13 Face authentication method and device

Country Status (2)

Country Link
CN (1) CN105447441B (fr)
WO (1) WO2016145940A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520215A (zh) * 2018-03-28 2018-09-11 电子科技大学 基于多尺度联合特征编码器的单样本人脸识别方法
CN110059551A (zh) * 2019-03-12 2019-07-26 五邑大学 一种基于图像识别的饭菜自动结账系统
CN110472460A (zh) * 2018-05-11 2019-11-19 北京京东尚科信息技术有限公司 人脸图像处理方法及装置
CN111860454A (zh) * 2020-08-04 2020-10-30 北京深醒科技有限公司 一种基于人脸识别的模型切换算法
CN112069875A (zh) * 2020-07-17 2020-12-11 北京百度网讯科技有限公司 人脸图像的分类方法、装置、电子设备和存储介质
CN112446247A (zh) * 2019-08-30 2021-03-05 北京大学 一种基于多特征融合的低光照人脸检测方法及低光照人脸检测网络
CN112861100A (zh) * 2021-02-08 2021-05-28 北京百度网讯科技有限公司 身份验证方法、装置、设备以及存储介质
CN113705462A (zh) * 2021-08-30 2021-11-26 平安科技(深圳)有限公司 人脸识别方法、装置、电子设备及计算机可读存储介质

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102547820B1 (ko) * 2016-07-11 2023-06-27 삼성전자주식회사 복수의 생체 인증기들을 이용한 사용자 인증 방법 및 그 장치
CN110099601A (zh) * 2016-10-14 2019-08-06 费森瑟有限公司 检测呼吸参数并提供生物反馈的系统和方法
CN107392182B (zh) * 2017-08-17 2020-12-04 宁波甬慧智能科技有限公司 一种基于深度学习的人脸采集识别方法及装置
CN107657216A (zh) * 2017-09-11 2018-02-02 安徽慧视金瞳科技有限公司 基于干扰特征向量数据集的1比1人脸特征向量比对方法
CN107818299A (zh) * 2017-10-17 2018-03-20 内蒙古科技大学 基于融合hog特征和深度信念网络的人脸识别算法
EP3698268A4 (fr) 2017-11-22 2021-02-17 Zhejiang Dahua Technology Co., Ltd. Procédés et systèmes de reconnaissance faciale
CN107992807B (zh) * 2017-11-22 2020-10-30 浙江大华技术股份有限公司 一种基于cnn模型的人脸识别方法及装置
CN108234770B (zh) * 2018-01-03 2020-11-03 京东方科技集团股份有限公司 一种辅助化妆系统、辅助化妆方法、辅助化妆装置
CN109002832B (zh) * 2018-06-11 2021-11-19 湖北大学 一种基于分层特征提取的图像识别方法
CN110866435B (zh) * 2019-08-13 2023-09-12 广州三木智能科技有限公司 一种自相似性梯度朝向直方图的远红外行人训练方法
CN111539271B (zh) * 2020-04-10 2023-05-02 哈尔滨新光光电科技股份有限公司 基于可穿戴设备的人脸识别方法及用于边防的可穿戴人脸检测设备


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6826300B2 (en) * 2001-05-31 2004-11-30 George Mason University Feature based classification
KR100438841B1 (ko) * 2002-04-23 2004-07-05 삼성전자주식회사 이용자 검증 및 데이터 베이스 자동 갱신 방법, 및 이를이용한 얼굴 인식 시스템
CN102622588B (zh) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 双验证人脸防伪方法及装置
CN102646190B (zh) * 2012-03-19 2018-05-08 深圳市腾讯计算机系统有限公司 一种基于生物特征的认证方法、装置及系统
CN103440478B (zh) * 2013-08-27 2016-08-10 电子科技大学 一种基于hog特征的人脸检测方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120087552A1 (en) * 2010-10-08 2012-04-12 Micro-Star Int'l Co., Ltd. Facial recognition method for eliminating the effect of noise blur and environmental variations
CN102663413A (zh) * 2012-03-09 2012-09-12 中盾信安科技(江苏)有限公司 一种面向多姿态和跨年龄的人脸图像认证方法
CN103810490A (zh) * 2014-02-14 2014-05-21 海信集团有限公司 一种确定人脸图像的属性的方法和设备
CN103914686A (zh) * 2014-03-11 2014-07-09 辰通智能设备(深圳)有限公司 一种基于证件照与采集照的人脸比对认证方法及系统
CN104077580A (zh) * 2014-07-15 2014-10-01 中国科学院合肥物质科学研究院 一种基于深信度网络的害虫图像自动识别方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
RUAN, XIAOHU ET AL.: "AN ASSESSMENT METHOD FOR FACE ALIGNMENT BASED ON FEATURE MATCHING", CAAI TRANSACTIONS ON INTELLIGENT SYSTEMS, vol. 10, no. 1, 28 February 2015 (2015-02-28) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520215A (zh) * 2018-03-28 2018-09-11 电子科技大学 基于多尺度联合特征编码器的单样本人脸识别方法
CN110472460A (zh) * 2018-05-11 2019-11-19 北京京东尚科信息技术有限公司 人脸图像处理方法及装置
CN110059551A (zh) * 2019-03-12 2019-07-26 五邑大学 一种基于图像识别的饭菜自动结账系统
CN112446247A (zh) * 2019-08-30 2021-03-05 北京大学 一种基于多特征融合的低光照人脸检测方法及低光照人脸检测网络
CN112446247B (zh) * 2019-08-30 2022-11-15 北京大学 一种基于多特征融合的低光照人脸检测方法及低光照人脸检测网络
CN112069875A (zh) * 2020-07-17 2020-12-11 北京百度网讯科技有限公司 人脸图像的分类方法、装置、电子设备和存储介质
CN112069875B (zh) * 2020-07-17 2024-05-28 北京百度网讯科技有限公司 人脸图像的分类方法、装置、电子设备和存储介质
CN111860454B (zh) * 2020-08-04 2024-02-09 北京深醒科技有限公司 一种基于人脸识别的模型切换算法
CN111860454A (zh) * 2020-08-04 2020-10-30 北京深醒科技有限公司 一种基于人脸识别的模型切换算法
CN112861100A (zh) * 2021-02-08 2021-05-28 北京百度网讯科技有限公司 身份验证方法、装置、设备以及存储介质
CN112861100B (zh) * 2021-02-08 2023-09-05 北京百度网讯科技有限公司 身份验证方法、装置、设备以及存储介质
CN113705462B (zh) * 2021-08-30 2023-07-14 平安科技(深圳)有限公司 人脸识别方法、装置、电子设备及计算机可读存储介质
CN113705462A (zh) * 2021-08-30 2021-11-26 平安科技(深圳)有限公司 人脸识别方法、装置、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN105447441A (zh) 2016-03-30
CN105447441B (zh) 2019-03-29

Similar Documents

Publication Publication Date Title
WO2016145940A1 (fr) Face authentication method and device
WO2021120752A1 (fr) Procédé et dispositif d'apprentissage de modèle auto-adaptatif basé sur une région, procédé et dispositif de détection d'image, et appareil et support
Gou et al. A joint cascaded framework for simultaneous eye detection and eye state estimation
CN104517104B (zh) 一种基于监控场景下的人脸识别方法及系统
Gunay et al. Automatic age classification with LBP
WO2016150240A1 (fr) Procédé et appareil d'authentification d'identité
Gou et al. Learning-by-synthesis for accurate eye detection
US20100111375A1 (en) Method for Determining Atributes of Faces in Images
Liu et al. Local histogram specification for face recognition under varying lighting conditions
Zhao et al. Cascaded shape space pruning for robust facial landmark detection
Huang et al. A component-based framework for generalized face alignment
CN105760815A (zh) 基于第二代身份证人像和视频人像的异构人脸核实方法
Yu et al. Improvement of face recognition algorithm based on neural network
Rukhiran et al. Effecting of environmental conditions to accuracy rates of face recognition based on IoT solution
Kwaśniewska et al. Face detection in image sequences using a portable thermal camera
CN110969101A (zh) 一种基于hog和特征描述子的人脸检测与跟踪方法
Kwaśniewska et al. Real-time facial feature tracking in poor quality thermal imagery
Hemasree et al. Facial Skin Texture and Distributed Dynamic Kernel Support Vector Machine (DDKSVM) Classifier for Age Estimation in Facial Wrinkles.
Aliradi et al. A novel descriptor (LGBQ) based on Gabor filters
Thomas et al. Real Time Face Mask Detection and Recognition using Python
Li et al. 3D face recognition by constructing deformation invariant image
CN108171750A (zh) 基于视觉的箱子装卸定位识别系统
Martinez et al. Facial landmarking for in-the-wild images with local inference based on global appearance
Arya et al. An Efficient Face Detection and Recognition Method for Surveillance
Ayodele et al. Development of a modified local Binary Pattern-Gabor Wavelet transform aging invariant face recognition system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16764122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16764122

Country of ref document: EP

Kind code of ref document: A1