WO2018041237A1 - Face verification method, apparatus, and storage medium (人脸验证的方法、装置以及存储介质) - Google Patents

Face verification method, apparatus, and storage medium

Info

Publication number
WO2018041237A1
WO2018041237A1 (PCT/CN2017/100070)
Authority
WO
WIPO (PCT)
Prior art keywords
glasses
daily
face
document
photo
Prior art date
Application number
PCT/CN2017/100070
Other languages
English (en)
French (fr)
Inventor
梁亦聪
李季檩
汪铖杰
丁守鸿
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2018041237A1
Priority to US16/208,183 (US10922529B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V 10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computer Security & Cryptography (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Geometry (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a face verification method, including: obtaining an ID photo of a specified object and a daily photo of the specified object; performing glasses-region recognition on the daily photo of the specified object using a glasses segmentation model to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses; if the recognition result indicates that the area of the glasses region is greater than a first threshold, extracting a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models; and if the matching degree between the daily face feature and the ID face feature is greater than a second threshold, determining that the specified object passes verification. The face verification method provided by the embodiments of this application effectively compares ID photos with photos of people wearing glasses, improving the convenience of face verification.

Description

Face verification method, apparatus, and storage medium
This application claims priority to Chinese Patent Application No. 201610796542.3, entitled "Face verification method and apparatus" and filed with the Chinese Patent Office on August 31, 2016, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of machine recognition, and in particular to a face verification method, apparatus, and storage medium.
Background
A face verification algorithm can fully automatically verify two face photos and determine whether they show the same person. This can be used to verify users' identities by face in many scenarios such as Internet finance. Identity verification mostly compares the user's ID photo with a photo of the user taken on the spot by a camera, and confirms whether they are the same person by comparing the features of the two photos.
In recent years the nearsighted population has grown year by year, and as glasses increasingly serve as a fashion accessory, more and more users wear them. ID photos, however, must be taken with glasses removed. Accurately determining whether a photo with glasses and an ID photo without glasses show the same person is therefore of ever greater significance.
Summary
To solve the problem in the prior art that an ID photo cannot be effectively compared with a photo of a person wearing glasses, embodiments of this application provide a face verification method that effectively compares ID photos with photos of people wearing glasses and improves the convenience of face verification. Embodiments of this application also provide a corresponding apparatus and storage medium.
A first aspect of this application provides a face verification method, including:
obtaining an ID photo of a specified object and a daily photo of the specified object;
performing glasses-region recognition on the daily photo of the specified object using a glasses segmentation model to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses;
if the recognition result indicates that the area of the glasses region is greater than a first threshold, extracting a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models;
if the matching degree between the daily face feature and the ID face feature is greater than a second threshold, determining that the specified object passes verification.
A second aspect of this application provides a face verification apparatus, including one or more processors and a storage medium storing operation instructions. When the operation instructions in the storage medium are run, the processors perform the following steps:
obtaining an ID photo of a specified object and a daily photo of the specified object;
performing glasses-region recognition on the obtained daily photo of the specified object using a glasses segmentation model to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses;
if the recognition result indicates that the area of the glasses region is greater than a first threshold, extracting a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models;
verifying the extracted daily face feature against the ID face feature, and if the matching degree between the two is greater than a second threshold, determining that the specified object passes verification.
A third aspect of this application provides a computer-readable storage medium storing program instructions for performing the above method.
Compared with the prior art, in which an ID photo cannot be effectively compared with a photo of a person wearing glasses, the face verification method provided by the embodiments of this application uses the corresponding verification model to extract face features from a face photo with glasses and compares them with the face features in the ID photo, thereby effectively comparing ID photos with photos of people wearing glasses and improving the convenience of face verification.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of this application; a person skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an embodiment of a face verification system in an embodiment of this application;
FIG. 2 is a schematic diagram of an embodiment of a face verification method in an embodiment of this application;
FIG. 3 is a schematic diagram of the process of generating a glasses segmentation model in an embodiment of this application;
FIG. 4 is a schematic diagram of the process of generating a document-face verification model in an embodiment of this application;
FIG. 5 is a schematic diagram of the process of generating a document-glasses face verification model in an embodiment of this application;
FIG. 6 is another schematic diagram of the process of generating a document-glasses face verification model in an embodiment of this application;
FIG. 7 is a schematic diagram of an embodiment of the face verification process in an embodiment of this application;
FIG. 8 is a schematic diagram of an embodiment of a face verification apparatus in an embodiment of this application;
FIG. 9 is a schematic diagram of an embodiment of a server in an embodiment of this application.
Detailed Description
Embodiments of this application provide a face verification method that effectively compares ID photos with photos of people wearing glasses and improves the convenience of face verification. Embodiments of this application also provide a corresponding apparatus. Each is described in detail below.
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person skilled in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The ID photo in the embodiments of this application may be a photo used to prove identity on any of various documents, such as an identity card photo, a social security card photo, a passport photo, a driver's license photo, or an entry permit photo.
The specified object may be the person handling the corresponding business in any of various business scenarios, that is, the person presenting the document.
In real life, many scenarios such as financial services, airport security checks, and customs clearance require checking whether the photo on a document and the person presenting it are the same person. Today this check is done by staff with the naked eye, which consumes a great deal of labor. Moreover, the photos on documents are taken without glasses, while in daily life many people are nearsighted and many like to wear glasses as an accessory, making it difficult for staff in these scenarios to quickly verify whether the photo on the document and the presenter are the same person. Verifying the document photo against the person presenting it by machine would therefore be of great significance.
Accordingly, an embodiment of this application provides a face verification system. As shown in FIG. 1, the face verification system includes a first image collector 10, a second image collector 20, a server 30, and a network 40. Both the first image collector 10 and the second image collector 20 may be cameras. The first image collector 10 captures the face image of the person presenting the document, that is, takes a current photo of the person; the second image collector 20 captures the ID photo on the document 50. The first image collector 10 and the second image collector 20 then transmit the person's current photo and ID photo to the server 30 through the network 40, and the server 30 verifies the current photo against the ID photo. The server 30 treats the person's current photo as the daily photo of the specified object and the person's ID photo as the ID photo of the specified object. Of course, the first image collector 10 may instead capture the ID photo on the document while the second image collector 20 takes the person's current photo; the embodiments of this application do not limit the specific roles of the first image collector 10 and the second image collector 20.
The server 30's face verification process for the ID photo of the specified object and the daily photo of the specified object can be understood with reference to FIG. 2. As shown in FIG. 2, an embodiment of the face verification method provided by the embodiments of this application includes:
101. Obtain an ID photo of a specified object and a daily photo of the specified object.
102. Perform glasses-region recognition on the daily photo of the specified object using a glasses segmentation model to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses.
The server needs to determine, through region recognition, whether the person in the daily photo of the specified object is wearing glasses: the parts of the face behind opaque components such as the rims, temples, and nose pads of the glasses are occluded, and strong reflections off the lenses may also leave partially occluded regions on the face. The server can therefore determine whether the person in the daily photo is wearing glasses by recognizing the regions that may be occluded by glasses.
Even without glasses, the region around the eyes may be occluded for other reasons, but such occlusion is usually small in area. A first threshold can therefore be set in advance: if the occluded area is smaller than the first threshold, the person in the daily photo can be considered not to be wearing glasses; if the occluded area is larger than the first threshold, the person can be considered to be wearing glasses.
Glasses-region recognition uses a glasses segmentation model, which is obtained by convolutional neural network (CNN) training on multiple annotated daily photos of faces with glasses, in which the regions occluded by glasses have been marked.
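As a concrete illustration of this decision rule, here is a minimal Python sketch; it assumes the segmentation model outputs a binary occlusion mask as a NumPy array, and the threshold value is a placeholder rather than a figure from the patent:

```python
import numpy as np

def wears_glasses(occlusion_mask: np.ndarray, first_threshold: int) -> bool:
    """Return True if the occluded area exceeds the first threshold.

    occlusion_mask: H x W array with 1 where the segmentation model marks a
    pixel as occluded by glasses and 0 elsewhere.
    first_threshold: area threshold in pixels (a tunable system parameter).
    """
    occluded_area = int(occlusion_mask.sum())  # occluded area in pixels
    return occluded_area > first_threshold

# Example: a 256 x 256 mask with a 40 x 120 occluded band around the eyes.
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:140, 60:180] = 1
print(wears_glasses(mask, first_threshold=2000))  # True, since 4800 > 2000
```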
103. If the recognition result indicates that the area of the glasses region is greater than the first threshold, extract a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models.
If the recognition result indicates that the area of the glasses region is greater than the first threshold, the person in the daily photo is wearing glasses.
The verification models in the embodiments of this application include a document-face verification model and a document-glasses face verification model.
Because the person in the daily photo has been determined to be wearing glasses, the document-glasses face verification model must be used to extract the daily face feature from the daily photo of the specified object. Both the document-face verification model and the document-glasses face verification model can extract the ID face feature, so either model may be used for the ID photo.
The document-face verification model is obtained by CNN training on multiple daily face photos without glasses and the ID photos of the person objects in the same object set, where the same object set is the set of person objects corresponding to the daily face photos without glasses.
The document-glasses face verification model is obtained by performing feature-regression CNN adjustment on the document-face verification model using a glasses-region-occluded photo set and a glasses-free daily face feature set. The glasses-region-occluded photo set is obtained by comparing a glasses-free daily photo set with a daily photo set with glasses, determining the occlusion region corresponding to glasses in each photo of the glasses-free daily photo set, and occluding that region. The glasses-free daily face feature set is obtained by extracting features from each photo in the glasses-free daily photo set using the document-face verification model.
104. If the matching degree between the daily face feature and the ID face feature is greater than a second threshold, the specified object passes verification.
The matching degree refers to how similar the daily face feature of the specified object is to the ID face feature; this degree of similarity may also be called the similarity. The similarity may be computed using the Euclidean distance, the cosine distance, the joint Bayesian method, or a metric learning method.
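The two distance-based options can be written down directly. The sketch below uses plain NumPy, with the feature dimension and the threshold chosen arbitrarily for illustration; the joint Bayesian and metric-learning options would replace these functions with learned scoring models:

```python
import numpy as np

def euclidean_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    """Euclidean distance between two feature vectors (smaller = more similar)."""
    return float(np.linalg.norm(f1 - f2))

def cosine_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors (larger = more similar)."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

daily_feature = np.random.rand(256)  # stand-in for the daily face feature
id_feature = np.random.rand(256)     # stand-in for the ID face feature
second_threshold = 0.8               # placeholder matching threshold
passes = cosine_similarity(daily_feature, id_feature) > second_threshold
```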
Compared with the prior art, in which an ID photo cannot be effectively compared with a photo of a person wearing glasses, the face verification method provided by the embodiments of this application uses the corresponding verification model to extract face features from a face photo with glasses and compares them with the face features in the ID photo, thereby effectively comparing ID photos with photos of people wearing glasses and improving the convenience of face verification.
The above embodiment mentions the glasses segmentation model, the document-face verification model, and the document-glasses face verification model. Their training processes are described below with reference to the accompanying drawings.
As shown in FIG. 3, FIG. 3 is a schematic diagram of the process of generating the glasses segmentation model in an embodiment of this application.
201. Collect multiple daily face photos of different people wearing glasses to build "glasses face photo dataset 1".
202. For each face photo in "glasses face photo dataset 1", manually annotate the region occluded by glasses.
The occluded region may include the face regions behind opaque parts such as the rims, temples, and nose pads, as well as face regions occluded by strong reflections off the lenses.
203. Through the annotation in step 202, obtain the annotated glasses face dataset.
204. Use the "annotated glasses face dataset" for glasses segmentation CNN training.
Glasses segmentation CNN training involves convolution layers, batch-normalization layers, deconvolution layers, and so on (a code sketch follows below); the training objective is to make the number of mispredicted pixels, compared with the annotation, as small as possible. For the definitions of the convolution, normalization, and deconvolution layers and the CNN training procedure, refer to a deep neural network training framework.
205. Through the training in step 204, obtain the glasses segmentation model.
The glasses segmentation model provided by the embodiments of this application makes it possible to recognize daily face photos of people wearing glasses, thereby enabling effective comparison of ID photos with photos of people wearing glasses and improving the convenience of face verification.
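The patent names the layer types and the pixel-error objective of step 204 but not a concrete architecture, so the following PyTorch sketch is only one plausible encoder-decoder realization; every size and depth, and the loss choice (per-pixel cross-entropy as a proxy for "as few wrong pixels as possible"), is an illustrative assumption:

```python
import torch
import torch.nn as nn

class GlassesSegNet(nn.Module):
    """Toy encoder-decoder: convolution + batch-normalization downsampling,
    deconvolution upsampling. Outputs a 2-class (occluded / not occluded)
    score map per pixel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),   # H/2
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # H/4
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # H/2
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),   # H
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = GlassesSegNet()
criterion = nn.CrossEntropyLoss()  # penalizes every mispredicted pixel
images = torch.randn(4, 3, 128, 128)          # batch of face photos
labels = torch.randint(0, 2, (4, 128, 128))   # annotated occlusion masks
loss = criterion(model(images), labels)
loss.backward()
```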
The document-face verification model in the embodiments of this application is introduced below with reference to FIG. 4. As shown in FIG. 4, the process of generating the document-face verification model may include:
301. Collect multiple daily face photos without glasses to build "face photo dataset 2".
302. Collect the ID face photos of the people appearing in "face photo dataset 2" to build "ID photo dataset 1".
303. Use "ID photo dataset 1" and "face photo dataset 2" for face verification CNN training (one possible setup is sketched below).
304. Through the CNN training in step 303, obtain the document-face verification model.
The document-face verification model provided by the embodiments of this application can recognize ID photos and daily face photos without glasses, and can quickly recognize face photos without glasses, improving the speed of photo recognition.
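The patent does not spell out the loss or architecture used for "face verification CNN training" in step 303. A common setup, shown here purely as an assumed sketch, is a shared embedding network trained with a contrastive loss over (ID photo, daily photo) pairs; the network sizes and the margin are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbedder(nn.Module):
    """Toy CNN mapping a face photo to a unit-length feature vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def contrastive_loss(f_id, f_daily, same, margin=1.0):
    """Pull features of the same person together, push others apart."""
    d = (f_id - f_daily).pow(2).sum(1).sqrt()
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

net = FaceEmbedder()
id_photos = torch.randn(8, 3, 112, 112)      # batch from "ID photo dataset 1"
daily_photos = torch.randn(8, 3, 112, 112)   # batch from "face photo dataset 2"
same_person = torch.randint(0, 2, (8,)).float()  # pair labels
loss = contrastive_loss(net(id_photos), net(daily_photos), same_person)
loss.backward()
```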
The document-glasses face verification model in the embodiments of this application is described below with reference to FIG. 5. As shown in FIG. 5, the process of generating the document-glasses face verification model may include:
401. Collect multiple selfie face photos, without glasses, of multiple people to build "glasses-free daily photo set 3".
402. Use the document-face verification model to extract a face feature from each photo in "glasses-free daily photo set 3", obtaining "glasses-free daily face feature set 3".
403. Occlude each face photo in "glasses-free daily photo set 3" to obtain the glasses-region-occluded photo set.
For each face photo A in "glasses-free daily photo set 3", find a face photo B in "glasses face photo dataset 1" whose eye positions are close to those of A. Denote by C the region in A corresponding to the manually annotated glasses-occluded region in B, and set the pixels of A at the location of C to pure gray pixels with gray value 128.
Performing this "occlusion synthesis" on every photo in "glasses-free daily photo set 3" yields the glasses-region-occluded photo set.
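As a sketch of this occlusion-synthesis step (assuming the annotated region of B has already been mapped onto A as a binary mask after eye alignment, which is omitted here):

```python
import numpy as np

def synthesize_occlusion(photo_a: np.ndarray, region_c: np.ndarray) -> np.ndarray:
    """Set the glasses-occluded region of photo A to pure gray (value 128).

    photo_a:  H x W x 3 uint8 daily photo without glasses.
    region_c: H x W bool mask, True where photo B's annotated glasses region
              falls once mapped onto A (the mapping is assumed done elsewhere).
    """
    occluded = photo_a.copy()
    occluded[region_c] = 128  # pure gray pixels, as described in step 403
    return occluded

photo = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 60:180] = True
gray_blocked = synthesize_occlusion(photo, mask)
```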
404. Use the "glasses-region-occluded photo set" and "glasses-free daily face feature set 3" to perform "feature-regression CNN adjustment" on the document-face verification model, obtaining the document-glasses face verification model.
Feature-regression CNN adjustment works as follows: use the document-face verification model to extract a first face feature from each face photo in the "glasses-region-occluded photo set", take the corresponding second face feature from "glasses-free daily face feature set 3", determine the Euclidean distance between the first and second face features, and adjust the document-face verification model according to that distance. The adjustment may fine-tune the document-face verification model through its convolution layers, normalization layers, deconvolution layers, and the deep neural network; the goal of the adjustment is to make the Euclidean distance between the first and second face features extracted via the model as small as possible. The adjustment may be repeated many times; the model that minimizes the Euclidean distance between the first and second face features is the desired document-glasses face verification model.
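Feature-regression adjustment can be pictured as the following minimal PyTorch loop. The stand-in model, learning rate, and iteration count are illustrative assumptions; in the actual procedure the starting point is the pretrained document-face verification model rather than a fresh network:

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained document-face verification model (illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # small lr: fine-tune

occluded_photos = torch.randn(8, 3, 112, 112)  # glasses-region-occluded set
target_features = torch.randn(8, 128)          # glasses-free daily features

for step in range(100):  # repeated adjustment, as the patent describes
    optimizer.zero_grad()
    predicted = model(occluded_photos)
    # Objective: make the Euclidean distance between the feature extracted
    # from the occluded photo (first feature) and the stored glasses-free
    # feature of the same photo (second feature) as small as possible.
    loss = torch.norm(predicted - target_features, dim=1).mean()
    loss.backward()
    optimizer.step()
```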
The document-glasses face verification model provided by the embodiments of this application can recognize face photos of people wearing glasses, improving the convenience of face verification.
For a more intuitive understanding of how the document-glasses face verification model is generated, a brief description follows with reference to FIG. 6:
As shown in FIG. 6, the collection in step 401 of FIG. 5 builds "glasses-free daily photo set 3" 6A, and the "document-face verification model" 6B then extracts features from "glasses-free daily photo set 3" 6A to obtain "glasses-free daily face feature set 3" 6C. For each face photo A in "glasses-free daily photo set 3" 6A, a face photo B whose eye positions are close to those of A is found in "glasses face photo dataset 1" 6D; the region in A corresponding to the manually annotated glasses-occluded region in B is denoted C, and the pixels of A at the location of C are set to pure gray pixels with gray value 128. Performing this "occlusion synthesis" on all photos in "glasses-free daily photo set 3" 6A yields the glasses-region-occluded photo set 6E. The "glasses-region-occluded photo set" 6E and "glasses-free daily face feature set 3" 6C are then used to perform "feature-regression CNN adjustment" on the document-face verification model, obtaining the "document-glasses face verification model" 6F.
The above describes how the several models are generated. Once these models have been generated, they can be used for face verification.
The face verification process can be understood with reference to FIG. 7. As shown in FIG. 7, the document-face verification process provided by the embodiments of this application may include:
411. Obtain a daily photo of a specified object.
412. Perform glasses-region recognition on the daily photo of the specified object using the glasses segmentation model.
413. Determine whether the area occluded by glasses is greater than the threshold; if not, perform 414; if so, perform 415.
414. When the area occluded by glasses is not greater than the threshold, the specified object in the photo is not wearing glasses, and the daily photo is verified using the document-face verification model.
415. When the area occluded by glasses is greater than the threshold, the specified object in the photo is wearing glasses; occlude the glasses region in the daily photo and set an occlusion flag.
416. Through the occlusion in step 415, obtain the glasses-occluded face photo.
417. Extract features from the glasses-occluded face photo using the document-glasses face verification model.
418. Through the feature extraction in step 417, obtain the glasses-occluded face feature.
After the daily photo of the specified object has been processed through steps 411 to 418, the ID photo of the specified object is processed.
421. Obtain the ID photo of the specified object.
422. Extract features from the ID photo of the specified object using the document-face verification model.
423. Through the feature extraction in step 422, obtain the ID face feature.
After the ID face feature is obtained through steps 421 to 423, the glasses-occluded face feature is verified against the ID face feature. The verification steps may be:
431. Compute the feature distance between the glasses-occluded face feature and the ID face feature.
432. Through the feature-distance computation in step 431, obtain the verification result.
The verification result may be that when the feature distance between the glasses-occluded face feature and the ID face feature is less than a preset value, the two are considered to match and verification succeeds; if the feature distance is greater than the preset value, the two are considered not to match and verification fails.
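Putting the flow of FIG. 7 together, a hypothetical driver function might read as follows; the model callables, area threshold, and preset value are all assumed interfaces for illustration rather than components specified by the patent:

```python
import numpy as np

def verify_identity(daily_photo, id_photo, seg_model, doc_glasses_model,
                    doc_face_model, area_threshold, preset_value):
    """End-to-end flow of FIG. 7. The three models are placeholders:
    seg_model maps a photo to an H x W occlusion mask, and the two
    verification models map a photo to a 1-D feature vector."""
    mask = seg_model(daily_photo)                        # steps 412-413
    if mask.sum() > area_threshold:                      # glasses detected
        occluded = daily_photo.copy()
        occluded[mask.astype(bool)] = 128                # steps 415-416
        daily_feature = doc_glasses_model(occluded)      # steps 417-418
    else:
        daily_feature = doc_face_model(daily_photo)      # step 414
    id_feature = doc_face_model(id_photo)                # steps 421-423
    distance = np.linalg.norm(daily_feature - id_feature)  # step 431
    return distance < preset_value                       # step 432
```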
Optionally, extracting the daily face feature from the daily photo of the specified object and the ID face feature from the ID photo of the specified object using verification models may include:
extracting the daily face feature with glasses from the daily photo using the document-glasses face verification model, which is used to extract face features from face photos with glasses;
extracting the ID face feature from the ID photo of the specified object using the document-face verification model, which is used to extract face features from face photos without glasses.
In the embodiments of this application, because the person in the ID photo is not wearing glasses, the document-face verification model can be used for its feature extraction; because the person in the daily photo is wearing glasses, only feature extraction with the document-glasses face verification model can accurately extract the face features from the photo with glasses. The feature extraction processes are essentially the same; only the models used differ, which ensures the accuracy of feature extraction from photos with glasses.
Optionally, extracting the daily face feature with glasses from the daily photo using the document-glasses face verification model may include:
modifying the pixel values of the glasses region indicated by the recognition result to obtain a daily photo with the glasses region occluded;
extracting the daily face feature with glasses from the glasses-region-occluded daily photo using the document-glasses face verification model.
In the embodiments of this application, modifying the pixel values of the glasses region indicated by the recognition result may mean setting the pixel values of the glasses region to the gray value 128.
The face verification apparatus 50 in the embodiments of this application is described below with reference to the accompanying drawings. The face verification apparatus 50 may be the server shown in FIG. 1 or a functional module within that server.
As shown in FIG. 8, an embodiment of the face verification apparatus 50 provided by the embodiments of this application includes:
an obtaining unit 501, configured to obtain an ID photo of a specified object and a daily photo of the specified object;
a recognition unit 502, configured to perform glasses-region recognition on the daily photo of the specified object obtained by the obtaining unit 501 using a glasses segmentation model, to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses;
a feature extraction unit 503, configured to: if the recognition result obtained by the recognition unit 502 indicates that the area of the glasses region is greater than a first threshold, extract a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models;
a verification unit 504, configured to verify the daily face feature extracted by the feature extraction unit 503 against the ID face feature, and if the matching degree between the two is greater than a second threshold, determine that the specified object passes verification.
In this embodiment of this application, the obtaining unit 501 obtains the ID photo of the specified object and the daily photo of the specified object; the recognition unit 502 performs glasses-region recognition on the daily photo obtained by the obtaining unit 501 using the glasses segmentation model, to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses; if the recognition result obtained by the recognition unit 502 indicates that the area of the glasses region is greater than the first threshold, the feature extraction unit 503 extracts a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models; and the verification unit 504 verifies the daily face feature extracted by the feature extraction unit 503 against the ID face feature, and if the matching degree between the two is greater than the second threshold, determines that the specified object passes verification. Compared with the prior art, in which an ID photo cannot be effectively compared with a photo of a person wearing glasses, the face verification apparatus provided by the embodiments of this application uses the corresponding verification model to extract face features from a face photo with glasses and compares them with the face features in the ID photo, thereby effectively comparing ID photos with photos of people wearing glasses and improving the convenience of face verification.
Optionally, the feature extraction unit 503 is configured to:
extract the daily face feature with glasses from the daily photo using the document-glasses face verification model, which is used to extract face features from face photos with glasses;
extract the ID face feature from the ID photo of the specified object using the document-face verification model, which is used to extract face features from face photos without glasses.
Optionally, the feature extraction unit 503 is configured to:
modify the pixel values of the glasses region indicated by the recognition result to obtain a daily photo with the glasses region occluded;
extract the daily face feature with glasses from the glasses-region-occluded daily photo using the document-glasses face verification model.
Optionally, the glasses segmentation model is obtained by convolutional neural network (CNN) training on multiple annotated daily photos of faces with glasses, in which the regions occluded by glasses have been marked.
Optionally, the document-face verification model is obtained by CNN training on multiple daily face photos without glasses and the ID photos of the person objects in the same object set, where the same object set is the set of person objects corresponding to the daily face photos without glasses.
Optionally, the document-glasses face verification model is obtained by performing feature-regression CNN adjustment on the document-face verification model using a glasses-region-occluded photo set and a glasses-free daily face feature set. The glasses-region-occluded photo set is obtained by comparing a glasses-free daily photo set with a daily photo set with glasses, determining the occlusion region corresponding to glasses in each photo of the glasses-free daily photo set, and occluding that region. The glasses-free daily face feature set is obtained by extracting features from each photo in the glasses-free daily photo set using the document-face verification model.
The face verification apparatus 50 provided by the embodiments of this application can be understood with reference to the corresponding descriptions of FIG. 1 to FIG. 7, and the details are not repeated here.
FIG. 9 is a schematic structural diagram of a server 60 according to an embodiment of this application. The server 60 is applied to a face verification system that includes the image collectors shown in FIG. 1 and the server 60. The server 60 includes one or more processors 610, a memory 650, and an input/output device 630. The memory 650 may include read-only memory and random access memory and provides operation instructions and data to the processor 610. A portion of the memory 650 may also include non-volatile random access memory (NVRAM).
In some implementations, the memory 650 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
In this embodiment of this application, by calling the operation instructions stored in the memory 650 (the operation instructions may be stored in an operating system), the processor 610:
obtains an ID photo of a specified object and a daily photo of the specified object;
performs glasses-region recognition on the daily photo of the specified object using a glasses segmentation model to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses;
if the recognition result indicates that the area of the glasses region is greater than a first threshold, extracts a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models;
if the matching degree between the daily face feature and the ID face feature is greater than a second threshold, determines that the specified object passes verification.
Compared with the prior art, in which an ID photo cannot be effectively compared with a photo of a person wearing glasses, the server provided by the embodiments of this application uses the corresponding verification model to extract face features from a face photo with glasses and compares them with the face features in the ID photo, thereby effectively comparing ID photos with photos of people wearing glasses and improving the convenience of face verification.
The processor 610 controls the operation of the server 60 and may also be called a CPU (Central Processing Unit). The memory 650 may include read-only memory and random access memory and provides instructions and data to the processor 610. A portion of the memory 650 may also include non-volatile random access memory (NVRAM). In a specific application, the components of the server 60 are coupled together through a bus system 620, which may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of description, however, the various buses are all labeled as the bus system 620 in the figure.
The method disclosed in the above embodiments of this application may be applied to, or implemented by, the processor 610. The processor 610 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits in hardware or by instructions in software form in the processor 610. The processor 610 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of this application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 650, and the processor 610 reads the information in the memory 650 and completes the steps of the above method in combination with its hardware.
Optionally, the processor 610 is further configured to:
extract the daily face feature with glasses from the daily photo using the document-glasses face verification model, which is used to extract face features from face photos with glasses;
extract the ID face feature from the ID photo of the specified object using the document-face verification model, which is used to extract face features from face photos without glasses.
Optionally, the processor 610 is further configured to:
modify the pixel values of the glasses region indicated by the recognition result to obtain a daily photo with the glasses region occluded;
extract the daily face feature with glasses from the glasses-region-occluded daily photo using the document-glasses face verification model.
Optionally, the glasses segmentation model is obtained by convolutional neural network (CNN) training on multiple annotated daily photos of faces with glasses, in which the regions occluded by glasses have been marked.
Optionally, the document-face verification model is obtained by CNN training on multiple daily face photos without glasses and the ID photos of the person objects in the same object set, where the same object set is the set of person objects corresponding to the daily face photos without glasses.
Optionally, the document-glasses face verification model is obtained by performing feature-regression CNN adjustment on the document-face verification model using a glasses-region-occluded photo set and a glasses-free daily face feature set. The glasses-region-occluded photo set is obtained by comparing a glasses-free daily photo set with a daily photo set with glasses, determining the occlusion region corresponding to glasses in each photo of the glasses-free daily photo set, and occluding that region. The glasses-free daily face feature set is obtained by extracting features from each photo in the glasses-free daily photo set using the document-face verification model.
The server described above in FIG. 9 can be understood with reference to the corresponding descriptions of FIG. 1 to FIG. 8, and the details are not repeated here.
A person of ordinary skill in the art may understand that all or some of the steps of the methods in the above embodiments may be completed by hardware executing program instructions corresponding to the methods; the program instructions may be stored in a computer-readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, or an optical disc.
The face verification method and apparatus provided by the embodiments of this application have been described above in detail. Specific examples are used herein to explain the principles and implementations of this application, but the descriptions of the above embodiments are only intended to help understand the method and core idea of this application. A person of ordinary skill in the art may make changes to the specific implementations and the scope of application according to the idea of this application. In summary, the content of this specification should not be construed as a limitation on this application.

Claims (15)

  1. A face verification method, comprising:
    obtaining an ID photo of a specified object and a daily photo of the specified object;
    performing glasses-region recognition on the daily photo of the specified object using a glasses segmentation model to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses;
    if the recognition result indicates that the area of the glasses region is greater than a first threshold, extracting a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models; and
    if the matching degree between the daily face feature and the ID face feature is greater than a second threshold, determining that the specified object passes verification.
  2. The method according to claim 1, wherein extracting the daily face feature from the daily photo of the specified object and the ID face feature from the ID photo of the specified object using verification models comprises:
    extracting the daily face feature with glasses from the daily photo using a document-glasses face verification model, the document-glasses face verification model being used to extract face features from face photos with glasses; and
    extracting the ID face feature from the ID photo of the specified object using a document-face verification model, the document-face verification model being used to extract face features from face photos without glasses.
  3. The method according to claim 2, wherein extracting the daily face feature with glasses from the daily photo using the document-glasses face verification model comprises:
    modifying pixel values of the glasses region indicated by the recognition result to obtain a daily photo with the glasses region occluded; and
    extracting the daily face feature with glasses from the glasses-region-occluded daily photo using the document-glasses face verification model.
  4. The method according to claim 2 or 3, wherein the glasses segmentation model is obtained by convolutional neural network training on multiple annotated daily photos of faces with glasses, in which the regions occluded by glasses have been marked.
  5. The method according to claim 4, wherein the document-face verification model is obtained by convolutional neural network training on multiple daily face photos without glasses and ID photos of person objects in the same object set, the same object set being the set of person objects corresponding to the daily face photos without glasses.
  6. The method according to claim 5, wherein the document-glasses face verification model is obtained by performing feature-regression convolutional neural network adjustment on the document-face verification model using a glasses-region-occluded photo set and a glasses-free daily face feature set.
  7. The method according to claim 6, wherein the glasses-region-occluded photo set is obtained by comparing a glasses-free daily photo set with a daily photo set with glasses, determining the occlusion region corresponding to glasses in each photo of the glasses-free daily photo set, and occluding that region; and the glasses-free daily face feature set is obtained by extracting features from each photo in the glasses-free daily photo set using the document-face verification model.
  8. A face verification apparatus, comprising one or more processors and a storage medium storing operation instructions, wherein when the operation instructions in the storage medium are run, the processors perform the following steps:
    obtaining an ID photo of a specified object and a daily photo of the specified object;
    performing glasses-region recognition on the obtained daily photo of the specified object using a glasses segmentation model to obtain a recognition result for the glasses region, the glasses segmentation model being used to recognize regions occluded by glasses;
    if the recognition result indicates that the area of the glasses region is greater than a first threshold, extracting a daily face feature from the daily photo of the specified object and an ID face feature from the ID photo of the specified object using verification models; and
    verifying the extracted daily face feature against the ID face feature, and if the matching degree between the daily face feature and the ID face feature is greater than a second threshold, determining that the specified object passes verification.
  9. The apparatus according to claim 8, wherein the processors are further configured to:
    extract the daily face feature with glasses from the daily photo using a document-glasses face verification model, the document-glasses face verification model being used to extract face features from face photos with glasses; and
    extract the ID face feature from the ID photo of the specified object using a document-face verification model, the document-face verification model being used to extract face features from face photos without glasses.
  10. The apparatus according to claim 9, wherein the processors are further configured to:
    modify pixel values of the glasses region indicated by the recognition result to obtain a daily photo with the glasses region occluded; and
    extract the daily face feature with glasses from the glasses-region-occluded daily photo using the document-glasses face verification model.
  11. The apparatus according to claim 9 or 10, wherein the glasses segmentation model is obtained by convolutional neural network training on multiple annotated daily photos of faces with glasses, in which the regions occluded by glasses have been marked.
  12. The apparatus according to claim 11, wherein the document-face verification model is obtained by convolutional neural network training on multiple daily face photos without glasses and ID photos of person objects in the same object set, the same object set being the set of person objects corresponding to the daily face photos without glasses.
  13. The apparatus according to claim 12, wherein the document-glasses face verification model is obtained by performing feature-regression convolutional neural network adjustment on the document-face verification model using a glasses-region-occluded photo set and a glasses-free daily face feature set.
  14. The apparatus according to claim 13, wherein the glasses-region-occluded photo set is obtained by comparing a glasses-free daily photo set with a daily photo set with glasses, determining the occlusion region corresponding to glasses in each photo of the glasses-free daily photo set, and occluding that region; and the glasses-free daily face feature set is obtained by extracting features from each photo in the glasses-free daily photo set using the document-face verification model.
  15. A computer-readable storage medium storing program instructions for performing the method according to any one of claims 1 to 7.
PCT/CN2017/100070 2016-08-31 2017-08-31 Face verification method, apparatus, and storage medium WO2018041237A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/208,183 US10922529B2 (en) 2016-08-31 2018-12-03 Human face authentication method and apparatus, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610796542.3 2016-08-31
CN201610796542.3A CN106407912B (zh) 2016-08-31 2016-08-31 Face verification method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/208,183 Continuation-In-Part US10922529B2 (en) 2016-08-31 2018-12-03 Human face authentication method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2018041237A1 (zh) 2018-03-08

Family

ID=58001925

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/100070 WO2018041237A1 (zh) 2016-08-31 2017-08-31 人脸验证的方法、装置以及存储介质

Country Status (3)

Country Link
US (1) US10922529B2 (zh)
CN (1) CN106407912B (zh)
WO (1) WO2018041237A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733570A (zh) * 2019-10-14 2021-04-30 北京眼神智能科技有限公司 Glasses detection method and apparatus, electronic device, and storage medium
CN113221086A (zh) * 2021-05-21 2021-08-06 深圳和锐网络科技有限公司 Offline face authentication method and apparatus, electronic device, and storage medium

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407912B (zh) * 2016-08-31 2019-04-02 腾讯科技(深圳)有限公司 Face verification method and apparatus
CN107276974B (zh) * 2017-03-10 2020-11-03 创新先进技术有限公司 Information processing method and apparatus
CN107729822A (zh) * 2017-09-27 2018-02-23 北京小米移动软件有限公司 Object recognition method and apparatus
CN107862270B (zh) * 2017-10-31 2020-07-21 深圳云天励飞技术有限公司 Face classifier training method, face detection method and apparatus, and electronic device
CN109753850B (zh) * 2017-11-03 2022-10-25 富士通株式会社 Training method and training device for a facial recognition model
CN109934062A (zh) * 2017-12-18 2019-06-25 比亚迪股份有限公司 Training method for a glasses-removal model, face recognition method, apparatus, and device
CN108446737B (zh) * 2018-03-21 2022-07-05 百度在线网络技术(北京)有限公司 Method and apparatus for recognizing objects
TWI684918B (zh) * 2018-06-08 2020-02-11 和碩聯合科技股份有限公司 Face recognition system and method for enhancing face recognition
CN111079480A (zh) * 2018-10-19 2020-04-28 北京金山云网络技术有限公司 Identity card information recognition method and apparatus, and terminal device
CN111353943B (zh) * 2018-12-20 2023-12-26 杭州海康威视数字技术股份有限公司 Face image restoration method and apparatus, and readable storage medium
SE1851630A1 (en) * 2018-12-20 2020-06-21 Precise Biometrics Ab Methods for biometrics verification using a mobile device
CN109784255B (zh) * 2019-01-07 2021-12-14 深圳市商汤科技有限公司 Neural network training method and apparatus, and recognition method and apparatus
CN110135583B (zh) * 2019-05-23 2020-08-21 北京地平线机器人技术研发有限公司 Annotation information generation method, annotation information generation apparatus, and electronic device
CN110569826B (zh) * 2019-09-18 2022-05-24 深圳市捷顺科技实业股份有限公司 Face recognition method, apparatus, device, and medium
CN113052851A (zh) * 2019-12-27 2021-06-29 上海昕健医疗技术有限公司 Deep learning based medical image processing method and system, and computer device
US11748057B2 (en) 2020-02-26 2023-09-05 Samsung Electronics Co., Ltd. System and method for personalization in intelligent multi-modal personal assistants
CN111598046A (zh) * 2020-05-27 2020-08-28 北京嘉楠捷思信息技术有限公司 Face occlusion detection method and face occlusion detection apparatus
CN111814603B (zh) * 2020-06-23 2023-09-05 汇纳科技股份有限公司 Face recognition method, medium, and electronic device
US11676390B2 (en) * 2020-10-23 2023-06-13 Huawei Technologies Co., Ltd. Machine-learning model, methods and systems for removal of unwanted people from photographs
CN112395580A (zh) * 2020-11-19 2021-02-23 联通智网科技有限公司 Authentication method, apparatus, and system, storage medium, and computer device
CN112488647A (zh) * 2020-11-25 2021-03-12 京东方科技集团股份有限公司 Attendance system and method, storage medium, and electronic device
CN117173008A (zh) * 2021-03-17 2023-12-05 福建库克智能科技有限公司 Method for making a mixture, the mixture, and method for generating pictures of face masks
US20230103129A1 (en) * 2021-09-27 2023-03-30 ResMed Pty Ltd Machine learning to determine facial measurements via captured images
CN115240265B (zh) * 2022-09-23 2023-01-10 深圳市欧瑞博科技股份有限公司 Intelligent user recognition method, electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914686A (zh) * 2014-03-11 2014-07-09 辰通智能设备(深圳)有限公司 Face comparison and authentication method and system based on ID photos and captured photos
CN104156700A (zh) * 2014-07-26 2014-11-19 佳都新太科技股份有限公司 Method for removing glasses from face images based on active shape models and weighted interpolation
CN106407912A (zh) * 2016-08-31 2017-02-15 腾讯科技(深圳)有限公司 Face verification method and apparatus

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7391900B2 (en) * 2002-10-31 2008-06-24 Korea Institute Of Science And Technology Image processing method for removing glasses from color facial images
US7596247B2 (en) * 2003-11-14 2009-09-29 Fujifilm Corporation Method and apparatus for object recognition using probability models
US7653221B2 (en) * 2006-01-31 2010-01-26 Fujifilm Corporation Method and apparatus for automatic eyeglasses detection and removal
KR20090093223A (ko) * 2008-02-29 2009-09-02 홍익대학교 산학협력단 Glasses removal using dynamic masks and inpainting to improve the performance of face recognition systems
CN102034079B (zh) * 2009-09-24 2012-11-28 汉王科技股份有限公司 Face recognition method and system under glasses occlusion
US8515127B2 (en) * 2010-07-28 2013-08-20 International Business Machines Corporation Multispectral detection of personal attributes for video surveillance
KR101373274B1 (ko) * 2012-11-08 2014-03-11 주식회사 크라스아이디 Face recognition method through glasses removal, and face recognition apparatus using the method
CN104112114B (zh) * 2013-10-30 2018-10-30 北京安捷天盾科技发展有限公司 Identity verification method and apparatus
CN104751108B (zh) * 2013-12-31 2019-05-17 汉王科技股份有限公司 Face image recognition apparatus and face image recognition method
CN104408426B (zh) * 2014-11-27 2018-07-24 小米科技有限责任公司 Method and apparatus for removing glasses from face images
CN104463172B (zh) * 2014-12-09 2017-12-22 重庆中科云丛科技有限公司 Face feature extraction method based on a depth model driven by facial landmark shapes
CN104463128B (zh) * 2014-12-17 2017-09-29 智慧眼(湖南)科技发展有限公司 Glasses detection method and system for face recognition
KR101956071B1 (ko) * 2015-01-13 2019-03-08 삼성전자주식회사 User authentication method and apparatus
CN105844206A (zh) * 2015-01-15 2016-08-10 北京市商汤科技开发有限公司 Identity authentication method and device
CN104751143B (zh) * 2015-04-02 2018-05-11 北京中盾安全技术开发公司 Deep learning based person-ID verification system and method
CN105095856B (zh) * 2015-06-26 2019-03-22 上海交通大学 Mask-based recognition method for occluded faces
CN105117692A (zh) * 2015-08-05 2015-12-02 福州瑞芯微电子股份有限公司 Real-time face recognition method and system based on deep learning
CN105184253B (zh) * 2015-09-01 2020-04-24 北京旷视科技有限公司 Face recognition method and face recognition system
CN105046250B (zh) * 2015-09-06 2018-04-20 广州广电运通金融电子股份有限公司 Glasses elimination method for face recognition
CN105139000B (zh) * 2015-09-16 2019-03-12 浙江宇视科技有限公司 Face recognition method and apparatus for removing glasses traces
CN105631403B (zh) * 2015-12-17 2019-02-12 小米科技有限责任公司 Face recognition method and apparatus
CN105868689B (zh) * 2016-02-16 2019-03-29 杭州景联文科技有限公司 Face occlusion detection method based on cascaded convolutional neural networks
US9990780B2 (en) * 2016-10-03 2018-06-05 Ditto Technologies, Inc. Using computed facial feature points to position a product model relative to a model of a face
CN108664782B (zh) * 2017-03-28 2023-09-12 三星电子株式会社 Face verification method and device
CN107609481B (zh) * 2017-08-14 2020-11-20 百度在线网络技术(北京)有限公司 Method, apparatus, and computer storage medium for generating training data for face recognition
CN109753850B (zh) * 2017-11-03 2022-10-25 富士通株式会社 Training method and training device for a facial recognition model
CN108846355B (zh) * 2018-06-11 2020-04-28 腾讯科技(深圳)有限公司 Image processing method, face recognition method, apparatus, and computer device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914686A (zh) * 2014-03-11 2014-07-09 辰通智能设备(深圳)有限公司 Face comparison and authentication method and system based on ID photos and captured photos
CN104156700A (zh) * 2014-07-26 2014-11-19 佳都新太科技股份有限公司 Method for removing glasses from face images based on active shape models and weighted interpolation
CN106407912A (zh) * 2016-08-31 2017-02-15 腾讯科技(深圳)有限公司 Face verification method and apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733570A (zh) * 2019-10-14 2021-04-30 北京眼神智能科技有限公司 Glasses detection method and apparatus, electronic device, and storage medium
CN112733570B (zh) * 2019-10-14 2024-04-30 北京眼神智能科技有限公司 Glasses detection method and apparatus, electronic device, and storage medium
CN113221086A (zh) * 2021-05-21 2021-08-06 深圳和锐网络科技有限公司 Offline face authentication method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN106407912B (zh) 2019-04-02
US20190114467A1 (en) 2019-04-18
CN106407912A (zh) 2017-02-15
US10922529B2 (en) 2021-02-16

Similar Documents

Publication Publication Date Title
WO2018041237A1 (zh) Face verification method, apparatus, and storage medium
US11288504B2 Iris liveness detection for mobile devices
US9454700B2 Feature extraction and matching for biometric authentication
US20200019760A1 Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
US9922238B2 Apparatuses, systems, and methods for confirming identity
US9361681B2 Quality metrics for biometric authentication
KR20180109634A (ko) Face authentication method and apparatus
CN105930709A (zh) Method and apparatus for applying face recognition technology to person-ID consistency verification
CN107368806B (zh) Image correction method and apparatus, computer-readable storage medium, and computer device
TW200905577A Iris recognition system
JP5730044B2 (ja) Face image authentication apparatus
CN111105368B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN108875472B (zh) Image capture apparatus and face identity verification method based on the image capture apparatus
KR101280439B1 (ko) Method for determining the face recognizability of face images captured by an ATM camera
Lee et al. Improvements in video-based automated system for iris recognition (vasir)
Merkle et al. State of the art of quality assessment of facial images
KR20210050649A (ko) Face authentication method for mobile devices
Mohammad Multi-Modal Ocular Recognition in Presence of Occlusion in Mobile Devices
JPWO2020053984A1 (ja) Biometric authentication apparatus, forgery determination program, and forgery determination method
KR20200034018A (ko) Infrared image based face recognition method and learning method therefor
KR102579610B1 (ko) Apparatus for detecting abnormal behavior at an ATM and method of operating the same
US11335123B2 Live facial recognition system and method
KR102439216B1 (ko) Method and server for recognizing a masked face using an artificial intelligence deep learning model
US20240021020A1 Methods and Systems for Secure Face Liveness Detection
Parab et al. Face Recognition-Based Automatic Hospital Admission with SMS Alerts

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17845539

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17845539

Country of ref document: EP

Kind code of ref document: A1