WO2019024636A1 - Method, system and apparatus for identity authentication - Google Patents

Method, system and apparatus for identity authentication

Info

Publication number
WO2019024636A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
light image
face
infrared light
visible light
Prior art date
Application number
PCT/CN2018/093787
Other languages
English (en)
French (fr)
Inventor
梁添才
王丹
许丹丹
金晓峰
章烈剽
Original Assignee
广州广电运通金融电子股份有限公司
广州广电卓识智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州广电运通金融电子股份有限公司 and 广州广电卓识智能科技有限公司
Publication of WO2019024636A1 publication Critical patent/WO2019024636A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology

Definitions

  • the present invention relates to the field of pattern recognition and artificial intelligence technologies, and in particular, to a method, system and apparatus for identity authentication.
  • At present, there are two main methods of identity authentication.
  • One is to check the basic information of the ID card against the public security network database and manually compare the holder with the ID card; this verification method is inefficient and has low recognition accuracy.
  • The other is to use an identity authentication device to collect an image of the ID card and simultaneously capture an image of the holder under visible light, and then compare the collected ID card image with the image of the holder; when the two are consistent, the verification passes.
  • A system for identity authentication, comprising:
  • An image obtaining module configured to obtain an ID image of a person to be authenticated, a visible light image of a face, and a near-infrared light image of a face;
  • a feature vector obtaining module configured to input the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and to extract the convolution features of the ID card image, the face visible light image, and the face near-infrared light image to obtain the corresponding feature vectors;
  • a similarity calculation module configured to calculate, according to the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
  • an identity authentication determining module configured to judge the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and to output an identity authentication result.
  • a computer readable storage medium having stored thereon a computer program, wherein the program, when executed by the processor, implements the following steps:
  • An apparatus for identity authentication, comprising: an outer casing; a display screen mounted on a front side of the outer casing; a binocular camera mounted on a top side of the front of the outer casing; an ID card image capture device mounted on a bottom side of the front of the outer casing; and a processor mounted inside the outer casing;
  • the binocular camera is configured to collect a visible light image of a face and a near-infrared light image of a face of a person to be authenticated;
  • the ID card image collection device is configured to collect an ID card image of a person to be authenticated
  • the processor is configured to perform the following steps:
  • the display screen is used to display the authentication result.
  • The ID card image, the face visible light image and the face near-infrared light image of the person to be authenticated are obtained, and the collected images are input into the pre-trained Triplets CNN model;
  • the convolution features of the ID card image, the face visible light image and the face near-infrared light image are extracted to obtain the corresponding feature vectors; the similarity between any two of the ID card image, the face visible light image and the face near-infrared light image is calculated from the feature vectors;
  • the consistency of the ID card image, the face visible light image and the face near-infrared light image is judged according to the similarities, and the identity authentication result is output.
  • the effective combination of the Triplets and the deep convolutional neural network CNN model can improve the robustness of identity authentication, thereby improving the accuracy of identity authentication.
  • FIG. 1 is a schematic flowchart of a method for identity authentication according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a method for identity authentication according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a method for identity authentication according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a Triplets CNN model in the method for identity authentication according to the present invention.
  • FIG. 5 is a schematic flowchart of a method for identity authentication according to an embodiment of the present invention.
  • FIGS. 6a, 6b and 6c are schematic diagrams of selecting triplet images satisfying a condition from the triplet images according to a preset triplet selection condition in the method for identity authentication according to the present invention;
  • FIG. 7 is a schematic flowchart of a method for identity authentication according to an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of a method for identity authentication according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of an apparatus for identity authentication according to an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of an apparatus for identity authentication according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an identity authentication method according to an embodiment of the present invention. As shown in FIG. 1 , the method for identity authentication in the embodiment of the present invention includes the following steps:
  • Step S110, obtaining an ID card image, a face visible light image, and a face near-infrared light image of the person to be authenticated.
  • the person to be authenticated is any person who needs to receive identity authentication.
  • The binocular camera simultaneously collects the face visible light image and the face near-infrared light image of the person to be authenticated, and a face detector then detects the face region in the collected face visible light image and face near-infrared light image.
  • the ID image of the person to be authenticated can be obtained directly by the card reading device or by scanning the ID card of the person to be authenticated by the scanning device.
  • the ID image of the person to be authenticated may be obtained by other commonly used image acquisition methods.
  • Step S120, inputting the ID card image, the face visible light image, and the face near-infrared light image into the pre-trained Triplets CNN model, and extracting the convolution features of the ID card image, the face visible light image, and the face near-infrared light image to obtain the corresponding feature vectors.
  • The Triplets CNN model extracts the convolution features of the ID card image, the face visible light image and the face near-infrared light image respectively, and obtains the feature vector of each image; that is, the feature vectors of the ID card image, the face visible light image and the face near-infrared light image are calculated by the Triplets CNN model.
  • Step S130, calculating the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image according to the feature vectors.
  • Step S140, judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputting the identity authentication result.
  • According to the feature vectors of the ID card image, the face visible light image and the face near-infrared light image, the similarity between each pair of the three images is calculated; the consistency of the ID card image, the face visible light image and the face near-infrared light image is then judged from the similarity results, and the identity authentication result is output.
  • The identity authentication result may be that the ID card image of the person to be authenticated is consistent with the face images, i.e., the authentication passes; or that the ID card image of the person to be authenticated is inconsistent with the face images, i.e., the authentication fails.
  • The method for identity authentication described above first collects the ID card image, the face visible light image and the face near-infrared light image of the person to be authenticated, and inputs the acquired images into the pre-trained Triplets CNN model;
  • the convolution features of the ID card image, the face visible light image and the face near-infrared light image are extracted to obtain the corresponding feature vectors; the similarity between any two of the three images is calculated from the feature vectors;
  • the consistency of the ID card image, the face visible light image and the face near-infrared light image is judged according to the similarities, and the identity authentication result is output.
  • The effective combination of Triplets and the deep convolutional neural network (CNN) model improves the robustness of identity authentication, thereby improving its accuracy.
  • the method further includes:
  • Step S150, performing quality assessment on the ID card image, the face visible light image, and the face near-infrared light image by using a quality assessment algorithm.
  • Image quality is easily affected by various factors when the camera collects images, such as ambient light (too strong or too weak) and focus.
  • An image acquired in one shot may be of poor quality (for example, low resolution), or the image collected by the ID card capture device may be blurred; such images affect the accuracy of the subsequent identity authentication. Therefore, after the face images and the ID card image are collected, their quality should be evaluated to determine whether they are suitable for the subsequent authentication; if the image quality is too poor, the face images and the ID card image are re-acquired.
  • A quality assessment algorithm is used to evaluate the quality of the face images and the ID card image collected in the field.
  • After the face images are collected, liveness detection may also be performed to ensure that the collected face image is a live image captured from the person to be authenticated, rather than a photograph or other substitute for the person to be authenticated.
  • Step S160, performing image preprocessing on the ID card image, the face visible light image, and the face near-infrared light image when the quality of the ID card image, the face visible light image, and the face near-infrared light image meets the requirements.
  • The ID card image, the face visible light image and the face near-infrared light image are preprocessed; for example, face alignment is performed on the three images, and their resolutions are adjusted to be consistent.
  • Preprocessing these images can effectively enhance the accuracy of the extracted image features, thereby increasing the accuracy of identity authentication.
  • The pre-trained Triplets CNN model is obtained through the following steps:
  • Step S170, pre-acquiring a plurality of ID card images and the face visible light images and face near-infrared light images corresponding to the ID card images, and constructing the triplet images of the training set from the plurality of ID card images, face visible light images and face near-infrared light images;
  • a triplet image includes a reference sample image, a homogeneous sample image, and a heterogeneous sample image.
  • The triplet is based on the concept of the Triplet loss function in metric learning.
  • A triplet is composed of an anchor sample (i.e., a reference sample), a positive sample (i.e., a homogeneous sample), and a negative sample (i.e., a heterogeneous sample).
  • A triplet is written as (anchor, positive, negative), or (a, p, n), where a and p belong to the same class, and a and n belong to different classes. The learning process then learns a representation such that, for as many triplets as possible, the distance between the anchor and the positive sample is smaller than the distance between the anchor and the negative sample, that is:
  • ||f(x_i^a) - f(x_i^p)||^2 + α < ||f(x_i^a) - f(x_i^n)||^2
  • x i a represents a reference sample
  • x i p represents a homogeneous sample
  • x i n represents a heterogeneous sample
  • α represents a specified threshold, between 0.0 and 1.0, with a suggested value of 0.2.
  • The inequality essentially defines the distance relationship between homogeneous and heterogeneous samples: the distance between samples of the same class plus the threshold α must be smaller than the distance between samples of different classes. Converting the above inequality gives the objective function based on Triplets:
  • L = Σ_i max(0, ||f(x_i^a) - f(x_i^p)||^2 - ||f(x_i^a) - f(x_i^n)||^2 + α)
  • The meaning of the objective function is that triplets that do not satisfy the condition are optimized, while triplets that already satisfy the condition contribute no loss.
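The hinge form of this objective can be sketched in plain Python. This is a minimal illustration of the per-triplet Triplets loss under the assumption of squared Euclidean distances between feature vectors; the function names and example vectors are hypothetical, not taken from the patent.

```python
def sq_dist(u, v):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Hinge form of the Triplets objective for a single triplet:
    max(0, d(a, p) - d(a, n) + alpha).
    The loss is zero exactly when the triplet already satisfies
    d(a, p) + alpha < d(a, n), so only violating triplets are optimized."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + alpha)

# A triplet that satisfies the margin contributes no loss:
ok = triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 0.0], alpha=0.2)      # 0.0
# A triplet whose positive and negative are equally far incurs alpha:
bad = triplet_loss([0.0, 0.0], [1.0, 0.0], [1.0, 0.0], alpha=0.2)     # 0.2
```

The `max(0, ...)` clamp is what makes satisfied triplets "not matter", matching the description above.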
  • Specifically, a plurality of ID card images and the face visible light images and face near-infrared light images corresponding to the ID card images are first collected, and the triplet images of the training set are constructed from these ID card images, face visible light images and face near-infrared light images.
  • Step S180, selecting a triplet image that satisfies the condition from the triplet images according to the preset triplet selection condition.
  • The preset triplet selection condition may be: take any one image among the ID card image, the face visible light image and the face near-infrared light image as the reference sample, and treat images of the other types as homogeneous or heterogeneous samples; then select the homogeneous sample farthest from the reference sample and the heterogeneous sample closest to the reference sample to generate a triplet image that satisfies the condition.
  • the triplet selection condition can be designed according to the actual requirements in the identity authentication process, and the selection method is not unique.
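One way to realize the "farthest homogeneous, nearest heterogeneous" selection rule described above is a simple mining pass over candidate samples. The distance function, sample lists and identity labels below are illustrative assumptions, not details from the patent.

```python
def sq_dist(u, v):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def select_hard_triplet(anchor, homogeneous, heterogeneous):
    """Pick the homogeneous sample farthest from the anchor and the
    heterogeneous sample closest to it, yielding a 'hard' triplet."""
    hard_pos = max(homogeneous, key=lambda s: sq_dist(anchor, s))
    hard_neg = min(heterogeneous, key=lambda s: sq_dist(anchor, s))
    return anchor, hard_pos, hard_neg

# Illustrative 2-D feature vectors:
anchor = [0.0, 0.0]                       # e.g. an ID card photo feature
same = [[0.1, 0.0], [0.5, 0.0]]           # same identity, other modality
other = [[2.0, 0.0], [0.9, 0.0]]          # different identities
a, p, n = select_hard_triplet(anchor, same, other)
```

Mining the hardest pairs this way gives the training the most informative violations of the margin, which is the stated purpose of the selection condition.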
  • Step S190, inputting the triplet images satisfying the condition into the CNN model for training, to obtain the trained Triplets CNN model.
  • The selected triplet images that satisfy the condition are input into the CNN model (see FIG. 4), and training is carried out to obtain the trained Triplets CNN model.
  • The Triplets CNN model is the Triplets deep convolutional neural network model.
  • The Triplets CNN model is mainly composed of convolution layers, pooling layers, a fully connected layer (average pooling layer), and a Triplets loss layer. In general, the number of convolution layers can differ and may be adjusted according to actual needs; each convolution layer is followed by a pooling layer for local averaging and sub-sampling, convolution and sampling alternate continuously, and the output is finally produced by the fully connected layer.
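The alternation of convolution and sub-sampling described above can be illustrated by tracing feature-map sizes through the stages. The input size (128), kernel size (3, 'same' padding), and the three-stage depth are assumptions chosen for illustration, not values given in the patent.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output side length of a square convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output side length of a pooling (sub-sampling) layer."""
    return (size - kernel) // stride + 1

size = 128                                   # assumed square input image
for _ in range(3):                           # assumed three conv+pool stages
    size = conv_out(size, kernel=3, pad=1)   # 'same' convolution keeps size
    size = pool_out(size)                    # pooling halves the map
# size is now 16; a fully connected (average pooling) layer would follow
```

Each pooling step halves the spatial resolution (128 to 64 to 32 to 16 here), which is the "continuous alternating between convolution and sampling" the text refers to.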
  • The triplet, i.e., the ID card image, the face visible light image and the face near-infrared light image of the person to be authenticated, is input into the Triplets CNN network model.
  • After multiple iterations of calculation, the Triplets loss becomes as small as possible and finally converges.
  • When extracting the features of a face image according to the trained Triplets CNN model, the output of the last fully connected layer (average pooling layer) in the Triplets CNN is taken as the feature of the ID card photo, the near-infrared face image, or the visible light face image.
  • When the first reference sample triplet image, the second reference sample triplet image, and the third reference sample triplet image are the triplet images satisfying the condition, the step of selecting a triplet image satisfying the condition from the triplet images according to the preset triplet selection condition includes:
  • Step S181, selecting any ID card image from the reference sample images as the first reference sample, selecting the face near-infrared light image farthest from the first reference sample among the homogeneous sample images, and selecting the face near-infrared light image closest to the first reference sample among the heterogeneous sample images, to generate a first reference sample triplet image;
  • Step S182, selecting any face near-infrared light image from the reference sample images as the second reference sample, selecting the face visible light image farthest from the second reference sample among the homogeneous sample images, and selecting the face visible light image closest to the second reference sample among the heterogeneous sample images, to generate a second reference sample triplet image;
  • Step S183, selecting any face visible light image from the reference sample images as the third reference sample, selecting the ID card image farthest from the third reference sample among the homogeneous sample images, and selecting the ID card image closest to the third reference sample among the heterogeneous sample images, to generate a third reference sample triplet image.
  • the ID card image is recorded as IDPIC
  • the face near-infrared image is recorded as NIR
  • the visible light image is referred to as VIS.
  • IDPIC-NIR: select an ID card photo from the anchor sample images as the first reference sample; in Euclidean space, select from the positive sample images the face near-infrared image farthest from the ID card photo, forming a hard anchor-positive pair, and select from the negative sample images the face near-infrared image closest to the ID card photo, forming a hard anchor-negative pair, as shown in Figure 6a. The anchor, hard anchor-positive and hard anchor-negative form the first reference sample triplet image. NIR-IDPIC is constructed similarly.
  • NIR-VIS: select a near-infrared image from the anchor sample images as the second reference sample; in Euclidean space, select from the positive sample images the visible light face image farthest from the near-infrared image, forming a hard anchor-positive pair, and select from the negative sample images the visible light face image closest to the near-infrared image, forming a hard anchor-negative pair, as shown in Figure 6b. The anchor, hard anchor-positive and hard anchor-negative form the second reference sample triplet image. VIS-NIR is constructed similarly.
  • VIS-IDPIC: select a visible light image from the anchor sample images as the third reference sample; in Euclidean space, select from the positive sample images the ID card photo farthest from the visible light image, forming a hard anchor-positive pair, and select from the negative sample images the ID card photo closest to the visible light image, forming a hard anchor-negative pair, as shown in Figure 6c. The anchor, hard anchor-positive and hard anchor-negative form the third reference sample triplet image.
  • IDPIC-VIS is constructed similarly.
  • The step of inputting the triplet images satisfying the condition into the CNN model for training to obtain the trained Triplets CNN model includes:
  • Step S191, inputting the triplet images satisfying the condition into the CNN model for learning and training through multiple convolution layers and pooling layers, and calculating the Triplets target loss function value; when the Triplets target loss function value converges, the trained Triplets CNN model is obtained.
  • As shown in the figure, the triplet images satisfying the condition are input into the CNN model for learning and training through multiple convolution layers and pooling layers, and the Triplets target loss function value is calculated; when the Triplets target loss function value converges, the trained Triplets CNN model is obtained.
  • the number of convolutional layers and pooling layers can be adjusted according to the requirements of actual image processing.
  • Each of the convolutional layers is followed by a pooling layer for local averaging and subsampling, alternating between convolution and sampling, and finally output by the fully connected layer.
  • the method includes:
  • Step S131, calculating the similarity between any two images among the ID card image, the face visible light image and the face near-infrared light image according to the following formula:
  • Sim(I_1, I_2) = ( Σ_{k=1..n} f_1k · f_2k ) / ( sqrt(Σ_{k=1..n} f_1k²) · sqrt(Σ_{k=1..n} f_2k²) )
  • Sim(I 1 , I 2 ) represents the similarity between the image with the feature vector I 1 and the image with the feature vector I 2
  • n is the dimension of the feature vector
  • f 1k is the kth element of the feature vector I 1
  • f 2k is the kth element of the feature vector I 2 .
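Assuming the similarity is the cosine of the angle between the two n-dimensional feature vectors (which is consistent with the per-element terms f_1k and f_2k defined above, though the patent's exact formula is not reproduced on this page), it can be computed as:

```python
import math

def cosine_similarity(f1, f2):
    """Sim(I1, I2): cosine similarity between two n-dimensional
    feature vectors, in [-1, 1]; 1 means identical direction."""
    num = sum(a * b for a, b in zip(f1, f2))
    den = math.sqrt(sum(a * a for a in f1)) * math.sqrt(sum(b * b for b in f2))
    return num / den

same_dir = cosine_similarity([1.0, 0.0], [2.0, 0.0])       # 1.0
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])     # 0.0
```

Because the measure is scale-invariant, it compares the direction of the feature vectors rather than their magnitude, which suits features taken from a shared embedding layer.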
  • The step of judging the consistency of the ID card image, the face visible light image and the face near-infrared light image according to the similarity, and outputting the identity authentication result, includes:
  • Step S141, when the similarity between the ID card image and the face visible light image, or the similarity between the ID card image and the face near-infrared light image, is greater than a preset threshold, determining that the identity authentication of the person to be authenticated passes.
  • After the similarities between each pair of the ID card image, the face visible light image and the face near-infrared light image are calculated from their feature vectors, the following decision output method is adopted: if Sim(IDPIC, VIS) > T or Sim(IDPIC, NIR) > T, the authentication passes; otherwise, it fails. Where:
  • T is the preset threshold
  • IDPIC indicates the ID card photo
  • NIR indicates the face near infrared light image
  • VIS indicates the face visible image
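The decision step above reduces to a simple threshold test over the two cross-modal similarities. The threshold value 0.8 used in the example call is an illustrative assumption; the patent leaves T as a preset parameter.

```python
def authenticate(sim_idpic_vis, sim_idpic_nir, threshold):
    """Pass when either the IDPIC-VIS or the IDPIC-NIR similarity
    exceeds the preset threshold T."""
    return sim_idpic_vis > threshold or sim_idpic_nir > threshold

# e.g. with an assumed threshold T = 0.8:
passed = authenticate(0.85, 0.60, threshold=0.8)   # passes via IDPIC-VIS
failed = authenticate(0.50, 0.50, threshold=0.8)   # neither similarity clears T
```

Accepting on either comparison makes the check robust to one modality being degraded (for example, poor visible-light conditions) while the other still matches.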
  • Based on the same inventive concept, the present invention further provides a system for identity authentication.
  • The system for identity authentication of the present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments.
  • The figure is a schematic structural diagram of the system for identity authentication of the present invention in one embodiment.
  • The system for identity authentication in this embodiment includes:
  • the image obtaining module 10 is configured to obtain an ID image of the person to be authenticated, a visible light image of the face, and a near-infrared light image of the face.
  • The feature vector obtaining module 20 is configured to input the ID card image, the face visible light image, and the face near-infrared light image into the pre-trained Triplets CNN model, and to extract the convolution features of the ID card image, the face visible light image, and the face near-infrared light image to obtain the corresponding feature vectors.
  • the similarity calculation module 30 is configured to calculate the similarities of any two of the ID image, the human visible light image, and the human near-infrared light image according to the feature vector.
  • the identity authentication determining module 40 is configured to determine the consistency of the ID image, the human face visible light image, and the face near infrared light image according to the similarity, and output the identity authentication result.
  • system for identity authentication further includes:
  • the quality evaluation module 50 is configured to perform quality assessment on the ID card image, the human face visible light image, and the face near infrared light image by using a quality assessment algorithm.
  • the image pre-processing module 60 is configured to perform image pre-processing on the ID card image, the human face visible light image, and the face near-infrared light image when the ID card image, the human face visible light image, and the face near-infrared light image quality meet the requirements.
  • system for identity authentication further includes:
  • The training set triplet building module 70 is configured to acquire in advance a plurality of ID card images and the face visible light images and face near-infrared light images corresponding to the ID card images, and to construct the triplet images of the training set from the plurality of ID card images, face visible light images and face near-infrared light images; a triplet image includes a reference sample image, a homogeneous sample image, and a heterogeneous sample image.
  • the triplet image selection module 80 is configured to select a triplet image that satisfies the condition from the triplet image according to the preset triplet selection condition;
  • The Triplets CNN model training module 90 is configured to input the triplet images satisfying the condition into the CNN model for training, to obtain the trained Triplets CNN model.
  • system for identity authentication further includes:
  • the similarity calculation module 30 is further configured to calculate the similarity between any two images in the ID image, the human visible light image, and the human near-infrared light image according to the following formula:
  • Sim(I 1 , I 2 ) represents the similarity between the image with the feature vector I 1 and the image with the feature vector I 2
  • n is the dimension of the feature vector
  • f 1k is the kth element of the feature vector I 1
  • f 2k is the kth element of the feature vector I 2 .
  • When the first reference sample triplet image, the second reference sample triplet image, and the third reference sample triplet image are the triplet images satisfying the condition, the triplet image selection module 80 further includes:
  • The first reference sample triplet image generating module 81 is configured to select any ID card image from the reference sample images as the first reference sample, select the face near-infrared light image farthest from the first reference sample among the homogeneous sample images, and select the face near-infrared light image closest to the first reference sample among the heterogeneous sample images, to generate a first reference sample triplet image;
  • The second reference sample triplet image generating module 82 is configured to select any face near-infrared light image from the reference sample images as the second reference sample, select the face visible light image farthest from the second reference sample among the homogeneous sample images, and select the face visible light image closest to the second reference sample among the heterogeneous sample images, to generate a second reference sample triplet image;
  • The third reference sample triplet image generating module 83 is configured to select any face visible light image from the reference sample images as the third reference sample, select the ID card image farthest from the third reference sample among the homogeneous sample images, and select the ID card image closest to the third reference sample among the heterogeneous sample images, to generate a third reference sample triplet image.
  • system for identity authentication further includes:
  • The Triplets CNN model training module 90 is further configured to input the triplet images satisfying the condition into the CNN model for learning and training through multiple convolution layers and pooling layers, and to calculate the Triplets target loss function value; when the Triplets target loss function value converges, the trained Triplets CNN model is obtained.
  • The above system for identity authentication can perform the method for identity authentication provided by the embodiments of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
  • For the specific working process of each functional module, for example, the image obtaining module 10, the feature vector obtaining module 20, the similarity calculation module 30, the identity authentication determining module 40, and the Triplets CNN model training module 90, refer to the description in the above method embodiments; it is not repeated here.
  • the present invention further provides a computer readable storage medium, and the computer readable storage medium of the present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments.
  • the computer readable storage medium in the embodiment of the present invention has a computer program stored thereon, and when the program is executed by the processor, all the method steps in the method embodiment of the present invention can be implemented.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
  • The computer readable storage medium is used to store the program (instructions) of the method for identity authentication provided by the embodiments of the present invention; by executing the stored program, the method for identity authentication can be performed, with the corresponding beneficial effects.
  • the present invention further provides an apparatus for identity authentication.
  • The apparatus of the present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments.
  • FIG. 9 is a schematic structural diagram of an apparatus for identity authentication according to an embodiment of the present invention.
  • the device for identity authentication in this embodiment includes a binocular camera 901, an ID card image capture device 902, a processor 903, and a display screen 904;
  • the binocular camera 901 is configured to collect a visible light image of a face and a near-infrared light image of a face of the person to be authenticated;
  • the ID card image collection device 902 is configured to collect an ID card image of the person to be authenticated.
  • the processor 903 is configured to perform the following steps:
  • the ID card image, the face visible light image, and the face near-infrared light image are input into the pre-trained Triplets CNN model, and their convolutional features are extracted to obtain the corresponding feature vectors.
  • Display 904 is used to display the authentication result.
  • the above identity authentication device can simultaneously collect the face visible light image and the face near-infrared light image of the person to be authenticated using the binocular camera, collect the ID card image of the person to be authenticated using the ID card image capture device, and then transmit the collected images to the processor.
  • the processor calculates the similarity of any two of the ID card image, the face visible light image, and the face near-infrared light image using the Triplets CNN model, judges the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputs the identity authentication result.
  • the identity authentication device is convenient to use and can quickly perform identity authentication.
  • the device for identity authentication includes a housing 100, a display screen 200 mounted on the side of the housing 100, a binocular camera 300 mounted on the upper end of the side of the housing 100, an ID card image capture device 400 mounted on the lower end of the side of the housing 100, and a processor mounted inside the housing 100.
  • the binocular camera 300 is disposed at the upper end of the side of the housing 100 to facilitate collecting the face image of the person to be authenticated.
  • the ID card image capture device 400 is disposed at the lower end of the side of the housing 100 so that the person to be authenticated can conveniently place the ID card for image collection. The identity authentication device has a simple structure and is convenient to use.
  • the display 200 is a touch display screen, which is convenient for operating the identity authentication device, for example selecting re-collection of a face image.

Abstract

The present invention relates to a method for identity authentication, comprising the following steps: obtaining an ID card image, a face visible light image, and a face near-infrared light image of a person to be authenticated; inputting the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and extracting the convolutional features of the three images to obtain corresponding feature vectors; calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image; and judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputting an identity authentication result. By effectively combining Triplets with a deep convolutional neural network (CNN) model, the present invention effectively improves the robustness of identity authentication and thereby improves its accuracy.

Description

Method, system and device for identity authentication
Technical Field
The present invention relates to the field of pattern recognition and artificial intelligence, and in particular to a method, system and device for identity authentication.
Background Art
Today, information security is receiving increasing attention. To ensure that information is not modified or arbitrarily forged by people without access rights, authenticating people's identities is particularly important. At present, many occasions in daily life (for example in living, travel, and work) require presenting and verifying identity documents and authenticating a person's identity against those documents, such as handling bank business, boarding high-speed rail, aircraft, or other transport, and checking into hotels. The verification process mainly checks the relevant information to ensure consistency between the holder and the document, i.e., that the person and the document match.
At present, there are two main identity authentication methods. One first checks the basic information of the ID card against the public security network database, and then manually compares the holder with the information on the ID card; this method is inefficient and has low recognition accuracy. The other uses an identity authentication device to collect an image of the ID card while capturing an image of the holder under visible light, and then compares the collected ID card image with the image of the holder; when the comparison results match, verification passes.
However, during identity authentication, the process of collecting images under visible light is strongly affected by illumination, pose, and expression, resulting in low image resolution and therefore poor authentication accuracy.
Summary of the Invention
On this basis, to address the poor accuracy of existing identity authentication methods, it is necessary to provide a method and system for identity authentication.
A method for identity authentication, comprising:
obtaining an ID card image, a face visible light image, and a face near-infrared light image of a person to be authenticated;
inputting the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and extracting the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain corresponding feature vectors;
calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputting an identity authentication result.
A system for identity authentication, comprising:
an image obtaining module, configured to obtain an ID card image, a face visible light image, and a face near-infrared light image of a person to be authenticated;
a feature vector obtaining module, configured to input the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and to extract the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain corresponding feature vectors;
a similarity calculating module, configured to calculate, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
an identity authentication determining module, configured to judge the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and to output an identity authentication result.
A computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the following steps:
obtaining an ID card image, a face visible light image, and a face near-infrared light image of a person to be authenticated;
inputting the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and extracting the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain corresponding feature vectors;
calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputting an identity authentication result.
A device for identity authentication, comprising a housing, a display screen mounted on the front side of the housing, a binocular camera mounted on the top side of the housing, an ID card image collection device mounted on the bottom side of the housing, and a processor mounted inside the housing;
the binocular camera is configured to collect a face visible light image and a face near-infrared light image of a person to be authenticated;
the ID card image collection device is configured to collect an ID card image of the person to be authenticated;
the processor is configured to perform the following steps:
obtaining an ID card image, a face visible light image, and a face near-infrared light image of the person to be authenticated;
inputting the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and extracting the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain corresponding feature vectors;
calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputting an identity authentication result;
the display screen is configured to display the authentication result.
In the present invention, the ID card image, face visible light image, and face near-infrared light image of the person to be authenticated are obtained, and the collected images are then input into the pre-trained Triplets CNN model; the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image are extracted to obtain the corresponding feature vectors; the similarity between any two of the three images is calculated from the feature vectors; and the consistency of the ID card image, the face visible light image, and the face near-infrared light image is judged according to the similarities, with the identity authentication result output. By effectively combining Triplets with a deep convolutional neural network (CNN) model, the present invention improves the robustness of identity authentication and thereby improves its accuracy.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the identity authentication method of the present invention in one embodiment;
FIG. 2 is a schematic flowchart of the identity authentication method of the present invention in one embodiment;
FIG. 3 is a schematic flowchart of the identity authentication method of the present invention in one embodiment;
FIG. 4 is a schematic diagram of the Triplets CNN model in the identity authentication method of the present invention;
FIG. 5 is a schematic flowchart of the identity authentication method of the present invention in one embodiment;
FIGS. 6a, 6b and 6c are schematic diagrams of selecting, from the triplet images, triplet images that satisfy a preset triplet selection condition in the identity authentication method of the present invention;
FIG. 7 is a schematic flowchart of the identity authentication method of the present invention in one embodiment;
FIG. 8 is a schematic flowchart of the identity authentication method of the present invention in one embodiment;
FIG. 9 is a schematic structural diagram of the identity authentication device of the present invention in one embodiment;
FIG. 10 is a schematic structural diagram of the identity authentication device of the present invention in one embodiment.
Detailed Description of the Embodiments
The content of the present invention is described in further detail below with reference to preferred embodiments and the accompanying drawings. Obviously, the embodiments described below are only intended to explain the present invention, not to limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention. It should be noted that, for ease of description, the drawings show only the parts relevant to the present invention rather than the entire content.
FIG. 1 is a schematic flowchart of the identity authentication method of the present invention in one embodiment. As shown in FIG. 1, the identity authentication method in this embodiment comprises the following steps:
Step S110: obtain an ID card image, a face visible light image, and a face near-infrared light image of the person to be authenticated.
In this embodiment, the person to be authenticated is anyone who needs to undergo identity authentication. During image authentication, a binocular camera simultaneously collects the face visible light image and the face near-infrared light image of the person to be authenticated, and a face detector then detects the face region in the collected face visible light image and face near-infrared light image. In addition, the ID card image of the person to be authenticated can be obtained directly with a card reading device or by scanning the ID card of the person to be authenticated with a scanning device.
It should be understood that the ID card image, face visible light image, and face near-infrared light image of the person to be authenticated may also be obtained by other common image collection methods.
Step S120: input the ID card image, the face visible light image, and the face near-infrared light image into the pre-trained Triplets CNN model, and extract the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain the corresponding feature vectors.
Specifically, during identity authentication, the ID card image, face visible light image, and face near-infrared light image of the person to be tested are first collected, and these images are then input into the pre-trained Triplets CNN model. Using the Triplets and CNN algorithms, the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image are extracted respectively to obtain the feature vector of each of these images; that is, the feature vectors of the ID card image, the face visible light image, and the face near-infrared light image are each computed through the Triplets CNN model.
Step S130: calculate, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image.
Step S140: judge the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and output the identity authentication result.
In this embodiment, the computed feature vectors of the ID card image, the face visible light image, and the face near-infrared light image are used to calculate the pairwise similarities among the three images; the consistency of the ID card image, the face visible light image, and the face near-infrared light image is then judged according to the similarity results, and the identity authentication result is output. The identity authentication result may be that the ID card image and the face images of the person to be tested are consistent, i.e., the person and the document match, or that the ID card image and the face images of the person to be tested are inconsistent.
In the above identity authentication method, the ID card image, face visible light image, and face near-infrared light image of the person to be authenticated are first collected and then input into the pre-trained Triplets CNN model; the convolutional features of the three images are extracted to obtain the corresponding feature vectors; the pairwise similarities among the ID card image, the face visible light image, and the face near-infrared light image are calculated from the feature vectors; and the consistency of the three images is judged according to the similarities, with the identity authentication result output. By effectively combining Triplets with a deep convolutional neural network (CNN) model, the present invention effectively improves the robustness of identity authentication and thereby improves its accuracy.
In one embodiment, as shown in FIG. 2, before the step of inputting the ID card image, the face visible light image, and the face near-infrared light image into the pre-trained Triplets CNN model and extracting their convolutional features to obtain the corresponding feature vectors, the method further includes:
Step S150: evaluate the quality of the ID card image, the face visible light image, and the face near-infrared light image using a quality evaluation algorithm.
Specifically, when the face images (the face visible light image and the face near-infrared light image) are collected on site, image quality is easily affected by factors such as ambient illumination (strong or weak light) and camera focus, so a single acquisition may yield a poor-quality image (e.g., low resolution), or the image collected by the ID card collection device may be blurred; such images would compromise the accuracy of subsequent identity authentication. Therefore, after the face images and the ID card image are collected, their quality must be evaluated to determine whether they are suitable for subsequent identity authentication. If the image quality is too poor for subsequent identity authentication, the face images and the ID card image are re-collected. In this embodiment, a quality evaluation algorithm is used to evaluate the quality of the face images and the ID card image collected on site. In addition, after the face images are collected, liveness detection can also be performed on them to ensure that the collected face images are live images captured on site from the person to be tested, rather than images captured from, for example, a photo of that person.
Step S160: when the quality of the ID card image, the face visible light image, and the face near-infrared light image meets the requirements, perform image preprocessing on the ID card image, the face visible light image, and the face near-infrared light image.
In this embodiment, when the quality of the ID card image, the face visible light image, and the face near-infrared light image meets the requirements, the three images are preprocessed, for example by performing face alignment on them and by normalizing their resolutions so that the resolutions are consistent. Preprocessing these images effectively improves the accuracy of the extracted image features and thus the accuracy of identity authentication.
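As an illustration of the resolution-normalization part of this preprocessing step, a minimal sketch follows. It assumes grayscale images stored as NumPy arrays and uses nearest-neighbour interpolation for brevity; the face-alignment part and a production-grade resampler (for example from an image library such as OpenCV) are deliberately omitted.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize (illustrative only) so that the ID card image,
    the face visible light image, and the face near-infrared light image
    share one common resolution before feature extraction."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

# Bring three differently sized images to a common 112x112 resolution
images = [np.zeros((480, 640)), np.zeros((300, 300)), np.zeros((240, 320))]
normalized = [resize_nearest(im, 112, 112) for im in images]
```

The common target size 112x112 is an assumption for the example; the patent only requires that the resolutions be made consistent.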
In one embodiment, as shown in FIG. 3, the pre-trained Triplets CNN model is obtained through the following steps:
Step S170: obtain in advance multiple ID card images and the face visible light images and face near-infrared light images corresponding to the ID card images, and construct the triplet images of the training set from the multiple ID card images, face visible light images, and face near-infrared light images; a triplet image includes a reference sample image, a same-class sample image, and a different-class sample image.
Specifically, a triplet is a concept from the metric-learning Triplet loss function. A triplet consists of an anchor sample (the reference sample), a positive sample (a same-class sample), and a negative sample (a different-class sample); that is, a triplet is three examples, (anchor, positive, negative), where a and p belong to the same class and a and n belong to different classes. The learning process then learns a representation such that, for as many triplets as possible, the anchor-positive distance is smaller than the anchor-negative distance. That is:
||f(x_i^a) - f(x_i^p)||_2^2 + α < ||f(x_i^a) - f(x_i^n)||_2^2
where x_i^a denotes the reference sample, x_i^p denotes the same-class sample, x_i^n denotes the different-class sample, and α is a margin threshold between 0.0 and 1.0, with a suggested value of 0.2. The inequality essentially defines the distance relationship between same-class and different-class samples: the distance between same-class samples plus the margin α must be smaller than the distance between different-class samples. Transforming the above formula yields the Triplets-based objective function:
L = Σ_i [ ||f(x_i^a) - f(x_i^p)||_2^2 - ||f(x_i^a) - f(x_i^n)||_2^2 + α ]_+
The meaning of the objective function is that triplets that do not satisfy the condition are optimized, while triplets that already satisfy the condition are left alone.
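The objective above can be sketched numerically as follows. This is a minimal NumPy illustration, assuming the embeddings f(x) have already been produced by the network and using the suggested margin α = 0.2; it is not the patent's implementation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Mean of max(||a - p||^2 - ||a - n||^2 + alpha, 0) over a batch of
    triplets. Triplets already satisfying the margin contribute zero,
    matching the 'leave satisfied triplets alone' behaviour of the objective."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)  # squared anchor-positive distance
    d_an = np.sum((anchor - negative) ** 2, axis=-1)  # squared anchor-negative distance
    return float(np.mean(np.maximum(d_ap - d_an + alpha, 0.0)))

# A triplet whose negative is already far away incurs zero loss
a = np.array([[1.0, 0.0]])
p = np.array([[1.0, 0.0]])
n = np.array([[0.0, 1.0]])
print(triplet_loss(a, p, n))  # 0.0
```

Minimizing this quantity over many triplets is exactly what drives the anchor-positive distances below the anchor-negative distances by the margin α.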
In this embodiment, multiple ID card images and the face visible light images and face near-infrared light images corresponding to the ID card images are first collected, and the triplet images of the training set are constructed from the multiple ID card images, face visible light images, and face near-infrared light images.
Step S180: select, from the triplet images, triplet images that satisfy a preset triplet selection condition.
In this embodiment, the preset triplet selection condition may be: take one image of any one of the three types (ID card image, face visible light image, face near-infrared light image) as the reference sample, take images of the other types as same-class or different-class samples, and select the same-class sample farthest from the reference sample and the different-class sample closest to the reference sample, generating a triplet image that satisfies the condition. The triplet selection condition can also be designed according to the actual needs of the identity authentication process; the selection method is not unique.
Step S190: input the triplet images that satisfy the condition into the CNN model for training, obtaining the trained Triplets CNN model.
In this embodiment, the selected triplet images that satisfy the condition are input into the CNN model (as shown in FIG. 4), and training yields the trained Triplets CNN model. The Triplets CNN model (i.e., the Triplets deep convolutional neural network model) mainly consists of convolutional layers, pooling layers, a fully connected layer (average pooling layer), and a Triplets loss layer. In general, the number of convolutional layers varies and can be adjusted according to actual needs; each convolutional layer is followed by a pooling layer for local averaging and subsampling, convolution and subsampling alternate continuously, and the fully connected layer finally produces the output. During identity authentication, the triplet (i.e., the ID card image, face visible light image, and face near-infrared light image of the person to be tested) is fed into the Triplets CNN network model, and after many iterations the Triplets loss is made as small as possible and converges. Finally, when extracting the features of a face image, the output of the last fully connected layer (average pooling layer) of the Triplets CNN is taken; according to the trained Triplets CNN model, the fully connected layer (average pooling layer) features of the ID card photo, the near-infrared face image, and the visible light face image are extracted respectively.
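The final feature-extraction step, taking the output of the last fully connected (average pooling) layer as the embedding, can be illustrated as follows. This is a sketch assuming a (H, W, C) convolutional feature map is already available from the earlier layers; the L2 normalization is an added assumption, not stated in the patent, that makes the embeddings directly comparable by cosine similarity.

```python
import numpy as np

def embedding_from_feature_map(feature_map):
    """Global average pooling over the spatial dimensions of a (H, W, C)
    convolutional feature map, followed by L2 normalization of the
    resulting C-dimensional vector."""
    v = feature_map.mean(axis=(0, 1))          # average pooling -> (C,)
    return v / (np.linalg.norm(v) + 1e-12)     # L2-normalize

emb = embedding_from_feature_map(np.random.rand(7, 7, 128))
print(emb.shape)  # (128,)
```

The same pooling is applied to the ID card photo, the near-infrared face image, and the visible light face image, so all three end up in one comparable embedding space.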
In one embodiment, as shown in FIG. 5, where the first reference sample triplet image, the second reference sample triplet image, and the third reference sample triplet image are the triplet images that satisfy the condition, the step of selecting, from the triplet images, triplet images that satisfy the preset triplet selection condition includes:
Step S181: select any ID card image from the reference sample images as the first reference sample, select from the same-class sample images the face near-infrared light image farthest from the first reference sample, and select from the different-class sample images the face near-infrared light image closest to the first reference sample, generating the first reference sample triplet image;
Step S182: select any face near-infrared light image from the reference sample images as the second reference sample, select from the same-class sample images the face visible light image farthest from the second reference sample, and select from the different-class sample images the face visible light image closest to the second reference sample, generating the second reference sample triplet image;
Step S183: select any face visible light image from the reference sample images as the third reference sample, select from the same-class sample images the ID card image farthest from the third reference sample, and select from the different-class sample images the ID card image closest to the third reference sample, generating the third reference sample triplet image.
In this embodiment, the ID card image is denoted IDPIC, the face near-infrared image NIR, and the visible light image VIS. The steps of selecting, from the triplet images, triplet images that satisfy the preset triplet selection condition are as follows:
(1) IDPIC-NIR: select an ID card photo from the anchor sample images as the first reference sample; in Euclidean space, select from the positive sample images the near-infrared face image farthest from the ID card photo, forming the hard anchor-positive, and select from the negative sample images the near-infrared face image closest to the ID card photo, forming the hard anchor-negative, as shown in FIG. 6a. The anchor, hard anchor-positive, and hard anchor-negative constitute the first reference sample triplet image. NIR-IDPIC is handled in the same way.
(2) NIR-VIS: select a near-infrared image from the anchor sample images as the second reference sample; in Euclidean space, select from the positive sample images the visible light face image farthest from the near-infrared image, forming the hard anchor-positive, and select from the negative sample images the visible light face image closest to the near-infrared image, forming the hard anchor-negative, as shown in FIG. 6b. The anchor, hard anchor-positive, and hard anchor-negative constitute the second reference sample triplet image. VIS-NIR is handled in the same way.
(3) VIS-IDPIC: select a visible light image from the anchor sample images as the third reference sample; in Euclidean space, select from the positive sample images the ID card photo farthest from the visible light image, forming the hard anchor-positive, and select from the negative sample images the ID card photo closest to the visible light image, forming the hard anchor-negative, as shown in FIG. 6c. The anchor, hard anchor-positive, and hard anchor-negative constitute the third reference sample triplet image. IDPIC-VIS is handled in the same way.
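The hard anchor-positive / hard anchor-negative selection described in (1) to (3) above can be sketched as follows, assuming each sample is already represented by an embedding row vector and using Euclidean distance:

```python
import numpy as np

def mine_hard_triplet(anchor, positives, negatives):
    """For one anchor embedding, pick the index of the farthest same-class
    sample (hard positive) and of the closest different-class sample
    (hard negative), in Euclidean space."""
    d_pos = np.linalg.norm(positives - anchor, axis=1)
    d_neg = np.linalg.norm(negatives - anchor, axis=1)
    return int(np.argmax(d_pos)), int(np.argmin(d_neg))

anchor = np.array([0.0, 0.0])                     # e.g. an ID card photo embedding
positives = np.array([[1.0, 0.0], [3.0, 0.0]])    # same identity (e.g. NIR images)
negatives = np.array([[5.0, 0.0], [2.0, 0.0]])    # other identities
print(mine_hard_triplet(anchor, positives, negatives))  # (1, 1)
```

The returned pair of indices identifies the hard anchor-positive and hard anchor-negative that, together with the anchor, form one selected triplet.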
In one embodiment, the step of inputting the triplet images that satisfy the condition into the CNN model for training to obtain the trained Triplets CNN model includes:
Step S191: input the triplet images that satisfy the condition into the CNN model for learning and training through multiple convolutional layers and pooling layers, and compute the Triplets objective loss value; when the Triplets objective loss value converges, the trained Triplets CNN model is obtained.
In this embodiment, the triplet images that satisfy the condition are input into the CNN model for learning and training through multiple convolutional layers and pooling layers, and the Triplets objective loss value is computed; when the Triplets objective loss value converges, the trained Triplets CNN model is obtained. The numbers of convolutional layers and pooling layers can be adjusted according to the needs of the actual image processing; each convolutional layer is followed by a pooling layer for local averaging and subsampling, convolution and subsampling alternate continuously, and the fully connected layer finally produces the output.
In one embodiment, as shown in FIG. 7, the step of calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image includes:
Step S131: calculate the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image according to the following formula:
sim(I_1, I_2) = ( Σ_{k=1..n} f_{1k} · f_{2k} ) / ( √(Σ_{k=1..n} f_{1k}^2) · √(Σ_{k=1..n} f_{2k}^2) )
where sim(I_1, I_2) denotes the similarity between the image with feature vector I_1 and the image with feature vector I_2, n is the dimensionality of the feature vectors, f_{1k} is the k-th element of feature vector I_1, and f_{2k} is the k-th element of feature vector I_2.
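The formula above is the cosine similarity of the two feature vectors; a direct implementation might look like this:

```python
import numpy as np

def similarity(f1, f2):
    """Cosine similarity between two feature vectors, per the formula above:
    dot product divided by the product of the vector norms."""
    num = float(np.sum(f1 * f2))
    den = float(np.sqrt(np.sum(f1 ** 2)) * np.sqrt(np.sum(f2 ** 2)))
    return num / den

v = np.array([0.3, 0.5, 0.2])
print(round(similarity(v, v), 6))  # 1.0
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, so higher values indicate that two images are more likely to show the same person.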
Specifically, in one embodiment, as shown in FIG. 7, the step of judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities and outputting the identity authentication result includes:
Step S141: when the similarity between the ID card image and the face visible light image, or the similarity between the ID card image and the face near-infrared light image, is greater than a preset threshold, determine that the identity authentication of the person to be authenticated passes.
In this embodiment, after the pairwise similarities among the ID card image, the face visible light image, and the face near-infrared light image have been calculated from their feature vectors, the following decision output is used:
result = 1 if sim(IDPIC, VIS) > T or sim(IDPIC, NIR) > T; result = 0 otherwise
If result is 1, the identities of the ID card photo, the face visible light image, and the face near-infrared light image are consistent and identity authentication passes; if result is 0, the identities of the three images are inconsistent and identity authentication fails. Here T is the preset threshold, IDPIC denotes the ID card photo, NIR denotes the face near-infrared light image, and VIS denotes the face visible light image.
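The decision rule can be written directly as follows; the threshold value 0.75 is purely illustrative, since the patent leaves T as a preset value:

```python
def authenticate(sim_id_vis, sim_id_nir, threshold=0.75):
    """Returns 1 (identities consistent, authentication passes) when either
    the ID-vs-visible-light or the ID-vs-near-infrared similarity exceeds
    the preset threshold T; otherwise returns 0."""
    return 1 if (sim_id_vis > threshold or sim_id_nir > threshold) else 0

print(authenticate(0.91, 0.40))  # 1
print(authenticate(0.42, 0.38))  # 0
```

Using either comparison is what lets the near-infrared channel rescue cases where the visible light image was degraded by illumination, pose, or expression.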
In accordance with the above identity authentication method of the present invention, the present invention further provides an identity authentication system, which is described in detail below with reference to the accompanying drawings and preferred embodiments.
FIG. 8 is a schematic structural diagram of the identity authentication system of the present invention in one embodiment. As shown in FIG. 8, the identity authentication system in this embodiment includes:
an image obtaining module 10, configured to obtain an ID card image, a face visible light image, and a face near-infrared light image of a person to be authenticated;
a feature vector obtaining module 20, configured to input the ID card image, the face visible light image, and the face near-infrared light image into the pre-trained Triplets CNN model and to extract their convolutional features to obtain the corresponding feature vectors;
a similarity calculating module 30, configured to calculate, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
an identity authentication determining module 40, configured to judge the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and to output the identity authentication result.
In one embodiment, the identity authentication system further includes:
a quality evaluation module 50, configured to evaluate the quality of the ID card image, the face visible light image, and the face near-infrared light image using a quality evaluation algorithm;
an image preprocessing module 60, configured to perform image preprocessing on the ID card image, the face visible light image, and the face near-infrared light image when their quality meets the requirements.
In one embodiment, the identity authentication system further includes:
a training set triplet construction module 70, configured to obtain in advance multiple ID card images and the face visible light images and face near-infrared light images corresponding to the ID card images, and to construct the triplet images of the training set from the multiple ID card images, face visible light images, and face near-infrared light images, the triplet images including reference sample images, same-class sample images, and different-class sample images;
a triplet image selection module 80, configured to select, from the triplet images, triplet images that satisfy a preset triplet selection condition;
a Triplets CNN model training module 90, configured to input the triplet images that satisfy the condition into the CNN model for training, obtaining the trained Triplets CNN model.
In one embodiment, in the identity authentication system:
the similarity calculating module 30 is further configured to calculate the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image according to the following formula:
sim(I_1, I_2) = ( Σ_{k=1..n} f_{1k} · f_{2k} ) / ( √(Σ_{k=1..n} f_{1k}^2) · √(Σ_{k=1..n} f_{2k}^2) )
where sim(I_1, I_2) denotes the similarity between the image with feature vector I_1 and the image with feature vector I_2, n is the dimensionality of the feature vectors, f_{1k} is the k-th element of feature vector I_1, and f_{2k} is the k-th element of feature vector I_2.
In one embodiment, where the first reference sample triplet image, the second reference sample triplet image, and the third reference sample triplet image are triplet images that satisfy the condition, the triplet image selection module 80 further includes:
a first reference sample triplet image generation module 81, configured to select any ID card image from the reference sample images as the first reference sample, select from the same-class sample images the face near-infrared light image farthest from the first reference sample, and select from the different-class sample images the face near-infrared light image closest to the first reference sample, generating the first reference sample triplet image;
a second reference sample triplet image generation module 82, configured to select any face near-infrared light image from the reference sample images as the second reference sample, select from the same-class sample images the face visible light image farthest from the second reference sample, and select from the different-class sample images the face visible light image closest to the second reference sample, generating the second reference sample triplet image;
a third reference sample triplet image generation module 83, configured to select any face visible light image from the reference sample images as the third reference sample, select from the same-class sample images the ID card image farthest from the third reference sample, and select from the different-class sample images the ID card image closest to the third reference sample, generating the third reference sample triplet image.
In one embodiment, in the identity authentication system:
the Triplets CNN model training module 90 is further configured to input the triplet images that satisfy the condition into the CNN model for learning and training through multiple convolutional layers and pooling layers, and to compute the Triplets objective loss value; when the Triplets objective loss value converges, the trained Triplets CNN model is obtained.
The above identity authentication system can perform the identity authentication method provided by the embodiments of the present invention, and possesses the corresponding functional modules and beneficial effects. For the processing performed by each functional module, for example the image obtaining module 10, the feature vector obtaining module 20, the similarity calculating module 30, the identity authentication determining module 40, and the Triplets CNN model training module 90, refer to the description in the method embodiments above; it is not repeated here.
In accordance with the above identity authentication method and system of the present invention, the present invention further provides a computer readable storage medium, which is described in detail below with reference to the accompanying drawings and preferred embodiments.
The computer readable storage medium in the embodiments of the present invention stores a computer program which, when executed by a processor, can implement all the method steps in the method embodiments of the present invention.
A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing related hardware. The program can be stored in a computer readable storage medium and, when executed, can include the processes of the embodiments of the above methods. The storage medium can be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The above computer readable storage medium stores the program (instructions) of the identity authentication method provided by the embodiments of the present invention; executing the program performs the identity authentication method provided by the embodiments of the present invention and yields the corresponding beneficial effects. Refer to the description in the method embodiments above; it is not repeated here.
In accordance with the above identity authentication method and system of the present invention, the present invention further provides a device for identity authentication; the computer apparatus of the present invention is described in detail below with reference to the accompanying drawings and preferred embodiments.
FIG. 9 is a schematic structural diagram of the identity authentication device of the present invention in one embodiment. As shown in FIG. 9, the identity authentication device in this embodiment includes a binocular camera 901, an ID card image collection device 902, a processor 903, and a display screen 904;
the binocular camera 901 is configured to collect a face visible light image and a face near-infrared light image of the person to be authenticated;
the ID card image collection device 902 is configured to collect an ID card image of the person to be authenticated;
the processor 903 is configured to perform the following steps:
obtain an ID card image, a face visible light image, and a face near-infrared light image of the person to be authenticated;
input the ID card image, the face visible light image, and the face near-infrared light image into the pre-trained Triplets CNN model, and extract the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain the corresponding feature vectors;
calculate, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
judge the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and output the identity authentication result;
the display screen 904 is configured to display the identity authentication result.
The above identity authentication device can simultaneously collect the face visible light image and the face near-infrared light image of the person to be authenticated with the binocular camera, collect the ID card image of the person to be authenticated with the ID card image collection device, and then transmit the collected images to the processor; the processor uses the Triplets CNN model to calculate the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image, judges the consistency of the three images according to the similarities, and outputs the identity authentication result. The identity authentication device is convenient to use and can perform identity authentication quickly.
In one specific embodiment, the identity authentication device includes a housing 100, a display screen 200 mounted on the side of the housing 100, a binocular camera 300 mounted on the upper end of the side of the housing 100, an ID card image collection device 400 mounted on the lower end of the side of the housing 100, and a processor mounted inside the housing 100 (not shown in the figure). In this identity authentication device, the binocular camera 300 is disposed at the upper end of the side of the housing 100 to facilitate collecting the face image of the person to be authenticated, and the ID card image collection device 400 is disposed at the lower end of the side of the housing 100 so that the person to be authenticated can conveniently place the ID card and have its image collected. The identity authentication device has a simple structure and is convenient to use.
Further, the display screen 200 is a touch display screen, which is convenient for operating the identity authentication device, for example selecting re-collection of a face image.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this invention patent shall be subject to the appended claims.

Claims (10)

  1. A method for identity authentication, characterized by comprising the following steps:
    obtaining an ID card image, a face visible light image, and a face near-infrared light image of a person to be authenticated;
    inputting the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and extracting convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain corresponding feature vectors;
    calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
    judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputting an identity authentication result.
  2. The method for identity authentication according to claim 1, characterized in that, before the step of inputting the ID card image, the face visible light image, and the face near-infrared light image into the pre-trained Triplets CNN model and extracting the convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain the corresponding feature vectors, the method further comprises:
    evaluating the quality of the ID card image, the face visible light image, and the face near-infrared light image using a quality evaluation algorithm;
    when the quality of the ID card image, the face visible light image, and the face near-infrared light image meets the requirements, performing image preprocessing on the ID card image, the face visible light image, and the face near-infrared light image.
  3. The method for identity authentication according to claim 1, characterized in that the pre-trained Triplets CNN model is obtained through the following steps:
    obtaining in advance multiple ID card images and the face visible light images and face near-infrared light images corresponding to the ID card images, and constructing triplet images of a training set from the multiple ID card images, face visible light images, and face near-infrared light images, the triplet images comprising reference sample images, same-class sample images, and different-class sample images;
    selecting, from the triplet images, triplet images that satisfy a preset triplet selection condition;
    inputting the triplet images that satisfy the condition into a CNN model for training, obtaining the trained Triplets CNN model.
  4. The method for identity authentication according to claim 3, characterized in that a first reference sample triplet image, a second reference sample triplet image, and a third reference sample triplet image are triplet images that satisfy the condition, and the step of selecting, from the triplet images, triplet images that satisfy the preset triplet selection condition comprises:
    selecting any ID card image from the reference sample images as a first reference sample, selecting from the same-class sample images the face near-infrared light image farthest from the first reference sample, and selecting from the different-class sample images the face near-infrared light image closest to the first reference sample, generating the first reference sample triplet image;
    selecting any face near-infrared light image from the reference sample images as a second reference sample, selecting from the same-class sample images the face visible light image farthest from the second reference sample, and selecting from the different-class sample images the face visible light image closest to the second reference sample, generating the second reference sample triplet image;
    selecting any face visible light image from the reference sample images as a third reference sample, selecting from the same-class sample images the ID card image farthest from the third reference sample, and selecting from the different-class sample images the ID card image closest to the third reference sample, generating the third reference sample triplet image.
  5. The method for identity authentication according to claim 3, characterized in that the step of inputting the triplet images that satisfy the condition into the CNN model for training to obtain the trained Triplets CNN model comprises:
    inputting the triplet images that satisfy the condition into the CNN model for learning and training through multiple convolutional layers and pooling layers, and computing the Triplets objective loss value; when the Triplets objective loss value converges, the trained Triplets CNN model is obtained.
  6. The method for identity authentication according to claim 1, characterized in that the step of calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image comprises:
    calculating the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image according to the following formula:
    sim(I_1, I_2) = ( Σ_{k=1..n} f_{1k} · f_{2k} ) / ( √(Σ_{k=1..n} f_{1k}^2) · √(Σ_{k=1..n} f_{2k}^2) )
    where sim(I_1, I_2) denotes the similarity between the image with feature vector I_1 and the image with feature vector I_2, n is the dimensionality of the feature vectors, f_{1k} is the k-th element of feature vector I_1, and f_{2k} is the k-th element of feature vector I_2.
  7. The method for identity authentication according to claim 1 or 6, characterized in that the step of judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities and outputting the identity authentication result comprises:
    when the similarity between the ID card image and the face visible light image, or the similarity between the ID card image and the face near-infrared light image, is greater than a preset threshold, determining that the identity authentication of the person to be authenticated passes.
  8. A system for identity authentication, characterized by comprising:
    an image obtaining module, configured to obtain an ID card image, a face visible light image, and a face near-infrared light image of a person to be authenticated;
    a feature vector obtaining module, configured to input the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and to extract convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain corresponding feature vectors;
    a similarity calculating module, configured to calculate, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
    an identity authentication determining module, configured to judge the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and to output an identity authentication result.
  9. A computer readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
  10. A device for identity authentication, characterized by comprising a display screen, a binocular camera, an ID card image collection device, and a processor;
    the binocular camera is configured to collect a face visible light image and a face near-infrared light image of a person to be authenticated;
    the ID card image collection device is configured to collect an ID card image of the person to be authenticated;
    the processor is configured to perform the following steps:
    obtaining an ID card image, a face visible light image, and a face near-infrared light image of the person to be authenticated;
    inputting the ID card image, the face visible light image, and the face near-infrared light image into a pre-trained Triplets CNN model, and extracting convolutional features of the ID card image, the face visible light image, and the face near-infrared light image to obtain corresponding feature vectors;
    calculating, from the feature vectors, the similarity between any two of the ID card image, the face visible light image, and the face near-infrared light image;
    judging the consistency of the ID card image, the face visible light image, and the face near-infrared light image according to the similarities, and outputting an identity authentication result;
    the display screen is configured to display the identity authentication result.
PCT/CN2018/093787 2017-08-01 2018-06-29 Method, system and device for identity authentication WO2019024636A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710647019.9A CN107577987A (zh) 2017-08-01 2017-08-01 Method, system and device for identity authentication
CN201710647019.9 2017-08-01

Publications (1)

Publication Number Publication Date
WO2019024636A1 true WO2019024636A1 (zh) 2019-02-07

Family

ID=61035777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/093787 WO2019024636A1 (zh) 2017-08-01 2018-06-29 Method, system and device for identity authentication

Country Status (2)

Country Link
CN (1) CN107577987A (zh)
WO (1) WO2019024636A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414454A * 2019-07-31 2019-11-05 南充折衍智能光电科技有限公司 Person-ID unification recognition system based on machine vision
CN110956080A * 2019-10-14 2020-04-03 北京海益同展信息科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN111008550A * 2019-09-06 2020-04-14 上海芯灵科技有限公司 Finger vein identity verification method based on a multiple-loss function
CN112699803A * 2020-12-31 2021-04-23 竹间智能科技(上海)有限公司 Face recognition method, system, device, and readable storage medium
CN113158712A * 2020-08-07 2021-07-23 西安天和防务技术股份有限公司 Personnel management system
CN114049289A * 2021-11-10 2022-02-15 合肥工业大学 Near-infrared to visible light face image synthesis method based on contrastive learning and StyleGAN2

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577987A (zh) * 2017-08-01 2018-01-12 广州广电卓识智能科技有限公司 Method, system and device for identity authentication
CN108197563B * 2017-12-29 2022-03-11 百度在线网络技术(北京)有限公司 Method and apparatus for obtaining information
CN108416326B * 2018-03-27 2021-07-16 百度在线网络技术(北京)有限公司 Face recognition method and apparatus
CN108460366A * 2018-03-27 2018-08-28 百度在线网络技术(北京)有限公司 Identity authentication method and apparatus
CN108446666A * 2018-04-04 2018-08-24 平安科技(深圳)有限公司 Dual-channel neural network model training and face comparison method, terminal, and medium
CN108961485A * 2018-05-07 2018-12-07 金联汇通信息技术有限公司 Smart door lock, and identity verification method and apparatus
CN108922542B * 2018-06-01 2023-04-28 平安科技(深圳)有限公司 Method and apparatus for obtaining sample triplets, computer device, and storage medium
CN109145991B * 2018-08-24 2020-07-31 北京地平线机器人技术研发有限公司 Image group generation method, image group generation apparatus, and electronic device
CN109325448A * 2018-09-21 2019-02-12 广州广电卓识智能科技有限公司 Face recognition method, apparatus, and computer device
CN109089052B * 2018-10-18 2020-09-01 浙江宇视科技有限公司 Method and apparatus for verifying a target object
CN109753934A * 2019-01-09 2019-05-14 中控智慧科技股份有限公司 Method and apparatus for identifying image authenticity
CN112199975A * 2019-07-08 2021-01-08 中国移动通信集团浙江有限公司 Identity verification method and apparatus based on facial features
CN110765933A * 2019-10-22 2020-02-07 山西省信息产业技术研究院有限公司 Dynamic portrait perception and comparison method applied to a driver identity authentication system
CN112036277B * 2020-08-20 2023-09-29 浙江大华技术股份有限公司 Face recognition method, electronic device, and computer readable storage medium
CN117172783A * 2023-07-17 2023-12-05 湖北盈嘉集团有限公司 Cross-verification system for confirming creditor and debtor rights in accounts receivable

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608450A * 2016-03-01 2016-05-25 天津中科智能识别产业技术研究院有限公司 Heterogeneous face recognition method based on a deep convolutional neural network
US20160180151A1 * 2014-12-17 2016-06-23 Google Inc. Generating numeric embeddings of images
CN106203533A * 2016-07-26 2016-12-07 厦门大学 Deep-learning face verification method based on mixed training
CN106339695A * 2016-09-20 2017-01-18 北京小米移动软件有限公司 Face similarity detection method, apparatus, and terminal
CN106780906A * 2016-12-28 2017-05-31 北京品恩科技股份有限公司 Person-ID unification recognition method and system based on a deep convolutional neural network
CN107577987A (zh) * 2017-08-01 2018-01-12 广州广电卓识智能科技有限公司 Method, system and device for identity authentication

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976356A * 2010-09-30 2011-02-16 惠州市华阳多媒体电子有限公司 Face recognition method and system for internet cafe real-name registration
CN105023005B * 2015-08-05 2018-12-07 王丽婷 Face recognition device and recognition method thereof
CN106778607A * 2016-12-15 2017-05-31 国政通科技股份有限公司 Apparatus and method for authenticating the identity between a person and an ID card based on face recognition

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180151A1 * 2014-12-17 2016-06-23 Google Inc. Generating numeric embeddings of images
CN105608450A * 2016-03-01 2016-05-25 天津中科智能识别产业技术研究院有限公司 Heterogeneous face recognition method based on a deep convolutional neural network
CN106203533A * 2016-07-26 2016-12-07 厦门大学 Deep-learning face verification method based on mixed training
CN106339695A * 2016-09-20 2017-01-18 北京小米移动软件有限公司 Face similarity detection method, apparatus, and terminal
CN106780906A * 2016-12-28 2017-05-31 北京品恩科技股份有限公司 Person-ID unification recognition method and system based on a deep convolutional neural network
CN107577987A (zh) * 2017-08-01 2018-01-12 广州广电卓识智能科技有限公司 Method, system and device for identity authentication

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414454A * 2019-07-31 2019-11-05 南充折衍智能光电科技有限公司 Person-ID unification recognition system based on machine vision
CN111008550A * 2019-09-06 2020-04-14 上海芯灵科技有限公司 Finger vein identity verification method based on a multiple-loss function
CN110956080A * 2019-10-14 2020-04-03 北京海益同展信息科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110956080B * 2019-10-14 2023-11-03 京东科技信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113158712A * 2020-08-07 2021-07-23 西安天和防务技术股份有限公司 Personnel management system
CN112699803A * 2020-12-31 2021-04-23 竹间智能科技(上海)有限公司 Face recognition method, system, device, and readable storage medium
CN112699803B * 2020-12-31 2024-01-16 竹间智能科技(上海)有限公司 Face recognition method, system, device, and readable storage medium
CN114049289A * 2021-11-10 2022-02-15 合肥工业大学 Near-infrared to visible light face image synthesis method based on contrastive learning and StyleGAN2
CN114049289B * 2021-11-10 2024-03-05 合肥工业大学 Near-infrared to visible light face image synthesis method based on contrastive learning and StyleGAN2

Also Published As

Publication number Publication date
CN107577987A (zh) 2018-01-12

Similar Documents

Publication Publication Date Title
WO2019024636A1 (zh) Method, system and device for identity authentication
WO2019128367A1 (zh) Face authentication method and apparatus based on Triplet Loss, computer device, and storage medium
US9785823B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
WO2016150240A1 (zh) Identity authentication method and apparatus
CN105740779B (zh) Method and apparatus for face liveness detection
CN108229427A (zh) Identity security verification method and system based on identity documents and face recognition
Raposo et al. UBEAR: A dataset of ear images captured on-the-move in uncontrolled conditions
Connaughton et al. Fusion of face and iris biometrics
Lee et al. Dorsal hand vein recognition based on 2D Gabor filters
WO2016084072A1 (en) Anti-spoofing system and methods useful in conjunction therewith
Ambeth Kumar et al. Exploration of an innovative geometric parameter based on performance enhancement for foot print recognition
CN105740781B (zh) Method and apparatus for three-dimensional face liveness detection
KR102554391B1 (ko) Apparatus and method for user authentication based on iris recognition
Hollingsworth et al. Iris recognition using signal-level fusion of frames from video
US9449217B1 (en) Image authentication
Bagga et al. Spoofing detection in face recognition: A review
Lee et al. Robust iris recognition baseline for the grand challenge
Pauca et al. Challenging ocular image recognition
Carney et al. A multi-finger touchless fingerprinting system: Mobile fingerphoto and legacy database interoperability
CN109409322B (zh) Liveness detection method and apparatus, face recognition method, and face detection system
Garg et al. Biometric authentication using soft biometric traits
WO2023028947A1 (zh) Contactless three-dimensional modeling method and apparatus for palm veins, and authentication method
Khan et al. Investigating linear discriminant analysis (LDA) on dorsal hand vein images
JP2010181970A (ja) Device for biometric authentication, biometric authentication device, biometric authentication system, discrimination criterion determination method, biometric authentication method, and program
Taha et al. Speeded up robust features descriptor for iris recognition systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18841458

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18841458

Country of ref document: EP

Kind code of ref document: A1