WO2020088029A1 - Liveness detection method, storage medium, and electronic device - Google Patents

Liveness detection method, storage medium, and electronic device

Info

Publication number
WO2020088029A1
Authority
WO
WIPO (PCT)
Prior art keywords
facial
image
depth map
thermal image
feature
Application number
PCT/CN2019/100261
Other languages
French (fr)
Chinese (zh)
Inventor
唐宇晨
邱迪
Original Assignee
北京三快在线科技有限公司
Application filed by 北京三快在线科技有限公司 (Beijing Sankuai Online Technology Co., Ltd.)
Publication of WO2020088029A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Definitions

  • The present disclosure relates to the field of image processing, and in particular to a liveness detection method, a storage medium, and an electronic device.
  • In the related art, schemes have been proposed for verifying a user's legal identity by collecting biometrics; the biometrics may be fingerprint characteristics, human facial characteristics, and so on.
  • Like other biological characteristics of the human body (fingerprints, irises, etc.), the human face is innate; its uniqueness and its resistance to copying provide the necessary premise for identity authentication.
  • Compared with other biometrics, face recognition is non-contact: the device can acquire a face image without the user directly touching it.
  • In practical scenarios, multiple faces can be sorted, judged, and recognized; at present, however, the accuracy of liveness detection during face recognition is low.
  • The purpose of the present disclosure is to provide a liveness detection method, storage medium, and electronic device to solve the problem of low liveness detection accuracy in face recognition in the related art.
  • To that end, the present disclosure provides a liveness detection method, the method including: obtaining a facial depth map and a facial thermal image of an object to be detected; inputting the facial depth map and the facial thermal image into a feature extraction model and extracting feature information from them through the feature extraction model; and determining, according to the feature information and a classification model, whether the object to be detected belongs to a living body category.
  • Obtaining the facial depth map and the facial thermal image of the object to be detected includes: acquiring, at the same acquisition time, an RGB image, a depth map, and a thermal image of the object; obtaining facial position coordinates from the RGB image; in the depth map, taking the area corresponding to the facial position coordinates as the facial depth map; and in the thermal image, taking the area corresponding to the facial position coordinates as the facial thermal image.
  • Optionally, the method further includes: enhancing, through histogram equalization, the contrast of one or more of the RGB image, the depth map, or the thermal image, where the contrast represents the difference between the face area and the background area.
  • Optionally, the feature information includes a feature matrix and the feature extraction model includes a convolutional neural network; extracting feature information through the feature extraction model then includes: fusing the pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image, and performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
  • Optionally, determining whether the object to be detected belongs to the living body category includes: inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category; if the probability value is greater than a probability threshold, determining that the object to be detected belongs to the living body category.
  • Optionally, the classification model includes a fully connected layer and a Softmax classification function. Inputting the feature matrix into the classification model then includes: inputting the feature matrix into the fully connected layer to output a multi-dimensional feature vector whose number of dimensions corresponds to the number of categories of the Softmax classification function, and obtaining the probability value that the fused facial image belongs to the living body category according to the formula a_i = e^{z_i} / Σ_j e^{z_j}, where a_i represents the probability value of the i-th category of the Softmax classification function and z_i is the i-th value in the multi-dimensional feature vector.
  • Optionally, fusing the pixels at corresponding positions includes: taking the weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image as the value of the pixel at the corresponding position in the fused facial image, where the second pixel corresponds in position to the first pixel and the target image channel is any image channel of the facial thermal image.
  • The present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the liveness detection method above: obtaining a facial depth map and a facial thermal image of an object to be detected, extracting feature information from them through a feature extraction model, and determining, according to the feature information and a classification model, whether the object belongs to a living body category.
  • When executed by the processor, the program also implements the steps of any of the optional implementations described above (simultaneous acquisition of the RGB image, depth map, and thermal image; histogram-equalization contrast enhancement; pixel fusion and feature extraction through the convolutional neural network; and classification through the fully connected layer and Softmax function).
  • The present disclosure further provides an electronic device including a memory on which a computer program is stored and a processor configured to execute the computer program in the memory to implement the steps of the liveness detection method above, again including any of the optional implementations described for the method.
  • Fig. 1 is a flowchart of a liveness detection method according to an exemplary embodiment.
  • Fig. 2 is a flowchart of another liveness detection method according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram of the convolutional neural network referenced in Fig. 2.
  • Fig. 4 is a schematic diagram of the convolution operation referenced in Fig. 2.
  • Fig. 5 is a block diagram of a liveness detection device according to an exemplary embodiment.
  • Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment.
  • The face image of the object to be detected can be collected by photographing the object, and legal identity authentication of the object can then be performed from that face image.
  • Because the authentication technology in the related art is based on 2D (two-dimensional) images, it cannot accurately determine whether the object to be detected is a living body.
  • For example, criminals can imitate the facial features of a legitimate subject with masks or three-dimensional face models and thereby pass authentication while disguised as that subject; the machine then reports that authentication has passed, allowing the criminals to steal the legitimate subject's account information. The authentication technology in the related art therefore has low accuracy and poor security.
  • The embodiments of the present disclosure provide a liveness detection method to improve the accuracy of liveness detection during face recognition.
  • Fig. 1 is a flowchart of a liveness detection method according to an exemplary embodiment. The method can be applied to devices with facial recognition functions, such as mobile terminals, cash dispensers, and other electronic devices, and includes the following steps.
  • In S11, the electronic device obtains a facial depth map and a facial thermal image of the object to be detected.
  • The depth map (DepthMap) is an image or image channel containing information about the distance from the viewpoint to the surface of the object to be detected; each pixel value of the depth map is the actual distance from the sensor to the object.
  • The thermal image, also called an infrared thermal image, reflects the temperature distribution on the surface of the object.
  • In some embodiments, acquiring the facial depth map and facial thermal image includes: acquiring, at the same acquisition time, an RGB image, a depth map, and a thermal image of the object to be detected; obtaining facial position coordinates from the RGB image; and mapping the areas of the facial position coordinates onto the depth map and the thermal image to obtain the facial depth map and the facial thermal image, respectively.
  • Acquiring the RGB image, depth map, and thermal image at the same time means that the electronic device captures the object at the same acquisition moment to obtain all three images.
  • Mapping the facial position coordinates means that, in the depth map, the area corresponding to the facial position coordinates is taken as the facial depth map, and in the thermal image, the area corresponding to the facial position coordinates is taken as the facial thermal image.
  • For example, the electronic device can simultaneously obtain the RGB image, depth map, and thermal image of the object through a 3D camera.
  • The RGB image offers better facial recognizability, so the facial position coordinates of the object can be obtained from the RGB image.
  • The three images can also be obtained from a video, in which case they can correspond to the same frame of the video file.
  • Because the three images are captured at the same moment, their pixels are in one-to-one correspondence, which facilitates aligning the pixel coordinates of the three images; a sketch of this cropping step follows below.
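  • A minimal sketch of this acquisition step, assuming the three images are already pixel-aligned. The patent does not specify a face detector, so the OpenCV Haar cascade used here is only one possible way of obtaining the facial position coordinates:

```python
import cv2

def extract_face_regions(rgb, depth, thermal):
    """Locate the face in the RGB image, then crop the same pixel-aligned
    region out of the depth map and the thermal image."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face in this frame
    x, y, w, h = faces[0]  # facial position coordinates from the RGB image
    # The three images were captured at the same moment and are pixel-aligned,
    # so the same coordinates index the depth map and the thermal image.
    face_depth = depth[y:y + h, x:x + w]
    face_thermal = thermal[y:y + h, x:x + w]
    return face_depth, face_thermal
```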
  • The three images can be preprocessed by histogram equalization to enhance the contrast between the face area and the background area, thereby highlighting the effective information corresponding to facial features and improving the accuracy of subsequent feature extraction.
  • Specifically, the preprocessing may include the following step: the electronic device enhances, through histogram equalization, the contrast of one or more of the RGB image, the depth map, or the thermal image, where the contrast represents the difference between the face area and the background area; a sketch of this step follows below.
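  • As an illustration, a minimal histogram-equalization sketch (the patent does not prescribe a specific implementation; the 8-bit rescaling of single-channel inputs and the luminance-only treatment of color images below are assumptions):

```python
import cv2
import numpy as np

def equalize_contrast(img):
    """Histogram equalization: spreads the intensity histogram so that the
    face area stands out from the background."""
    if img.ndim == 2:  # depth map or single-channel thermal image
        # equalizeHist expects 8-bit input, so rescale first
        img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        return cv2.equalizeHist(img8)
    # For a color image, equalize only the luminance channel so that hues
    # are preserved while the face/background contrast is enhanced.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```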
  • In S12, the electronic device inputs the facial depth map and the facial thermal image into a feature extraction model and extracts feature information from them through the feature extraction model.
  • Facial features such as the highest point of the face, the lowest point of the face, and the relative positions of those points can be obtained from the depth map.
  • The temperature distribution across the face also varies; for example, in winter the temperature near the mouth and nose is high while the temperature near the edge of the face is low.
  • These facial features follow certain patterns, and preset living-body facial feature information can be generated according to those patterns.
  • The feature information may include a feature matrix, and the feature extraction model may include a convolutional neural network.
  • In that case, the feature extraction in S12 may include the following steps: the electronic device fuses the pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image, and then performs feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
  • The feature matrix can then be input into a decision model for the category corresponding to the feature, that is, a preset classification function model used in conjunction with the pre-trained convolutional neural network; this decision model is also referred to as the "classification model". By using the classification model together with the convolutional neural network, it is possible to determine whether the object to be detected belongs to the living body category.
  • In S13, the electronic device determines, according to the feature information and the decision model of the category corresponding to the feature (the classification model), whether the object to be detected belongs to the living body category.
  • Further, when making this determination, the electronic device may perform the following steps: input the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category; if the probability value is greater than a probability threshold, determine that the object belongs to the living body category.
  • The classification model includes preset living-body facial feature information, and the extracted feature information can be compared against it: if the degree of matching between the extracted feature information and the preset living-body facial feature information is greater than a preset threshold, the object to be detected is determined to be a living body; otherwise, it is determined not to be a living body.
  • A non-living determination means that this detection attempt is illegitimate; in that case, when the facial-recognition operation is performed, the object may be prohibited from accessing the account, or an alarm message may be generated.
  • By obtaining a facial depth map and a facial thermal image of the object to be detected, extracting feature information from them through a preset feature extraction model, and determining from that feature information and the classification model whether the object belongs to the living body category, the accuracy of liveness detection is improved, and intrusions in which a mask or facial model impersonates a legitimate person can be identified.
  • Compared with liveness detection performed solely by thermal imaging, the above technical solution improves stability and robustness.
  • The above technical solution also requires no special cooperation from the subject, which improves the user experience and the efficiency of liveness detection during facial recognition.
  • Fig. 2 is a flowchart of another liveness detection method according to an exemplary embodiment. The method can likewise be applied to devices with facial recognition functions, such as mobile terminals, cash dispensers, and other electronic devices, and includes the following steps.
  • The electronic device obtains a facial depth map and a facial thermal image of the object to be detected. The definitions of the depth map and thermal image, the simultaneous acquisition of the RGB image, depth map, and thermal image, the extraction of facial position coordinates from the RGB image, the pixel correspondence among the three images, and the histogram-equalization preprocessing are as described for Fig. 1 above.
  • The electronic device then fuses the pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image.
  • Specifically, the value obtained by the weighted average of the value of a first pixel in the facial depth map and the value of the position-corresponding second pixel in a target image channel of the facial thermal image is taken as the value of the pixel at that position in the fused facial image, where the target image channel is any image channel of the facial thermal image.
  • Consistent with this weighted average, the fused pixel C(i, j) can be obtained by the formula C(i, j) = w_A(i, j) · A(i, j) + w_B(i, j) · B(i, j), where A(i, j) and B(i, j) denote the pixel values of the facial depth map and of the target channel of the facial thermal image, and w_A(i, j) + w_B(i, j) = 1.
  • By adjusting the weight w_A(i, j) of the depth map and the weight w_B(i, j) of the facial thermal image in the fused image, the shares of the depth-distribution and temperature-distribution factors in the facial feature matrix can be tuned flexibly, so that the extracted feature matrix more clearly reflects the facial features of the object under test.
  • The fused facial image retains the useful information of the depth map and the thermal image to the maximum extent, increasing the information content of the resulting image and facilitating more accurate, reliable, and comprehensive acquisition of image features; a fusion sketch follows below.
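  • A minimal sketch of this fusion under the assumption of constant weights (the formula above allows per-pixel weights, which would simply be arrays of the same shape as the images):

```python
import numpy as np

def fuse(face_depth, face_thermal_channel, w_a=0.5):
    """Pixel-wise weighted average C(i,j) = w_A*A(i,j) + w_B*B(i,j) with
    w_A + w_B = 1; both inputs must be pixel-aligned and the same shape.
    w_a sets the share of the depth information in the fused image."""
    a = face_depth.astype(np.float32)
    b = face_thermal_channel.astype(np.float32)  # any one thermal channel
    return w_a * a + (1.0 - w_a) * b
```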
  • The electronic device extracts the feature matrix of the fused facial image through a pre-trained convolutional neural network model.
  • A convolutional neural network is a multi-layer neural network that includes convolutional layers and pooling layers. Its artificial neurons respond to surrounding units, progressively reducing the dimensionality of image-recognition problems that involve huge amounts of data, so that the trained network can perform classification, localization, detection, and other functions. The convolutional neural network is trained by differentiating the nodes of the hidden layers with the chain rule, that is, by back-propagating gradients.
  • Fig. 3 is a schematic diagram of the structure of such a neural network provided by an embodiment of the present disclosure, running from the input layer through the hidden layers to the output layer.
  • The hidden layers comprise several different layers (convolutional layers, pooling layers, activation function layers, fully connected layers, etc.).
  • Feature extraction is completed by the downsampling of the convolutional layers, producing a feature matrix; see the operation schematic of Fig. 4 provided by an embodiment of the present disclosure.
  • The fused facial image may correspond to the 7×7 input matrix in the figure, each entry of which is a source pixel.
  • A 3×3 filter window traverses the input matrix and performs a convolution operation at each position to obtain the output value of that operation.
  • The filter window is also called the convolution matrix (convolution kernel).
  • In Fig. 4, the center value of the window currently selected in the input matrix (the pixel with value 1) is replaced by the result of the convolution operation, that is, by the value -8; a sketch of this operation follows below.
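  • A minimal sketch of this sliding-window convolution. The exact kernel of Fig. 4 is not reproduced in the text, so the Laplacian-style kernel below is an assumption; it does yield -8 for an isolated source pixel of value 1, matching the example above:

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Slide a k x k filter window over the input matrix and take the sum
    of element-wise products at each position (no padding)."""
    k = kernel.shape[0]
    out_h = image.shape[0] - k + 1
    out_w = image.shape[1] - k + 1
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

kernel = np.array([[1, 1, 1],
                   [1, -8, 1],
                   [1, 1, 1]], dtype=np.float32)  # assumed edge-detection kernel

image = np.zeros((7, 7), dtype=np.float32)
image[3, 3] = 1.0                      # isolated source pixel of value 1
out = convolve2d_valid(image, kernel)
assert out[2, 2] == -8.0               # center replaced by the conv result
```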
  • The electronic device inputs the feature matrix into the preset classification function model (that is, the classification model) to obtain the probability value that the fused facial image belongs to the living body category.
  • The classification model may include a fully connected layer and a Softmax classification function. Inputting the features into the Softmax classification function maps the outputs of the neurons into the interval from 0 to 1; that is, the features extracted from the fused image by the convolutional neural network are passed to the Softmax layer, which finally outputs the probability value corresponding to each category.
  • Before the Softmax layer, the image feature matrix is transformed by the fully connected layer: the electronic device inputs the feature matrix into the fully connected layer and outputs a multi-dimensional feature vector, where the number of dimensions of the vector corresponds to the number of categories of the Softmax classification function.
  • The output of a neuron in the fully connected layer can be expressed by the formula z_i = Σ_j w_ij · x_ij + b, where x_ij is the j-th input value of the i-th neuron, w_ij is the j-th weight of the i-th neuron, b is the offset value, and z_i represents the i-th output of the network, that is, the i-th value in the multi-dimensional feature vector.
  • The electronic device may then determine the probability value that the fused facial image belongs to the living body category according to the formula a_i = e^{z_i} / Σ_j e^{z_j}, where a_i represents the probability value of the i-th category of Softmax, z_i is the i-th value in the multi-dimensional feature vector, and e is Euler's constant.
  • For example, the Softmax classification function can have two categories: the first category is "non-living" with probability value a_1, and the second is "living" with probability value a_2. The probability value a_2 that the fused facial image belongs to the living body category is then obtained through the formula above; a numeric sketch follows below.
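  • A minimal numeric sketch of the fully connected layer and Softmax formulas above (the weights, bias, and feature values are hypothetical):

```python
import numpy as np

def fully_connected(features, weights, bias):
    """z_i = sum_j w_ij * x_ij + b: maps the flattened feature matrix to one
    value per category (here two: non-living and living)."""
    return weights @ features.ravel() + bias

def softmax(z):
    """a_i = e^{z_i} / sum_j e^{z_j}; shifting by max(z) avoids overflow
    without changing the result."""
    exp_z = np.exp(z - np.max(z))
    return exp_z / exp_z.sum()

features = np.array([[0.2, 0.7], [0.1, 0.9]])  # hypothetical feature matrix
weights = np.random.default_rng(0).normal(size=(2, 4))
bias = 0.1
z = fully_connected(features, weights, bias)   # multi-dimensional vector
a = softmax(z)                                 # a[0]: non-living, a[1]: living
is_live = a[1] > 0.5                           # threshold decision (e.g. 0.5)
```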
  • If the probability value is greater than a preset probability threshold, the electronic device determines that the object to be detected belongs to the living body category.
  • For example, with a preset probability threshold of 0.5: if the probability value of the living body category is greater than 0.5, the object under test is determined to be a living body; otherwise, it is determined to be non-living.
  • As with the method of Fig. 1, a non-living determination means that the detection attempt is illegitimate, in which case the object may be prohibited from accessing the account or an alarm message may be generated; the solution likewise improves the accuracy, stability, and robustness of liveness detection and requires no special cooperation from the subject. A composite sketch of the full flow follows below.
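  • Putting the steps above together, a hedged end-to-end sketch of the Fig. 2 flow, composed from the helper functions sketched earlier; the CNN itself is a caller-supplied placeholder, since the patent does not fix an architecture:

```python
import numpy as np

def liveness_probability(rgb, depth, thermal, cnn, fc_weights, fc_bias):
    """End-to-end flow of Fig. 2: crop -> equalize -> fuse -> CNN features
    -> fully connected layer -> Softmax -> living-body probability.
    Reuses extract_face_regions, equalize_contrast, fuse, fully_connected,
    and softmax from the sketches above."""
    regions = extract_face_regions(rgb, depth, thermal)
    if regions is None:
        return 0.0  # no face found: reject as non-living
    face_depth, face_thermal = regions
    face_depth = equalize_contrast(face_depth)
    face_thermal = equalize_contrast(face_thermal)
    channel = face_thermal[..., 0] if face_thermal.ndim == 3 else face_thermal
    fused = fuse(face_depth, channel)        # weighted pixel fusion
    feature_matrix = cnn(fused)              # pre-trained CNN, architecture open
    z = fully_connected(feature_matrix, fc_weights, fc_bias)
    return softmax(z)[1]                     # index 1: "living" category

# Decision: live if the probability exceeds the preset threshold (e.g. 0.5).
```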
  • Fig. 5 is a block diagram of a liveness detection device according to an exemplary embodiment. The device can be applied to equipment with facial recognition functions, such as mobile terminals, cash dispensers, and other electronic devices, and includes:
  • an obtaining module 310, used to obtain a facial depth map and a facial thermal image of the object to be detected;
  • a feature extraction module 320, configured to extract feature information from the facial depth map and the facial thermal image through a preset feature extraction model; and
  • a determining module 330, configured to determine, according to the feature information and the decision model of the category corresponding to the feature, whether the object to be detected belongs to the living body category.
  • This device achieves the same benefits as the methods above: improved accuracy, stability, and robustness of liveness detection, identification of mask- or facial-model impersonation, and no need for special cooperation from the subject.
  • Optionally, the obtaining module is used to: acquire, at the same acquisition time, an RGB image, a depth map, and a thermal image of the object to be detected; obtain facial position coordinates from the RGB image; and take the areas to which the facial position coordinates map in the depth map and the thermal image as the facial depth map and the facial thermal image, respectively.
  • Optionally, the obtaining module is further used to enhance, through histogram equalization, the contrast of one or more of the RGB image, the depth map, or the thermal image.
  • Optionally, the feature extraction module is used to fuse the pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image, and to extract the feature matrix of the fused facial image through a pre-trained convolutional neural network model.
  • Optionally, the determining module is configured to input the feature matrix into a preset classification function model to obtain a probability value that the fused facial image belongs to the living body category and, if the probability value is greater than a preset probability threshold, to determine that the object to be detected is a living body.
  • Optionally, the preset classification function model includes a Softmax classification function, and the determining module is used to determine the probability value that the fused facial image belongs to the living body category according to the formula a_i = e^{z_i} / Σ_j e^{z_j}, where a_i represents the probability value of the i-th category of Softmax, z_i is the i-th value in the multi-dimensional feature vector, and e is Euler's constant.
  • Optionally, the feature extraction module is configured to take the weighted average of the value of a first pixel in the facial depth map and the value of the position-corresponding second pixel in a target image channel of the facial thermal image as the value of the pixel at the corresponding position in the fused facial image, where the target image channel is any image channel of the facial thermal image.
  • An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the liveness detection method described above, including any of its optional implementations.
  • An embodiment of the present disclosure provides an electronic device including a memory on which a computer program is stored and a processor configured to execute the computer program in the memory to implement the steps of the liveness detection method described above, including any of its optional implementations.
  • Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment.
  • the electronic device 400 may include: a processor 401 and a memory 402.
  • The electronic device 400 may also include one or more of a multimedia component 403, an input/output (I/O) interface 404, and a communication component 405.
  • The processor 401 is used to control the overall operation of the electronic device 400 to complete all or part of the steps of the liveness detection method.
  • The memory 402 is used to store various types of data to support operation on the electronic device 400. The data may include, for example, instructions for any application or method operated on the electronic device 400 and application-related data such as the pre-trained convolutional neural network model and the thermal image and depth map data of the object to be detected; it may also include the identity data of legitimate users, sent and received messages, audio, video, and so on.
  • The memory 402 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • the multimedia component 403 may include a screen and an audio component.
  • The screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals.
  • the audio component may include a microphone for receiving external audio signals.
  • the received audio signal may be further stored in the memory 402 or transmitted through the communication component 405.
  • the audio component also includes at least one speaker for outputting audio signals.
  • the I / O interface 404 provides an interface between the processor 401 and other interface modules.
  • the other interface modules may be a keyboard, a mouse, a button, and so on. These buttons can be virtual buttons or physical buttons.
  • The communication component 405 is used for wired or wireless communication between the electronic device 400 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; the corresponding communication component 405 may therefore include a Wi-Fi module, a Bluetooth module, and an NFC module.
  • In an exemplary embodiment, the electronic device 400 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, for performing the liveness detection method described above.
  • In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when executed by a processor, the program instructions implement the steps of the liveness detection method described above.
  • For example, the computer-readable storage medium may be the above-mentioned memory 402 including program instructions, executable by the processor 401 of the electronic device 400 to complete the liveness detection method described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure is intended to provide a liveness detection method, a storage medium, and an electronic device, so as to solve the problem in related art of low liveness detection accuracy during face recognition. The method comprises: obtaining a face depth map and a face thermal image of an object to be detected; inputting the face depth map and the face thermal image into a feature extraction model, and extracting feature information from the face depth map and the face thermal image by means of the feature extraction model; and determining, according to the feature information and a classification model, whether the object to be detected belongs to a liveness category.

Description

Liveness detection method, storage medium, and electronic device
This application claims priority to Chinese patent application No. 201811269906.8, filed on October 29, 2018 and titled "Method and apparatus for liveness detection, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of image processing, and in particular to a liveness detection method, a storage medium, and an electronic device.
Background
With the development of technology, the efficiency of data processing has increased, and the means of authenticating legal identity have changed rapidly as well. In the related art, schemes have been proposed for verifying a user's legal identity by collecting biometrics of the person to be verified; these biometrics may be fingerprint characteristics, human facial characteristics, and so on.
Like other biological characteristics of the human body (fingerprints, irises, etc.), the human face is innate; its uniqueness and resistance to copying provide the necessary premise for identity authentication. Compared with other types of biometrics, face recognition is non-contact: the device can acquire a face image without the user directly touching it. In addition, in practical application scenarios, multiple faces can be sorted, judged, and recognized; at present, however, the accuracy of liveness detection during face recognition is low.
Summary
The purpose of the present disclosure is to provide a liveness detection method, a storage medium, and an electronic device to solve the problem of low liveness detection accuracy in face recognition in the related art.
To achieve the above objective, in a first aspect, the present disclosure provides a liveness detection method, the method including:
obtaining a facial depth map and a facial thermal image of an object to be detected;
inputting the facial depth map and the facial thermal image into a feature extraction model, and extracting feature information from the facial depth map and the facial thermal image through the feature extraction model; and
determining, according to the feature information and a classification model, whether the object to be detected belongs to a living body category.
Optionally, obtaining the facial depth map and the facial thermal image of the object to be detected includes:
acquiring, at the same acquisition time, an RGB image, a depth map, and a thermal image of the object to be detected;
obtaining facial position coordinates from the RGB image;
in the depth map, taking the area corresponding to the facial position coordinates as the facial depth map; and
in the thermal image, taking the area corresponding to the facial position coordinates as the facial thermal image.
Optionally, the method further includes:
enhancing, through histogram equalization, the contrast of one or more of the RGB image, the depth map, or the thermal image, where the contrast represents the difference between the face area and the background area.
Optionally, the feature information includes a feature matrix, and the feature extraction model includes a convolutional neural network;
extracting feature information from the facial depth map and the facial thermal image through the feature extraction model includes:
fusing the pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image; and
performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
Optionally, determining, according to the feature information and the classification model, whether the object to be detected belongs to the living body category includes:
inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category; and
if the probability value is greater than a probability threshold, determining that the object to be detected belongs to the living body category.
Optionally, the classification model includes a fully connected layer and a Softmax classification function;
inputting the feature matrix into the classification model to obtain the probability value that the fused facial image belongs to the living body category includes:
inputting the feature matrix into the fully connected layer and outputting a multi-dimensional feature vector, where the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function; and
obtaining the probability value that the fused facial image belongs to the living body category according to the following formula:
a_i = e^{z_i} / Σ_j e^{z_j}
where a_i represents the probability value of the i-th category of the Softmax classification function, and z_i is the i-th value in the multi-dimensional feature vector.
Optionally, fusing the pixels at corresponding positions in the facial depth map and the facial thermal image to obtain the fused facial image includes:
taking the value obtained by the weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image as the value of the pixel at the position corresponding to the first pixel in the fused facial image, where the second pixel corresponds in position to the first pixel, and the target image channel is any image channel of the facial thermal image.
In a second aspect, the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps performed by the liveness detection method of the first aspect:
obtaining a facial depth map and a facial thermal image of an object to be detected;
inputting the facial depth map and the facial thermal image into a feature extraction model, and extracting feature information from them through the feature extraction model; and
determining, according to the feature information and a classification model, whether the object to be detected belongs to a living body category.
Optionally, when executed by the processor, the program further implements the steps of any of the optional implementations of the method of the first aspect described above.
第三方面,本公开提供一种电子设备,包括:In a third aspect, the present disclosure provides an electronic device, including:
存储器,其上存储有计算机程序;Memory, on which computer programs are stored;
处理器,用于执行所述存储器中的所述计算机程序,以实现下述活体检验方法所执行的步 骤:The processor is configured to execute the computer program in the memory to implement the steps performed by the following biopsy method:
获取待检对象的面部深度图和面部热像图;Obtain the facial depth map and facial thermal image of the subject to be inspected;
将所述面部深度图和面部热像图输入特征提取模型,通过所述特征提取模型,从所述面部深度图和所述面部热像图提取特征信息;Input the facial depth map and the facial thermal image into a feature extraction model, and extract feature information from the facial depth map and the facial thermal image through the feature extraction model;
根据所述特征信息以及分类模型,确定所述待检对象是否属于活体类别。According to the feature information and the classification model, it is determined whether the object to be examined belongs to a living body category.
可选的,所述处理器用于执行下述步骤:Optionally, the processor is used to perform the following steps:
在同一采集时刻,获取待检对象的RGB图、深度图和热像图;At the same acquisition time, obtain the RGB image, depth map and thermal image of the object to be inspected;
从所述RGB图像中获取面部位置坐标;Acquiring facial position coordinates from the RGB image;
在所述深度图中,将与所述面部位置坐标对应的区域作为所述面部深度图;In the depth map, an area corresponding to the face position coordinates is used as the face depth map;
在所述热像图中,将与所述面部位置坐标对应的区域作为所述面部热像图。In the thermal image, an area corresponding to the facial position coordinates is used as the facial thermal image.
Optionally, the processor is configured to perform the following step:
enhancing, through histogram equalization, the contrast of one or more of the RGB image, the depth map and the thermal image, where the contrast represents the difference between the facial region and the background region.
Optionally, the feature information includes a feature matrix, and the feature extraction model includes a convolutional neural network;
the processor is configured to perform the following steps:
fusing pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image;
performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
Optionally, the processor is configured to perform the following steps:
inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category;
if the probability value is greater than a probability threshold, determining that the object to be detected belongs to the living body category.
Optionally, the classification model includes a fully connected layer and a Softmax classification function;
the processor is configured to perform the following steps:
inputting the feature matrix into the fully connected layer and outputting a multi-dimensional feature vector, where the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function;
obtaining, according to the following formula, the probability value that the fused facial image belongs to the living body category:

$$a_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$

where $a_i$ represents the probability value of the i-th category of the Softmax classification function, and $z_i$ is the i-th value in the multi-dimensional feature vector.
Optionally, the processor is configured to perform the following step:
taking the value obtained by a weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image as the value of the pixel in the fused facial image that corresponds to the position of the first pixel, where the second pixel corresponds to the position of the first pixel, and the target image channel is any image channel of the facial thermal image.
BRIEF DESCRIPTION OF THE DRAWINGS
The drawings are provided for a further understanding of the present disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the present disclosure but do not limit it. In the drawings:
Fig. 1 is a flowchart of a liveness detection method according to an exemplary embodiment.
Fig. 2 is a flowchart of another liveness detection method according to an exemplary embodiment.
Fig. 3 is a schematic diagram of a convolutional neural network according to Fig. 2.
Fig. 4 is a schematic diagram of a convolutional neural network operation according to Fig. 2.
Fig. 5 is a block diagram of a liveness detection apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment.
DETAILED DESCRIPTION
Specific embodiments of the present disclosure are described in detail below with reference to the drawings. It should be understood that the specific embodiments described here are only used to illustrate and explain the present disclosure and are not intended to limit it.
In the related art, a facial image of an object to be detected can be captured by photographing the object, and the identity of the object can be authenticated from that facial image. Because such authentication is based on 2D (two-dimensional) images, it cannot accurately determine whether the object to be detected is a living body. Against this kind of authentication, an attacker can imitate the facial features of a legitimate user with tools such as a mask or a three-dimensional face model and pass authentication while disguised as that user; the machine will report that authentication succeeded, allowing the attacker to steal the legitimate user's account information. The authentication techniques in the related art therefore have low accuracy and poor security.
In view of this, embodiments of the present disclosure provide a liveness detection method to improve the accuracy of liveness detection during face recognition.
Fig. 1 is a flowchart of a liveness detection method according to an exemplary embodiment. The method can be applied to a device with a facial recognition function, for example an electronic device such as a mobile terminal or an automatic teller machine. The method includes:
S11: the electronic device obtains a facial depth map and a facial thermal image of an object to be detected.
Here, a depth map (DepthMap) is an image or image channel containing information about the distance from the viewpoint to the surface of the object to be detected. Each pixel value of the depth map is the actual distance between the sensor and the object.
A thermal image, also called an infrared thermogram, reflects the temperature distribution over the surface of an object.
Specifically, obtaining the facial depth map and the facial thermal image of the object to be detected includes: simultaneously obtaining an RGB image, a depth map and a thermal image of the object to be detected; obtaining facial position coordinates from the RGB image; and taking the regions to which the facial position coordinates are mapped in the depth map and in the thermal image as the facial depth map and the facial thermal image, respectively.
In the above process, simultaneously obtaining the RGB image, the depth map and the thermal image of the object to be detected means that the electronic device captures images of the object at the same acquisition moment, obtaining the RGB image, the depth map and the thermal image.
In the above process, taking the regions to which the facial position coordinates are mapped in the depth map and in the thermal image as the facial depth map and the facial thermal image, respectively, means that the electronic device takes the region corresponding to the facial position coordinates in the depth map as the facial depth map, and takes the region corresponding to the facial position coordinates in the thermal image as the facial thermal image.
In a specific implementation, the electronic device can simultaneously obtain the RGB image, the depth map and the thermal image of the object to be detected through a 3D camera. The RGB image offers the best facial recognizability, so the facial position coordinates of the object to be detected can be obtained from the RGB image.
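As an illustration only, the following is a minimal sketch of this step, assuming pixel-aligned NumPy arrays captured at the same moment and using OpenCV's bundled Haar cascade as one possible face detector; neither the library nor the detector is mandated by this disclosure:

```python
import cv2

def crop_face_regions(rgb, depth, thermal):
    # Detect the face on the RGB image (the most recognizable modality),
    # then reuse the same coordinates on the depth map and thermal image,
    # which are assumed to be pixel-aligned with the RGB image.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in this capture
    x, y, w, h = faces[0]  # facial position coordinates from the RGB image
    return depth[y:y + h, x:x + w], thermal[y:y + h, x:x + w]
```

The returned crops are the facial depth map and the facial thermal image used in the subsequent steps.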
In yet another possible implementation, the RGB image, the depth map and the thermal image of the object to be detected can also be obtained from a video. To ensure that pixel coordinates can later be aligned correctly, the obtained RGB image, depth map and thermal image can correspond to the same frame of the video file.
Moreover, because the three images are acquired simultaneously, their pixels correspond to one another, which facilitates aligning the pixel coordinates across the three images.
In a specific implementation, the three images can be preprocessed by histogram equalization to enhance the contrast between the facial region and the background region, thereby highlighting the effective information corresponding to the facial features. In this way, the accuracy of subsequent feature extraction can be improved.
In the above process, the preprocessing may include the following step: the electronic device enhances, through histogram equalization, the contrast of one or more of the RGB image, the depth map and the thermal image, where the contrast represents the difference between the facial region and the background region.
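A minimal sketch of this preprocessing, assuming 8-bit NumPy images and using OpenCV's equalizeHist as one standard histogram-equalization routine (the disclosure does not name a specific implementation):

```python
import cv2
import numpy as np

def equalize(img):
    # cv2.equalizeHist expects a single-channel 8-bit image, so a
    # multi-channel input (e.g. the RGB image) is equalized per channel;
    # the depth map and thermal image are typically single-channel.
    if img.ndim == 2:
        return cv2.equalizeHist(img)
    channels = [cv2.equalizeHist(img[:, :, c]) for c in range(img.shape[2])]
    return np.stack(channels, axis=-1)
```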
S12: the electronic device inputs the facial depth map and the facial thermal image into a feature extraction model, and extracts feature information from the facial depth map and the facial thermal image through the feature extraction model.
For example, for the nose-eyes region of a person, facial features such as the highest point of the face, the lowest point of the face and the relative positional relationship between them can be obtained from the depth map. As another example, the temperature distribution over the face also varies; in winter, for instance, the region near the mouth and nose is warm while the region near the edge of the face is cold.
For a living body, the above facial features follow certain regularities, and preset living-body facial feature information can be generated according to these regularities.
As another example, the feature information may include a feature matrix and the feature extraction model may include a convolutional neural network. In this case, extracting the feature information in S12 may include the following steps: the electronic device fuses pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image, and performs feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
In the above process, the electronic device in effect extracts the feature matrix of the facial depth map and the facial thermal image through a convolutional neural network. The feature matrix can further be input into a judgment model of the category to which the features belong, that is, a preset classification function model used together with the pre-trained convolutional neural network; this judgment model may therefore also be called a "classification model". By using the classification model together with the convolutional neural network, it can be determined whether the object to be detected belongs to the living body category.
S13: the electronic device determines, according to the feature information and the judgment model of the category to which the features belong, whether the object to be detected belongs to the living body category.
In S13, the electronic device in effect determines, according to the feature information and the classification model, whether the object to be detected belongs to the living body category. Further, when determining whether the object to be detected is of the living body category, the electronic device may perform the following steps: inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category; and, if the probability value is greater than a probability threshold, determining that the object to be detected belongs to the living body category.
Specifically, the judgment model of the category to which the features belong (that is, the classification model) includes preset living-body facial feature information, and the extracted feature information can be compared with the preset living-body facial feature information. If the degree of matching between the extracted feature information and the preset living-body facial feature information is greater than a preset threshold, the object to be detected is determined to be a living body; otherwise, it is determined not to be a living body.
If the object to be detected is not a living body, the current detection attempt is illegitimate. Further, when performing operations related to facial recognition, the object to be detected may be denied access to the account, or alarm information may be generated.
The above technical solution can achieve at least the following technical effects:
A facial depth map and a facial thermal image of the object to be detected are obtained, feature information is extracted from them through a preset feature extraction model, and it is then determined, according to the feature information and the judgment model of the category to which the features belong, whether the object to be detected belongs to the living body category. This improves the accuracy of liveness detection and makes it possible to recognize intrusion attempts in which a mask or a face model is disguised as a legitimate identity.
In addition, compared with performing liveness detection from a thermal image alone, the above technical solution improves the stability and robustness of liveness detection.
Compared with performing facial recognition by recording a video in which the object to be detected is instructed to perform a series of actions, the above technical solution requires no special cooperation from the object to be detected, which improves the user experience and also improves the execution efficiency of the liveness detection operation during facial recognition.
Fig. 2 is a flowchart of another liveness detection method according to an exemplary embodiment. The method can be applied to a device with a facial recognition function, for example an electronic device such as a mobile terminal or an automatic teller machine. The method includes:
S21: the electronic device obtains a facial depth map and a facial thermal image of an object to be detected.
Here, a depth map (DepthMap) is an image or image channel containing information about the distance from the viewpoint to the surface of the object to be detected. Each pixel value of the depth map is the actual distance between the sensor and the object.
A thermal image, also called an infrared thermogram, reflects the temperature distribution over the surface of an object.
Specifically, obtaining the facial depth map and the facial thermal image of the object to be detected includes: simultaneously obtaining an RGB image, a depth map and a thermal image of the object to be detected; obtaining facial position coordinates from the RGB image; and taking the regions to which the facial position coordinates are mapped in the depth map and in the thermal image as the facial depth map and the facial thermal image, respectively.
In a specific implementation, the RGB image, the depth map and the thermal image of the object to be detected can be obtained simultaneously through a 3D camera. The RGB image offers the best facial recognizability, so the facial position coordinates of the object to be detected can be obtained from the RGB image.
Moreover, because the three images are acquired simultaneously, their pixels correspond to one another, which facilitates aligning the pixel coordinates across the three images.
In a specific implementation, the three images can be preprocessed by histogram equalization to enhance the contrast between the facial region and the background region, thereby highlighting the effective information corresponding to the facial features. In this way, the accuracy of subsequent feature extraction can be improved.
S22: the electronic device fuses pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image.
Specifically, fusing the pixels corresponding to the facial depth map and the facial thermal image to obtain the fused facial image includes: taking the value obtained by a weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image as the value of the pixel in the fused facial image that corresponds to the position of the first pixel, where the second pixel corresponds to the position of the first pixel, and the target image channel is any image channel of the facial thermal image.
For example, let A(i, j) be a pixel of the facial depth map and B(i, j) a pixel of a single channel of the facial thermal image. The pixel C(i, j) of the fused image is then obtained as:

$$C(i, j) = w_A(i, j)\,A(i, j) + w_B(i, j)\,B(i, j)$$
$$w_A(i, j) + w_B(i, j) = 1$$

By adjusting the weight $w_A(i, j)$ of the facial depth map and the weight $w_B(i, j)$ of the facial thermal image, the proportions of the depth-distribution factor and the temperature-distribution factor in the resulting fused image can be tuned flexibly, so that the extracted facial feature matrix reflects the facial features of the object to be detected more clearly.
The fused facial image extracts the useful information of the depth map and the thermal image to the greatest extent and increases the amount of information contained in the resulting image, which helps obtain image features more accurately, more reliably and more comprehensively.
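A minimal sketch of this fusion, assuming pixel-aligned single-channel NumPy arrays; the scalar weight w_a is an illustrative simplification of the per-pixel weights w_A(i, j) defined above:

```python
import numpy as np

def fuse(face_depth, face_thermal_channel, w_a=0.5):
    # C(i, j) = w_A(i, j) * A(i, j) + w_B(i, j) * B(i, j), w_A + w_B = 1.
    # A full per-pixel weight map of the same shape could replace w_a.
    a = face_depth.astype(np.float32)
    b = face_thermal_channel.astype(np.float32)
    return w_a * a + (1.0 - w_a) * b
```

Raising w_a emphasizes the depth-distribution factor in the fused image; lowering it emphasizes the temperature-distribution factor.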
S23: the electronic device extracts the feature matrix of the fused facial image through a pre-trained convolutional neural network model.
A convolutional neural network is a multi-layer neural network that includes convolutional layers, pooling layers and so on. Its artificial neurons respond to surrounding units and progressively reduce the dimensionality of image recognition problems that involve huge amounts of data, so that the network can be trained to perform classification, localization, detection and other functions. Training a convolutional neural network requires differentiating the hidden-layer nodes by the chain rule, that is, gradient descent together with backpropagation based on the chain rule.
The schematic diagram of the convolutional neural network in Fig. 3 shows the structure of a neural network provided by an embodiment of the present disclosure: from an input layer, through hidden layers, to an output layer. The hidden layers comprise many different layers (convolutional layers, pooling layers, activation function layers, fully connected layers, etc.).
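As a concrete illustration of this layout, the following is a minimal PyTorch sketch; the channel counts, kernel sizes and layer depths are illustrative assumptions, not values taken from this disclosure:

```python
import torch
import torch.nn as nn

class LivenessNet(nn.Module):
    # Fig. 3 layout: input layer -> hidden layers -> output layer.
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        # Hidden layers: convolution, activation and pooling layers that
        # progressively reduce the spatial dimensionality of the image.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Fully connected layer: one output per Softmax category
        # (here two: non-living and living).
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(num_classes))

    def forward(self, x):
        z = self.classifier(self.features(x))  # multi-dimensional feature vector
        return torch.softmax(z, dim=1)         # per-category probability values
```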
Feature extraction is completed through convolution and downsampling in the convolutional layers, generating the feature matrix. Refer to the operation schematic diagram provided by the embodiment of the present disclosure shown in Fig. 4.
First, the fused facial image can be the 7×7 input matrix in the figure, in which every pixel is a source pixel. A 3×3 filter window traverses the input matrix and performs a convolution operation to obtain the output value of the convolution. The filter window is also called a convolution kernel.
The specific convolution operation can be seen in the column calculation in the upper right corner of the figure; the result of the operation is -8.
The centre value of the window matrix framed in the input matrix of Fig. 4 (the pixel with value 1) is replaced by the result of the convolution operation, that is, by a pixel with value -8.
It is worth noting that the above calculation process may be iterated as many times as the application requires, until the required feature matrix is generated.
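The walkthrough above corresponds to the following minimal NumPy sketch ("valid" padding, so a 7×7 input and a 3×3 kernel yield a 5×5 output; like most CNN frameworks, it computes the product-sum without flipping the kernel):

```python
import numpy as np

def conv2d_valid(src, kernel):
    # Slide the filter window (convolution kernel) over the source matrix
    # and write the element-wise product sum at each window position.
    k = kernel.shape[0]
    h, w = src.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            out[i, j] = np.sum(src[i:i + k, j:j + k] * kernel)
    return out
```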
S24: the electronic device inputs the feature matrix into a preset classification function model (that is, a classification model) to obtain a probability value that the fused facial image belongs to the living body category.
In an optional implementation, the preset classification function model (that is, the classification model) may include a fully connected layer and a Softmax classification function. Specifically, inputting the feature matrix into the Softmax classification function maps the outputs of multiple neurons into the interval from 0 to 1. In other words, the feature matrix extracted from the fused image by the convolutional neural network is input into the Softmax layer, which finally outputs the probability value corresponding to each category.
Specifically, before the feature matrix is input into the Softmax classification function, a fully connected layer transformation may also be applied to the image feature matrix. That is, the electronic device inputs the feature matrix into the fully connected layer and outputs a multi-dimensional feature vector, where the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function.
After the convolution operation illustrated above, the output of a neuron can be expressed by the following formula:

$$z_i = \sum_j w_{ij}\,x_{ij} + b$$

where $x_{ij}$ is the j-th input value of the i-th neuron, $w_{ij}$ is the j-th weight of the i-th neuron, $b$ is the offset value, and $z_i$ denotes the i-th output of the network, that is, the i-th value in the multi-dimensional feature vector.
Further, the electronic device can determine the probability value that the fused facial image belongs to the living body category according to the following formula:

$$a_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$

where $a_i$ represents the probability value of the i-th category of Softmax, $z_i$ is the i-th value in the multi-dimensional feature vector, and $e$ is a constant.
For example, the Softmax classification function can have two categories: the first category is "non-living", with corresponding probability value $a_1$; the second category is "living", with corresponding probability value $a_2$. The probability value $a_2$ that the fused facial image belongs to the living body category is then obtained from the above probability formula.
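Putting the fully connected layer and the Softmax function together, a minimal NumPy sketch is as follows; `weights` and `bias` stand for hypothetical trained parameters with one row/entry per category:

```python
import numpy as np

def softmax(z):
    # a_i = e^{z_i} / sum_j e^{z_j}; subtracting max(z) is a standard
    # numerical-stability trick and does not change the result.
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

def classify(feature_matrix, weights, bias):
    # Fully connected layer: flatten the feature matrix, then z = Wx + b,
    # producing the multi-dimensional feature vector fed to Softmax.
    x = feature_matrix.reshape(-1)
    z = weights @ x + bias
    return softmax(z)  # e.g. [a_1 (non-living), a_2 (living)]
```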
S25: if the probability value is greater than a preset probability threshold, the electronic device determines that the object to be detected belongs to the living body category.
For example, with a preset probability threshold of 0.5, if the probability value P of belonging to the living body category is greater than 0.5, the object to be detected can be determined to be a living body; conversely, if P is not greater than 0.5, the object to be detected is determined to be a non-living body.
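In code, this decision is a single comparison; the threshold of 0.5 mirrors the example above and would be tuned in practice:

```python
def is_living(prob_living, threshold=0.5):
    # S25: classify as living only if the living-body probability
    # exceeds the preset probability threshold.
    return prob_living > threshold
```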
If the object to be detected is not a living body, the current detection attempt is illegitimate. Further, when performing operations related to facial recognition, the object to be detected may be denied access to the account, or alarm information may be generated.
The above technical solution can achieve at least the following technical effects:
A facial depth map and a facial thermal image of the object to be detected are obtained, feature information is extracted from them through a preset feature extraction model, and it is then determined, according to the feature information and the judgment model of the category to which the features belong, whether the object to be detected belongs to the living body category. This improves the accuracy of liveness detection and makes it possible to recognize intrusion attempts in which a mask or a face model is disguised as a legitimate identity.
In addition, compared with performing liveness detection from a thermal image alone, the above technical solution improves the stability and robustness of liveness detection.
Compared with performing facial recognition by recording a video in which the object to be detected is instructed to perform a series of actions, the above technical solution requires no special cooperation from the object to be detected, which improves the user experience and also improves the execution efficiency of the liveness detection operation during facial recognition.
It should be noted that, for simplicity of description, the above method embodiments are expressed as a series of action combinations, but those skilled in the art should appreciate that the present invention is not limited by the described order of actions. Furthermore, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the present invention.
Fig. 5 is a block diagram of a liveness detection apparatus according to an exemplary embodiment. The apparatus can be applied to a device with a facial recognition function, for example an electronic device such as a mobile terminal or an automatic teller machine. The apparatus includes:
an obtaining module 310, configured to obtain a facial depth map and a facial thermal image of an object to be detected;
a feature extraction module 320, configured to extract feature information from the facial depth map and the facial thermal image through a preset feature extraction model;
a determining module 330, configured to determine, according to the feature information and a judgment model of the category to which the features belong, whether the object to be detected belongs to the living body category.
The above technical solution can achieve at least the following technical effects:
A facial depth map and a facial thermal image of the object to be detected are obtained, feature information is extracted from them through a preset feature extraction model, and it is then determined, according to the feature information and the judgment model of the category to which the features belong, whether the object to be detected belongs to the living body category. This improves the accuracy of liveness detection and makes it possible to recognize intrusion attempts in which a mask or a face model is disguised as a legitimate identity.
In addition, compared with performing liveness detection from a thermal image alone, the above technical solution improves the stability and robustness of liveness detection.
Compared with performing facial recognition by recording a video in which the object to be detected is instructed to perform a series of actions, the above technical solution requires no special cooperation from the object to be detected, which improves the user experience and also improves the execution efficiency of the liveness detection operation during facial recognition.
Optionally, the obtaining module is configured to:
simultaneously obtain an RGB image, a depth map and a thermal image of the object to be detected;
obtain facial position coordinates from the RGB image;
take the regions to which the facial position coordinates are mapped in the depth map and in the thermal image as the facial depth map and the facial thermal image, respectively.
Optionally, the obtaining module is configured to:
enhance, through histogram equalization, the contrast between the facial region and the background region in one or more of the RGB image, the depth map and the thermal image.
Optionally, the feature extraction module is configured to fuse pixels corresponding to the facial depth map and the facial thermal image to obtain a fused facial image, and to extract the feature matrix of the fused facial image through a pre-trained convolutional neural network model;
the determining module is configured to input the image feature matrix into a preset classification function model to obtain a probability value that the fused facial image belongs to the living body class, and, if the probability value is greater than a preset probability threshold, to determine that the object to be detected is a living body.
Optionally, the preset classification function model is a Softmax classification function;
the determining module is configured to:
apply a fully connected layer transformation to the image feature matrix to obtain an output multi-dimensional feature vector, where the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function;
determine, according to the following formula, the probability value that the fused facial image belongs to the living body class:

$$a_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$

where $a_i$ represents the probability value of the i-th category of Softmax, $z_i$ is the i-th value in the multi-dimensional feature vector, and $e$ is a constant.
Optionally, the feature extraction module is configured to take the value obtained by a weighted average of the value of a first pixel in the facial depth map and the value of the second pixel corresponding to the first pixel in a target image channel of the facial thermal image as the value of the pixel corresponding to the first pixel in the fused facial image, where the target image channel is any image channel of the facial thermal image.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the above division of functional units (modules) is used as an example for illustration. In practical applications, the above functions can be assigned to different functional units (modules) as needed; that is, the internal structure of the apparatus can be divided into different functional units (modules) to complete all or part of the functions described above. For the specific working processes of the functional units (modules) described above, reference can be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the steps of the following liveness detection method are implemented:
obtaining a facial depth map and a facial thermal image of an object to be detected;
inputting the facial depth map and the facial thermal image into a feature extraction model, and extracting feature information from the facial depth map and the facial thermal image through the feature extraction model;
determining, according to the feature information and a classification model, whether the object to be detected belongs to the living body category.
Optionally, when the program is executed by the processor, the following steps are implemented:
obtaining an RGB image, a depth map and a thermal image of the object to be detected at the same acquisition moment;
obtaining facial position coordinates from the RGB image;
in the depth map, taking the region corresponding to the facial position coordinates as the facial depth map;
in the thermal image, taking the region corresponding to the facial position coordinates as the facial thermal image.
Optionally, when the program is executed by the processor, the following step is implemented:
enhancing, through histogram equalization, the contrast of one or more of the RGB image, the depth map and the thermal image, where the contrast represents the difference between the facial region and the background region.
Optionally, the feature information includes a feature matrix, and the feature extraction model includes a convolutional neural network;
when the program is executed by the processor, the following steps are implemented:
fusing pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image;
performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
Optionally, when the program is executed by the processor, the following steps are implemented:
inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category;
if the probability value is greater than a probability threshold, determining that the object to be detected belongs to the living body category.
Optionally, the classification model includes a fully connected layer and a Softmax classification function;
when the program is executed by the processor, the following steps are implemented:
inputting the feature matrix into the fully connected layer and outputting a multi-dimensional feature vector, where the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function;
obtaining, according to the following formula, the probability value that the fused facial image belongs to the living body category:

$$a_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$

where $a_i$ represents the probability value of the i-th category of the Softmax classification function, $z_i$ is the i-th value in the multi-dimensional feature vector, and $e$ is a constant.
Optionally, when the program is executed by the processor, the following step is implemented:
taking the value obtained by a weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image as the value of the pixel in the fused facial image that corresponds to the position of the first pixel, where the second pixel corresponds to the position of the first pixel, and the target image channel is any image channel of the facial thermal image.
An embodiment of the present disclosure provides an electronic device, including:
a memory on which a computer program is stored; and
a processor configured to execute the computer program in the memory to implement the steps of the following liveness detection method:
obtaining a facial depth map and a facial thermal image of an object to be detected;
inputting the facial depth map and the facial thermal image into a feature extraction model, and extracting feature information from the facial depth map and the facial thermal image through the feature extraction model;
determining, according to the feature information and a classification model, whether the object to be detected belongs to the living body category.
Optionally, the processor is configured to perform the following steps:
obtaining an RGB image, a depth map and a thermal image of the object to be detected at the same acquisition moment;
obtaining facial position coordinates from the RGB image;
in the depth map, taking the region corresponding to the facial position coordinates as the facial depth map;
in the thermal image, taking the region corresponding to the facial position coordinates as the facial thermal image.
Optionally, the processor is configured to perform the following step:
enhancing, through histogram equalization, the contrast of one or more of the RGB image, the depth map and the thermal image, where the contrast represents the difference between the facial region and the background region.
Optionally, the feature information includes a feature matrix, and the feature extraction model includes a convolutional neural network;
the processor is configured to perform the following steps:
fusing pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image;
performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
Optionally, the processor is configured to perform the following steps:
inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category;
if the probability value is greater than a probability threshold, determining that the object to be detected belongs to the living body category.
Optionally, the classification model includes a fully connected layer and a Softmax classification function;
the processor is configured to perform the following steps:
inputting the feature matrix into the fully connected layer and outputting a multi-dimensional feature vector, where the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function;
obtaining, according to the following formula, the probability value that the fused facial image belongs to the living body category:

$$a_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$

where $a_i$ represents the probability value of the i-th category of the Softmax classification function, $z_i$ is the i-th value in the multi-dimensional feature vector, and $e$ is a constant.
Optionally, the processor is configured to perform the following step:
taking the value obtained by a weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image as the value of the pixel in the fused facial image that corresponds to the position of the first pixel, where the second pixel corresponds to the position of the first pixel, and the target image channel is any image channel of the facial thermal image.
Other features and advantages of the present disclosure are described in detail in the detailed description that follows.
Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment. As shown in Fig. 6, the electronic device 400 may include a processor 401 and a memory 402. The electronic device 400 may further include one or more of a multimedia component 403, an input/output (I/O) interface 404 and a communication component 405.
The processor 401 controls the overall operation of the electronic device 400 to complete all or part of the steps of the above liveness detection method. The memory 402 stores various types of data to support operation on the electronic device 400. Such data may include, for example, instructions for any application or method operated on the electronic device 400 and application-related data, such as the pre-trained convolutional neural network model and the thermal image and depth map data of the object to be detected, and may further include identity data of legitimate users, sent and received messages, audio, video and so on. The memory 402 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, for example static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc. The multimedia component 403 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 402 or sent through the communication component 405. The audio component further includes at least one speaker for outputting audio signals. The I/O interface 404 provides an interface between the processor 401 and other interface modules, such as a keyboard, a mouse or buttons, where the buttons may be virtual or physical. The communication component 405 is used for wired or wireless communication between the electronic device 400 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G or 4G, or a combination of one or more of them; accordingly, the communication component 405 may include a Wi-Fi module, a Bluetooth module and an NFC module.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components for performing the above liveness detection method.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above liveness detection method are implemented. For example, the computer-readable storage medium may be the above memory 402 including program instructions, and the program instructions may be executed by the processor 401 of the electronic device 400 to complete the above liveness detection method.
The preferred embodiments of the present disclosure have been described in detail above with reference to the drawings. However, the present disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present disclosure, various simple modifications can be made to its technical solutions, and these simple modifications all fall within the protection scope of the present disclosure. It should also be noted that the specific technical features described in the above specific embodiments can be combined in any suitable manner where no contradiction arises; to avoid unnecessary repetition, the present disclosure does not describe the various possible combinations separately.
In addition, the various embodiments of the present disclosure can also be combined arbitrarily, and such combinations, as long as they do not depart from the idea of the present disclosure, should likewise be regarded as contents disclosed by the present disclosure.

Claims (20)

1. A liveness detection method, characterized in that the method comprises:
obtaining a facial depth map and a facial thermal image of an object to be detected;
inputting the facial depth map and the facial thermal image into a feature extraction model, and extracting feature information from the facial depth map and the facial thermal image through the feature extraction model;
determining, according to the feature information and a classification model, whether the object to be detected belongs to a living body category.
2. The method according to claim 1, characterized in that obtaining the facial depth map and the facial thermal image of the object to be detected comprises:
obtaining an RGB image, a depth map and a thermal image of the object to be detected at the same acquisition moment;
obtaining facial position coordinates from the RGB image;
in the depth map, taking the region corresponding to the facial position coordinates as the facial depth map;
in the thermal image, taking the region corresponding to the facial position coordinates as the facial thermal image.
3. The method according to claim 2, characterized in that the method further comprises:
enhancing, through histogram equalization, the contrast of one or more of the RGB image, the depth map and the thermal image, wherein the contrast represents the difference between the facial region and the background region.
4. The method according to claim 1, characterized in that the feature information comprises a feature matrix, and the feature extraction model comprises a convolutional neural network;
extracting the feature information from the facial depth map and the facial thermal image through the feature extraction model comprises:
fusing pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image;
performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
5. The method according to claim 4, characterized in that determining, according to the feature information and the classification model, whether the object to be detected belongs to the living body category comprises:
inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living body category;
if the probability value is greater than a probability threshold, determining that the object to be detected belongs to the living body category.
  6. 根据权利要求5所述的方法,其特征在于,所述分类模型包括全连接层和Softmax分类 函数;The method according to claim 5, wherein the classification model includes a fully connected layer and a Softmax classification function;
    所述将所述特征矩阵输入所述分类模型,得到所述融合后的面部图像属于活体类别的概率值包括:The inputting the feature matrix into the classification model to obtain the probability value that the fused facial image belongs to the living body category includes:
    对所述特征矩阵输入所述全连接层,输出多维特征向量,其中,所述多维特征向量的维度数目对应于所述Softmax分类函数的类别数目;Input the fully connected layer to the feature matrix and output a multi-dimensional feature vector, wherein the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function;
    obtaining the probability value that the fused facial image belongs to the living-body category according to the following formula:

$$a_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}}$$

    where a_i denotes the probability value of the i-th category of the Softmax classification function, and z_i is the i-th value in the multi-dimensional feature vector.
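A sketch of the classification model in claim 6, assuming the two-category case (living vs. non-living) and an illustrative flattened feature size; `torch.softmax` computes exactly the formula above.

```python
# Claim 6 as code: fully connected layer -> multi-dimensional vector z ->
# Softmax a_i = exp(z_i) / sum_j exp(z_j). All sizes are assumptions.
import torch
import torch.nn as nn

feature_dim = 32 * 28 * 28       # assumed size of the flattened feature matrix
fc = nn.Linear(feature_dim, 2)   # output dimensions == number of Softmax categories

def classify(feature_matrix):                    # feature_matrix: (N, 32, 28, 28)
    z = fc(feature_matrix.flatten(start_dim=1))  # multi-dimensional feature vector
    return torch.softmax(z, dim=1)               # per-category probabilities a_i
```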
  7. The method according to claim 4, wherein fusing the pixels at corresponding positions in the facial depth map and the facial thermal image to obtain the fused facial image comprises:
    taking a weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image, and using the result as the value of the pixel in the fused facial image corresponding to the position of the first pixel, wherein the second pixel corresponds to the position of the first pixel, and the target image channel is any one of the image channels of the facial thermal image.
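Claim 7's fusion is a per-pixel weighted average. In the sketch below, the equal weights and the choice of channel 0 are assumptions, since the claim allows any weights and any thermal channel; both crops are assumed to share size and value range.

```python
# Sketch of claim 7: blend each depth pixel with the thermal pixel at the same
# position in one chosen channel. Weights (0.5/0.5) and channel 0 are assumed.
import numpy as np

def fuse(face_depth, face_thermal, channel=0, w_depth=0.5, w_thermal=0.5):
    plane = (face_thermal[:, :, channel]
             if face_thermal.ndim == 3 else face_thermal)
    return (w_depth * face_depth.astype(np.float32)
            + w_thermal * plane.astype(np.float32))
```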
  8. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the following liveness detection method:
    acquiring a facial depth map and a facial thermal image of an object to be detected;
    inputting the facial depth map and the facial thermal image into a feature extraction model, and extracting feature information from the facial depth map and the facial thermal image through the feature extraction model; and
    determining, according to the feature information and a classification model, whether the object to be detected belongs to the living-body category.
  9. The storage medium according to claim 8, wherein the program, when executed by the processor, further implements the following steps:
    acquiring, at a same acquisition moment, an RGB image, a depth map, and a thermal image of the object to be detected;
    obtaining facial position coordinates from the RGB image;
    taking, in the depth map, the region corresponding to the facial position coordinates as the facial depth map; and
    taking, in the thermal image, the region corresponding to the facial position coordinates as the facial thermal image.
  10. The storage medium according to claim 9, wherein the program, when executed by the processor, further implements the following step:
    enhancing, by histogram equalization, the contrast of one or more of the RGB image, the depth map, and the thermal image, the contrast representing the difference between the facial region and the background region.
  11. The storage medium according to claim 8, wherein the feature information comprises a feature matrix and the feature extraction model comprises a convolutional neural network; and
    wherein the program, when executed by the processor, further implements the following steps:
    fusing pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image; and
    performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
  12. The storage medium according to claim 11, wherein the program, when executed by the processor, further implements the following steps:
    inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living-body category; and
    determining that the object to be detected belongs to the living-body category if the probability value is greater than a probability threshold.
  13. The storage medium according to claim 12, wherein the classification model comprises a fully connected layer and a Softmax classification function; and
    wherein the program, when executed by the processor, further implements the following steps:
    inputting the feature matrix into the fully connected layer to output a multi-dimensional feature vector, wherein the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function; and
    obtaining the probability value that the fused facial image belongs to the living-body category according to the following formula:

$$a_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}}$$

    where a_i denotes the probability value of the i-th category of the Softmax classification function, and z_i is the i-th value in the multi-dimensional feature vector.
  14. An electronic device, comprising:
    a memory having a computer program stored thereon; and
    a processor configured to execute the computer program in the memory to implement the steps of the following liveness detection method:
    acquiring a facial depth map and a facial thermal image of an object to be detected;
    inputting the facial depth map and the facial thermal image into a feature extraction model, and extracting feature information from the facial depth map and the facial thermal image through the feature extraction model; and
    determining, according to the feature information and a classification model, whether the object to be detected belongs to the living-body category.
  15. The electronic device according to claim 14, wherein the processor is configured to perform the following steps:
    acquiring, at a same acquisition moment, an RGB image, a depth map, and a thermal image of the object to be detected;
    obtaining facial position coordinates from the RGB image;
    taking, in the depth map, the region corresponding to the facial position coordinates as the facial depth map; and
    taking, in the thermal image, the region corresponding to the facial position coordinates as the facial thermal image.
  16. The electronic device according to claim 15, wherein the processor is configured to perform the following step:
    enhancing, by histogram equalization, the contrast of one or more of the RGB image, the depth map, and the thermal image, the contrast representing the difference between the facial region and the background region.
  17. The electronic device according to claim 14, wherein the feature information comprises a feature matrix and the feature extraction model comprises a convolutional neural network; and
    wherein the processor is configured to perform the following steps:
    fusing pixels at corresponding positions in the facial depth map and the facial thermal image to obtain a fused facial image; and
    performing feature extraction on the fused facial image through the convolutional neural network to obtain the feature matrix.
  18. The electronic device according to claim 17, wherein the processor is configured to perform the following steps:
    inputting the feature matrix into the classification model to obtain a probability value that the fused facial image belongs to the living-body category; and
    determining that the object to be detected belongs to the living-body category if the probability value is greater than a probability threshold.
  19. The electronic device according to claim 18, wherein the classification model comprises a fully connected layer and a Softmax classification function; and
    wherein the processor is configured to perform the following steps:
    inputting the feature matrix into the fully connected layer to output a multi-dimensional feature vector, wherein the number of dimensions of the multi-dimensional feature vector corresponds to the number of categories of the Softmax classification function; and
    obtaining the probability value that the fused facial image belongs to the living-body category according to the following formula:

$$a_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}}$$

    where a_i denotes the probability value of the i-th category of the Softmax classification function, and z_i is the i-th value in the multi-dimensional feature vector.
  20. The electronic device according to claim 17, wherein the processor is configured to perform the following step:
    taking a weighted average of the value of a first pixel in the facial depth map and the value of a second pixel in a target image channel of the facial thermal image, and using the result as the value of the pixel in the fused facial image corresponding to the position of the first pixel, wherein the second pixel corresponds to the position of the first pixel, and the target image channel is any one of the image channels of the facial thermal image.
PCT/CN2019/100261 2018-10-29 2019-08-12 Liveness detection method, storage medium, and electronic device WO2020088029A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811269906.8 2018-10-29
CN201811269906.8A CN111104833A (en) 2018-10-29 2018-10-29 Method and apparatus for in vivo examination, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2020088029A1 true WO2020088029A1 (en) 2020-05-07

Family

ID=70419919

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/100261 WO2020088029A1 (en) 2018-10-29 2019-08-12 Liveness detection method, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN111104833A (en)
WO (1) WO2020088029A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111651626A (en) * 2020-05-25 2020-09-11 腾讯科技(深圳)有限公司 Image classification method and device and readable storage medium
CN111862084A (en) * 2020-07-31 2020-10-30 大连东软教育科技集团有限公司 Image quality evaluation method and device based on complex network and storage medium
CN111881729A (en) * 2020-06-16 2020-11-03 深圳数联天下智能科技有限公司 Live body flow direction discrimination method, device and equipment based on thermal imaging and storage medium
CN111881786A (en) * 2020-07-13 2020-11-03 深圳力维智联技术有限公司 Store operation behavior management method, device and storage medium
CN113033307A (en) * 2021-02-22 2021-06-25 浙江大华技术股份有限公司 Object matching method and device, storage medium and electronic device
CN114202805A (en) * 2021-11-24 2022-03-18 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN114202806A (en) * 2021-11-26 2022-03-18 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
WO2022126914A1 (en) * 2020-12-18 2022-06-23 平安科技(深圳)有限公司 Living body detection method and apparatus, electronic device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738176A (en) * 2020-06-24 2020-10-02 支付宝实验室(新加坡)有限公司 Living body detection model training method, living body detection device, living body detection equipment and living body detection medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106774856A (en) * 2016-08-01 2017-05-31 深圳奥比中光科技有限公司 Exchange method and interactive device based on lip reading
WO2017210331A1 (en) * 2016-06-01 2017-12-07 Carnegie Mellon University Hybrid depth and infrared image sensing system and method for enhanced touch tracking on ordinary surfaces
CN107451510A (en) * 2016-05-30 2017-12-08 北京旷视科技有限公司 Biopsy method and In vivo detection system
CN107808145A (en) * 2017-11-13 2018-03-16 河南大学 Interaction identity based on multi-modal intelligent robot differentiates and tracking and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1884197B1 (en) * 2005-05-20 2013-01-02 Hitachi Medical Corporation Image diagnosing device
US9996732B2 (en) * 2015-07-20 2018-06-12 International Business Machines Corporation Liveness detector for face verification
CN106845345A (en) * 2016-12-15 2017-06-13 重庆凯泽科技股份有限公司 Biopsy method and device
CN107590430A (en) * 2017-07-26 2018-01-16 百度在线网络技术(北京)有限公司 Biopsy method, device, equipment and storage medium
CN107609383B (en) * 2017-10-26 2021-01-26 奥比中光科技集团股份有限公司 3D face identity authentication method and device
CN107945192B (en) * 2017-12-14 2021-10-22 北京信息科技大学 Tray carton pile type real-time detection method
CN108399617B (en) * 2018-02-14 2020-08-14 中国农业大学 Method and device for detecting animal health condition

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451510A (en) * 2016-05-30 2017-12-08 北京旷视科技有限公司 Biopsy method and In vivo detection system
WO2017210331A1 (en) * 2016-06-01 2017-12-07 Carnegie Mellon University Hybrid depth and infrared image sensing system and method for enhanced touch tracking on ordinary surfaces
CN106774856A (en) * 2016-08-01 2017-05-31 深圳奥比中光科技有限公司 Exchange method and interactive device based on lip reading
CN107808145A (en) * 2017-11-13 2018-03-16 河南大学 Interaction identity based on multi-modal intelligent robot differentiates and tracking and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIA, MINGGE ET AL.: "Survey on Multisensor Image Fusion", ELECTRONICS OPTICS & CONTROL, vol. 9, no. 4, 30 November 2002 (2002-11-30) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111651626B (en) * 2020-05-25 2023-08-22 腾讯科技(深圳)有限公司 Image classification method, device and readable storage medium
CN111651626A (en) * 2020-05-25 2020-09-11 腾讯科技(深圳)有限公司 Image classification method and device and readable storage medium
CN111881729B (en) * 2020-06-16 2024-02-06 深圳数联天下智能科技有限公司 Living body flow direction screening method, device, equipment and storage medium based on thermal imaging
CN111881729A (en) * 2020-06-16 2020-11-03 深圳数联天下智能科技有限公司 Live body flow direction discrimination method, device and equipment based on thermal imaging and storage medium
CN111881786B (en) * 2020-07-13 2023-11-03 深圳力维智联技术有限公司 Store operation behavior management method, store operation behavior management device and storage medium
CN111881786A (en) * 2020-07-13 2020-11-03 深圳力维智联技术有限公司 Store operation behavior management method, device and storage medium
CN111862084B (en) * 2020-07-31 2024-02-02 东软教育科技集团有限公司 Image quality evaluation method, device and storage medium based on complex network
CN111862084A (en) * 2020-07-31 2020-10-30 大连东软教育科技集团有限公司 Image quality evaluation method and device based on complex network and storage medium
WO2022126914A1 (en) * 2020-12-18 2022-06-23 平安科技(深圳)有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN113033307A (en) * 2021-02-22 2021-06-25 浙江大华技术股份有限公司 Object matching method and device, storage medium and electronic device
CN113033307B (en) * 2021-02-22 2024-04-02 浙江大华技术股份有限公司 Object matching method and device, storage medium and electronic device
CN114202805A (en) * 2021-11-24 2022-03-18 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN114202806A (en) * 2021-11-26 2022-03-18 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN111104833A (en) 2020-05-05

Similar Documents

Publication Publication Date Title
WO2020088029A1 (en) Liveness detection method, storage medium, and electronic device
WO2020207189A1 (en) Method and device for identity authentication, storage medium, and computer device
US20210082136A1 (en) Extracting information from images
KR102142232B1 (en) Face liveness detection method and apparatus, and electronic device
KR102587193B1 (en) System and method for performing fingerprint-based user authentication using images captured using a mobile device
WO2022206319A1 (en) Image processing method and apparatus, and device, storage medium and computer program product
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
CN112232155B (en) Non-contact fingerprint identification method and device, terminal and storage medium
KR20190094352A (en) System and method for performing fingerprint based user authentication using a captured image using a mobile device
CN110059579B (en) Method and apparatus for in vivo testing, electronic device, and storage medium
CN106156702A (en) Identity identifying method and equipment
WO2020258120A1 (en) Face recognition method and device, and electronic apparatus
CN112232163B (en) Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment
CN112016525A (en) Non-contact fingerprint acquisition method and device
KR20230169104A (en) Personalized biometric anti-spoofing protection using machine learning and enrollment data
CN113614731A (en) Authentication verification using soft biometrics
CN112232159B (en) Fingerprint identification method, device, terminal and storage medium
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
WO2022068931A1 (en) Non-contact fingerprint recognition method and apparatus, terminal, and storage medium
CN114863499B (en) Finger vein and palm vein identification method based on federal learning
Purnapatra et al. Presentation attack detection with advanced cnn models for noncontact-based fingerprint systems
CN113343198A (en) Video-based random gesture authentication method and system
WO2024169261A9 (en) Image processing method and apparatus, and electronic device, computer-readable storage medium and computer program product
CN112232157A (en) Fingerprint area detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19879992

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19879992

Country of ref document: EP

Kind code of ref document: A1