WO2019096029A1 - Living body recognition method, storage medium and computer device

Living body recognition method, storage medium and computer device

Info

Publication number
WO2019096029A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
computer device
recognition model
feature data
Prior art date
Application number
PCT/CN2018/114096
Other languages
English (en)
French (fr)
Inventor
吴双
丁守鸿
梁亦聪
刘尧
李季檩
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2019096029A1
Related US application US16/864,103, published as US11176393B2

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Definitions

  • the present application relates to the field of computer technology, and in particular, to a living body identification method, a storage medium, and a computer device.
  • A living body identification method, a storage medium, and a computer device are provided.
  • a living body identification method includes:
  • the computer device acquires the target image
  • the computer device extracts facial feature data of a face image in the target image
  • the computer device performs living body recognition according to the facial feature data to obtain a first confidence level; the first confidence level indicates a first probability of identifying a living body;
  • the computer device extracts background feature data from a face extension image; the face extension image is obtained by expanding an area where the face image is located;
  • the computer device performs living body recognition according to the background feature data to obtain a second confidence level; the second confidence level indicates a second probability of identifying a living body;
  • the computer device obtains a living face image discrimination result according to the first confidence level and the second confidence level.
  • a non-volatile storage medium storing computer readable instructions, when executed by one or more processors, causing one or more processors to perform the steps of: acquiring a target image;
  • the first confidence level indicating a first probability of identifying a living body
  • the face extension image is obtained by expanding an area where the face image is located;
  • the second confidence level indicating a second probability of identifying a living body
  • a computer device comprising a memory and a processor, the memory storing computer readable instructions, the computer readable instructions being executed by the processor such that the processor performs the following steps:
  • the first confidence level indicating a first probability of identifying a living body
  • the face extension image is obtained by expanding an area where the face image is located;
  • the second confidence level indicating a second probability of identifying a living body
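  • The following is a minimal, non-normative sketch of how the steps listed above could be wired together. The helper callables (detect_face, crop, expand_region, face_model, context_model, fuse) are illustrative placeholders rather than names from this application; concrete versions of each step appear in the embodiments below.

```python
# Hypothetical end-to-end sketch of the claimed flow; every callable passed in
# is an assumed placeholder for a component described in the embodiments below.
def recognize_live_face(target_image, detect_face, crop, expand_region,
                        face_model, context_model, fuse):
    face_box = detect_face(target_image)                     # locate the face region
    face_img = crop(target_image, face_box)                  # face image
    ext_img = crop(target_image, expand_region(face_box))    # face extension image

    c1 = face_model(face_img)      # first confidence, from facial feature data
    c2 = context_model(ext_img)    # second confidence, from background feature data

    return fuse(c1, c2)            # discrimination result for a living face image
```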
  • FIG. 1 is an application environment diagram of a living body identification method in an embodiment
  • FIG. 2 is an application environment diagram of a living body identification method in another embodiment
  • FIG. 3 is a schematic flow chart of a living body identification method in an embodiment
  • Figure 5 is a schematic illustration of the use of a recognition model in one embodiment
  • FIG. 6 is a schematic diagram of recognition model training in an embodiment
  • FIG. 7 is a schematic flow chart of a living body identification method in another embodiment
  • Figure 8 is a block diagram showing the structure of a living body identification device in an embodiment
  • Figure 9 is a block diagram showing the structure of a living body identification device in another embodiment.
  • Figure 10 is a diagram showing the internal structure of a computer device in an embodiment.
  • Figure 11 is a diagram showing the internal structure of a computer device in another embodiment.
  • FIG. 1 is an application environment diagram of a living body identification method in an embodiment.
  • the living body recognition method is used for a living body recognition system.
  • the living body recognition system includes a terminal 110 and a server 120.
  • the terminal 110 and the server 120 are connected through a network.
  • the terminal 110 can be configured to execute the living body identification method.
  • the terminal 110 can also collect the target image including the human face from the real scene, and send the collected target image to the server 120, so that the server 120 executes the living body identification method.
  • the terminal 110 may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like.
  • the server 120 may be a stand-alone server or a server cluster composed of multiple independent servers.
  • the living body recognition method is used for a living body recognition system.
  • the living body identification system may specifically be an access control system.
  • the access control system includes a face capture camera 210 and a computer device 220.
  • the face capture camera 210 can be coupled to the computer device 220 via a communication interface.
  • the face collection camera 210 is configured to collect a target image including a face from a real scene, and send the collected target image to the computer device 220, so that the computer device 220 executes the living body recognition method.
  • Computer device 220 can be a terminal or a server.
  • FIG. 3 is a schematic flow chart of a living body identification method in an embodiment. This embodiment is mainly illustrated by the method being applied to the server 120 in FIG. 1 described above. Referring to FIG. 3, the living body identification method specifically includes the following steps:
  • the target image is an image to be subjected to recognition of a living face image.
  • The target image may be an image frame obtained by capturing a living body, or an image frame obtained by recapturing an existing image that contains a human face. It can be understood that, precisely because the target image may be either a living face image or a non-living face image, the embodiments provided by the present application offer a technical solution for discriminating whether the target image is a living face image.
  • Specifically, the terminal may collect an image frame of the real scene in the current field of view through a built-in camera or an external camera associated with the terminal, and obtain the collected image frame.
  • the terminal may detect whether there is a face image in the image frame. If the face image exists, the image frame is acquired as a target image and sent to the server, and the server obtains the target image.
  • the terminal may directly send the collected image frame to the server, and the server detects whether there is a face image in the image frame, and if there is a face image, the image frame is acquired as the target image.
  • the image frame of the captured real scene may be an image frame of a living body in a real scene, or an image frame of an existing image including a human face in a real scene.
  • An existing image containing a human face may be, for example, a two-dimensional image displayed on a screen, an identity document, or a printed face photo.
  • the terminal may invoke the camera to turn on the camera scan mode, and scan the target object in the camera field of view in real time, and generate an image frame in real time according to a certain frame rate, and the generated image frame may be cached locally in the terminal.
  • The camera field of view is the area, presented on the display interface of the terminal, that the camera can currently scan and capture.
  • the terminal may detect whether there is a face image in the generated frame image, and if so, acquire the generated frame image as a target image and transmit it to the server, and the server acquires the target image.
  • the target object may be a living body in a real scene or an existing image containing a human face.
  • Specifically, the built-in camera of the terminal, or an external camera associated with the terminal, may be invoked to collect an image frame of the real scene in the current field of view of the camera; the collected image frame is obtained as the target image, and the target image is sent to the server corresponding to the application.
  • This applies to scenes that require identity verification, such as real-name authentication or account-unblocking appeals in social applications, or account opening in banking applications.
  • the face capture camera can capture the image frame of the real scene in the current field of view of the camera, and then send the acquired image frame to the computer device, and the computer device receives the image frame. Then, it is detected whether there is a face image in the image frame, and if there is a face image, the image frame is acquired as a target image.
  • the face feature data is data for reflecting the face feature.
  • the face feature data may reflect one or more feature information such as a person's gender, a contour of a face, a hairstyle, a glasses, a nose, a mouth, and a distance between respective facial organs.
  • the facial feature data may include facial texture data.
  • Facial texture data can reflect facial features, including texture features and pixel depths of the nose, ears, eyebrows, cheeks, or lips.
  • The facial texture data may include the color-value distribution and the luminance-value distribution of the pixels in the face image.
  • the server may extract facial feature data of the face image in the target image according to the preset image feature extraction strategy.
  • the preset image feature extraction strategy may be a preset image feature extraction algorithm or a pre-trained feature extraction machine learning model.
  • S306 Perform living body recognition according to the face feature data to obtain a first confidence level; the first confidence level indicates a first probability of identifying the living body.
  • A confidence level corresponds one-to-one to a target image and indicates the degree to which the target image can be trusted to be a living face image.
  • the living face image is an image obtained by image acquisition of a living body.
  • The higher the confidence, the higher the probability that the corresponding target image is a living face image; that is, the higher the confidence, the greater the probability that the target image is an image obtained by imaging a living body.
  • the first confidence here and the second confidence in the following are both confidence levels, but correspond to the confidence under different feature data conditions.
  • Specifically, the server may classify the target image according to the extracted facial feature data: when the extracted facial feature data conforms to the facial feature data of living face images, the target image is classified into the living face image class; otherwise, the target image is classified into the non-living face image class.
  • The first confidence level indicates the degree of conformity between the extracted face feature data and the face feature data of living face images; the higher the conformity, the higher the first confidence level, that is, the greater the likelihood that the target image is a living face image.
  • The server may also perform a Fourier transform on the extracted facial feature data to perform feature analysis in the frequency domain.
  • When the frequency-domain features of the extracted face feature data conform to the frequency-domain features of the face feature data of living face images, the target image is classified into the living face image class; when they conform to the frequency-domain features of the face feature data of non-living face images, the target image is classified into the non-living face image class.
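  • As one illustration of a frequency-domain cue (a simplified assumption for exposition, not the specific analysis claimed here), the share of spectral energy outside a low-frequency band can be computed from a grayscale face crop; recaptured screens or prints often shift this ratio relative to a directly captured living face.

```python
import numpy as np

def high_frequency_ratio(gray_face, cutoff=0.25):
    """Fraction of spectral energy outside the central low-frequency band.
    `gray_face` is a 2D numpy array; `cutoff` is an illustrative band size."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_face.astype(np.float32)))
    energy = np.abs(spectrum) ** 2
    h, w = energy.shape
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(h * cutoff)), max(1, int(w * cutoff))
    low_band = energy[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return 1.0 - low_band / energy.sum()
```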
  • the face extension image is obtained by extending the area of the face image.
  • the face extension image includes a face image, and is an image that is intercepted based on an area in which the face image is expanded in a region in the target image.
  • the size of the face extension image is larger than the size of the face image.
  • For example, the area of the face extension image may be obtained by expanding the area of the face image outward in all four directions, so that the horizontal size of the face extension image is three times the horizontal size of the face image and the vertical size of the face extension image is three times the vertical size of the face image.
  • The ratio of the size of the face extension image to the size of the face image is not limited here and may be set according to the needs of the actual application scene; it is only required that the face extension image contains the face image and that the size of the face extension image is larger than the size of the face image.
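  • A minimal sketch of the expansion just described, assuming the face region is given as a rectangle (x, y, w, h): the region is grown about its centre so that each dimension of the face extension region is three times the face size, clipped to the image bounds. The 3x factor is the example ratio from the preceding paragraph and may be changed per scene.

```python
def expand_face_region(x, y, w, h, img_w, img_h, scale=3.0):
    """Return (x0, y0, x1, y1) of the face extension region."""
    cx, cy = x + w / 2.0, y + h / 2.0          # centre of the face region
    new_w, new_h = w * scale, h * scale        # expanded width and height
    x0 = max(0, int(round(cx - new_w / 2.0)))
    y0 = max(0, int(round(cy - new_h / 2.0)))
    x1 = min(img_w, int(round(cx + new_w / 2.0)))
    y1 = min(img_h, int(round(cy + new_h / 2.0)))
    return x0, y0, x1, y1
```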
  • Background feature data is data that reflects the characteristics of the background portion of the image.
  • The background feature data may include the color-value distribution of background image pixels, pixel-continuity features of the background image, and the like. It can be understood that, because an image frame obtained by recapturing a photo is captured from a two-dimensional planar image, the border or boundary of that planar image may appear in the image frame, and the image pixels at that border or boundary are discontinuous; this is not the case for an image frame captured of a living body, which is obtained by imaging a three-dimensional object in a real scene.
  • Specifically, the server may obtain the face extension image formed by expanding the area of the face image according to a preset area expansion manner, and then extract background feature data from the face extension image in the target image according to a preset image feature extraction strategy. The preset area expansion manner may expand the area in only one direction or in multiple directions.
  • the preset image feature extraction strategy may be a preset image feature extraction algorithm or a pre-trained feature extraction machine learning model.
  • The server may extract the background feature data only from the background portion of the face extension image other than the face image, or extract the background feature data from the entire face extension image.
  • S310 Perform living body recognition according to the background feature data to obtain a second confidence level; the second confidence level indicates a second probability of identifying the living body.
  • the server may classify the target image according to the extracted background feature data, and classify the target image into the living face image class when the extracted background feature data conforms to the background feature data of the living face image.
  • Otherwise, the target image is classified into the non-living face image class.
  • The second confidence level indicates the degree of conformity between the extracted background feature data and the background feature data of living face images; the higher the conformity, the higher the second confidence level, that is, the more likely the target image is a living face image.
  • The background feature data extracted by the pre-trained machine learning model is feature data that the model has learned, during training, to extract in order to distinguish living face images from non-living face images. Since the border or boundary of a photo may exist in an image frame obtained by recapturing the photo, whereas no such border or boundary exists in an image frame captured of a living body, border or boundary features can effectively distinguish living face images from non-living face images. It can then be understood that the feature data the machine learning model learns to extract may include border feature data or boundary feature data.
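  • In this application the border/boundary cue is learned implicitly by the trained model; purely as a hand-crafted illustration of the same intuition (an assumption, not the claimed method), long straight edges inside the face extension image can be detected directly:

```python
import cv2
import numpy as np

def has_long_straight_edges(face_ext_bgr, min_len_ratio=0.6):
    """Heuristic border cue: recaptured photos or screens often leave long
    straight edges (the photo or screen frame) inside the face extension image."""
    gray = cv2.cvtColor(face_ext_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                       # edge map
    min_len = int(min(gray.shape) * min_len_ratio)         # only keep long lines
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    return lines is not None and len(lines) > 0
```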
  • The server can fuse the two confidence levels to obtain a final confidence, and then obtain, according to the final confidence, the recognition result of whether the target image is a living face image.
  • Further, the server may obtain, according to this recognition result together with a face recognition result, a verification result of whether identity verification passes, and perform the corresponding operation according to the verification result. This ensures, to a great extent, that the operation is being performed by the user in person. For example, for bank account opening in a banking application, if the target image is determined to be a living face image and face recognition matches, identity verification passes and the subsequent account-opening operation continues. For another example, in an access control scenario, if the target image is determined to be a living face image and face recognition matches, identity verification passes and a door-open command is output.
  • The above living body recognition method can, after acquiring the target image, automatically extract face feature data from the face image in the target image and perform living body recognition according to the face feature data to obtain one probability of recognizing a living body.
  • It can also automatically extract background feature data from the face extension image in the target image and perform living body recognition according to the background feature data to obtain another probability, so that the recognition result of whether the target image is a living face image can be obtained by combining the two probabilities.
  • S304 includes: determining a face region in the target image; capturing a face image in the target image according to the face region; inputting the face image into the first recognition model, and extracting the face image by using the first recognition model Face feature data.
  • the face area is the position of the face in the target image.
  • the server may identify a face region in the target image by a face detection algorithm.
  • The face detection algorithm can be selected as required, for example the OpenCV face detection algorithm, the face detection algorithms built into iOS or Android, or the Tencent YouTu face detection algorithm.
  • the face detection algorithm can return whether the target image includes a face and a specific face region, such as identifying the position of the face through a rectangular frame.
  • the server may intercept the target image along the face area to obtain a face image.
  • the face image may include only an image of a face area.
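  • A minimal sketch of this step using the OpenCV face detector mentioned above (the cascade file and detection parameters are standard OpenCV defaults, chosen here only for illustration):

```python
import cv2

# Standard OpenCV frontal-face Haar cascade shipped with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(image_bgr):
    """Return the face image intercepted along the detected face region, or None
    if the frame contains no face and should not be used as the target image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face region
    return image_bgr[y:y + h, x:x + w]
```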
  • Figure 4 shows a schematic diagram of multi-scale area partitioning in one embodiment.
  • the figure is a target image collected by a terminal camera.
  • the area 411 is a face area, and the image taken according to the area 411 is a face image.
  • the figure is a target image acquired by a face acquisition camera in the access control system.
  • the area 421 is a face area, and the image taken according to the area 421 is a face image.
  • the recognition model is a machine learning model with feature extraction and feature recognition after training.
  • Machine learning (Machine Learning) is abbreviated as ML.
  • the machine learning model provides feature extraction and feature recognition capabilities through sample learning.
  • the machine learning model can use a neural network model, a support vector machine, or a logistic regression model. It can be understood that the first recognition model here and the second recognition model in the following are both recognition models, but extract recognition models of different feature data.
  • the first recognition model is used to extract facial feature data of the face image in the target image.
  • the first recognition model may be a complex network model formed by multiple layers interconnected.
  • The first recognition model may include multiple feature extraction layers; each feature extraction layer has corresponding model parameters, and each layer may have multiple model parameters. The model parameters in each feature extraction layer apply a linear or non-linear transformation to the input image and produce a feature map (Feature Map) as the operation result.
  • Each feature extraction layer receives the operation result of the previous layer, and outputs the operation result of the layer to the next layer after its own operation.
  • the model parameters are the various parameters in the model structure, which can reflect the correspondence between the output and input of each layer of the model.
  • After intercepting the face image, the server inputs the face image into the first recognition model; the feature extraction layers included in the first recognition model perform linear or non-linear transformation operations on the input face image layer by layer until the last feature extraction layer in the first recognition model completes its operation, and the server then obtains, according to the output of the last feature extraction layer of the first recognition model, the face feature data extracted for the current input image.
  • the first recognition model may be a general machine learning model with feature extraction capabilities that has been trained to complete.
  • the general machine learning model is not effective when it is used for extraction of specific scenes, so it is necessary to further train and optimize the general machine learning model through samples dedicated to specific scenes.
  • the server may acquire a model structure and model parameters according to a general machine learning model, and import the model parameters into the first recognition model structure to obtain a first recognition model with model parameters.
  • the model parameters carried by the first recognition model participate in the training as initial parameters for training the first recognition model in this embodiment.
  • the first recognition model may also be a machine learning model that the developer initializes based on historical model training experience.
  • The model parameters it carries then directly participate in the training as the initial parameters for training the first recognition model in this embodiment.
  • the parameter initialization of the first recognition model may be Gaussian random initialization.
  • Inputting the face image into the first recognition model and extracting the face feature data of the face image by the first recognition model includes: inputting the face image into the first recognition model, and extracting the face feature data of the face image through the convolutional layers of the first recognition model.
  • S306 includes: classifying the target image according to the extracted facial feature data by using the fully connected layer of the first recognition model, and obtaining a first confidence that the target image is a living facial image.
  • the convolutional layer is a feature extraction layer in a convolutional neural network.
  • the convolution layer may be a plurality of layers, each convolution layer has a corresponding convolution kernel, and each layer may have multiple convolution kernels.
  • the convolution layer performs a convolution operation on the input image by the convolution kernel, and extracts the image feature to obtain a feature map as a calculation result.
  • A fully connected layer is abbreviated as FC.
  • Specifically, the server inputs the face image into the first recognition model; the convolutional layers included in the first recognition model perform convolution operations on the input face image layer by layer until the last convolutional layer in the first recognition model completes its convolution operation; the output of the last convolutional layer is then used as the input of the fully connected layer, and the first confidence that the target image is a living face image is obtained.
  • the first confidence level may be directly the score of the target image output by the fully connected layer as a live face image.
  • the first confidence level may also be a value in the range of values (0, 1) obtained by the server normalizing the score of the output of the fully connected layer through the regression layer (softmax layer). At this time, the first confidence can also be understood as the probability that the target image is a living face image.
  • In this embodiment, the feature map output by the convolutional layers of the recognition model can well reflect the features extracted from the corresponding input image, so that classifying with the fully connected layer according to this feature map yields the confidence that the target image is a living face image and improves the recognition accuracy of the recognition model.
  • Figure 5 shows a schematic diagram of the use of the recognition model in one embodiment.
  • the figure is a schematic diagram used for the first recognition model.
  • Specifically, the server obtains the target image, intercepts the face image from the target image, and inputs the face image into the first recognition model. The multi-layer convolutional layers of the first recognition model perform convolution layer by layer: each convolutional layer receives the operation result of the previous layer and, after its own operation, outputs its result to the next layer. The last convolutional layer inputs its operation result into the fully connected layer, the fully connected layer outputs a score that the target image is a living face image, and the regression layer (softmax layer) then normalizes the score output by the fully connected layer into a value in the range (0, 1), that is, the first confidence.
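  • The following PyTorch sketch shows the structure just described (stacked convolutional layers, a fully connected layer, and a softmax regression layer that normalizes the score into (0, 1)). The layer sizes and input resolution are illustrative assumptions, not parameters disclosed in this application.

```python
import torch
import torch.nn as nn

class RecognitionModel(nn.Module):
    """Convolutional layers followed by a fully connected layer; softmax is applied outside."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, num_classes)      # fully connected (FC) layer

    def forward(self, x):
        feature_map = self.conv_layers(x)         # output of the last convolutional layer
        return self.fc(feature_map.flatten(1))    # raw scores for [non-living, living]

model = RecognitionModel()
face = torch.randn(1, 3, 112, 112)                # a cropped face image tensor (illustrative size)
scores = model(face)
first_confidence = torch.softmax(scores, dim=1)[0, 1].item()  # softmax layer -> value in (0, 1)
```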
  • In this embodiment, the image of the face region is intercepted and only that image is used as the input of the first recognition model, so that the first recognition model extracts face features and classifies the target image according to the extracted face feature data; noise interference from image content outside the face region is thus avoided, and the recognition effect is better.
  • the living body recognition method further includes: acquiring an image sample set, the image sample set including a living face image and a non-living face image; and the face region of each image sample according to the image sample set, in the corresponding image sample The face image is intercepted to obtain a first training sample; and the first recognition model is trained according to the first training sample.
  • the image sample set includes several image samples.
  • the image samples may be live face images and non-living face images.
  • the ratio of the number of living face images to non-living face images may be 1:1 or other ratios.
  • Specifically, the server may intercept face images from the image samples in the image sample set to obtain the first training samples.
  • The server may use face images intercepted from living face images as positive training samples and face images intercepted from non-living face images as negative training samples, and train, with the positive and negative training samples, the ability of the first recognition model to classify the target image as a living face image or a non-living face image.
  • training the first recognition model according to the first training sample comprises: acquiring an initialized first recognition model; determining a first training label corresponding to the first training sample; and inputting the first training sample into the first recognition model Obtaining a first recognition result; adjusting a model parameter of the first recognition model according to a difference between the first recognition result and the first training label, and continuing training until the training stop condition is satisfied.
  • The initialized first recognition model may be obtained by importing the model parameters of a trained general machine learning model into the first recognition model structure, yielding a first recognition model that carries model parameters.
  • the model parameters carried by the first recognition model participate in the training as initial parameters for training the first recognition model in this embodiment.
  • the first recognition model initialized may also be a machine learning model initialized by the developer based on the historical model training experience.
  • The model parameters it carries then directly participate in the training as the initial parameters for training the first recognition model in this embodiment.
  • the parameter initialization of the first recognition model may be Gaussian random initialization.
  • the server may add a training tag to each of the first training samples.
  • the training tag is used to indicate whether the image sample from which the first training sample is taken is a living face image.
  • the server trains the first recognition model based on the first training sample and the corresponding added training tag.
  • The first recognition model outputs the first recognition result; the server may compare the first recognition result with the training label of the input first training sample, and adjust the model parameters of the first recognition model in the direction that reduces the difference between them.
  • the training stop condition may be the preset number of iterations, or the trained machine learning model may reach the classification performance index.
  • the classification performance indicator may be that the classification correct rate reaches a first preset threshold, or the classification error rate is lower than a second preset threshold.
  • the server may also divide a portion of the training samples from the first training sample for use as test samples.
  • the test sample is a sample used for model correction after model training.
  • The trained first recognition model is calibrated using the test samples. Specifically, a test sample is input into the trained first recognition model, and the output of the first recognition model is compared with the training label of the test sample; if the difference between the two falls within the allowable error range, calibration of the first recognition model is complete, and if the difference falls outside the allowable error range, the parameters of the first recognition model are adjusted to reduce the difference and complete the calibration.
  • The server may also establish a cost function according to the actual output and the expected output of the first recognition model, minimize the cost function by stochastic gradient descent, and update the model parameters of the first recognition model.
  • a cost function such as a variance cost function or a cross entropy cost function.
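  • A minimal training sketch along these lines, reusing the RecognitionModel class from the earlier sketch and assuming a hypothetical `train_loader` that yields (face crop, label) pairs with label 1 for living and 0 for non-living; the cross-entropy cost function, the SGD settings, and the epoch count standing in for the training stop condition are illustrative choices, not values disclosed here.

```python
import torch
import torch.nn as nn

model = RecognitionModel()                      # from the earlier sketch
criterion = nn.CrossEntropyLoss()               # cost function over recognition result vs. training label
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # stochastic gradient descent

num_epochs = 10                                 # stand-in for the training stop condition
for epoch in range(num_epochs):
    for images, labels in train_loader:         # assumed DataLoader of first training samples
        scores = model(images)
        loss = criterion(scores, labels)        # difference between recognition result and label
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                        # adjust model parameters to reduce the difference
```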
  • the first recognition model is trained by the living face image and the non-living face image, and the model parameters can be dynamically adjusted according to the classification performance of the machine learning model, so that the training task can be completed more accurately and efficiently.
  • Figure 6 shows a schematic diagram of recognition model training in one embodiment.
  • the figure is a schematic diagram of training by the first recognition model.
  • the server obtains the image sample, the face image is intercepted from the image sample as the first training sample, and a training tag is added for the first training sample.
  • The server then inputs the first training sample into the first recognition model; the multi-layer convolutional layers of the first recognition model perform convolution layer by layer, each convolutional layer receiving the operation result of the previous layer and, after its own operation, outputting its result to the next layer; the last convolutional layer inputs its operation result into the fully connected layer, and the fully connected layer outputs the classification result for the training sample.
  • the server then builds a cost function based on the difference between the classification result and the training tag, and adjusts the model parameters by minimizing the cost function.
  • In this embodiment, the powerful learning and representation capability of the machine learning model is used to learn the recognition capability, and the trained machine learning model is used to identify whether the target image is a living face image, achieving a better recognition effect on the target image than conventional methods.
  • S308 includes: determining the face region in the target image; expanding the face region to obtain a face extension region; intercepting the face extension image in the target image according to the face extension region; and inputting the face extension image into the second recognition model, and extracting the background feature data of the face extension image by the second recognition model.
  • the face extension image includes a face image, and is an image that is intercepted based on an area in which the face image is expanded in a region in the target image.
  • the size of the face extension image is larger than the size of the face image.
  • Specifically, the server may preset an expansion manner for obtaining the face extension region; after determining the face region in the target image, the server expands the face region according to this expansion manner to obtain the face extension region, and then intercepts the target image along the face extension region to obtain the face extension image.
  • the preset area expansion manner may be extended in only one direction or extended in multiple directions.
  • For a target image acquired by a terminal camera, the target image itself may be used directly as the face extension image, because the field of view of such a camera is small.
  • the figure is a target image collected by a terminal camera.
  • the area 411 is a face area
  • the area 412 is a face extension area obtained by the extended area 411
  • the image taken according to the area 412 is a face extension image.
  • the figure is a target image acquired by a face acquisition camera in the access control system.
  • the area 421 is a face area
  • the area 422 is a face extension area obtained by the extension area 421
  • the image taken according to the area 422 is a face extension image.
  • the second recognition model is for extracting background feature data of the face extension image in the target image.
  • After intercepting the face extension image, the server inputs the face extension image into the second recognition model; the feature extraction layers included in the second recognition model perform linear or non-linear transformation operations on the input image layer by layer until the last feature extraction layer in the second recognition model completes its operation, and the server then obtains, according to the output of the last feature extraction layer of the second recognition model, the background feature data extracted for the current input image.
  • Inputting the face extension image into the second recognition model and extracting the background feature data of the face extension image by the second recognition model includes: inputting the face extension image into the second recognition model, and extracting the background feature data of the face extension image through the convolutional layers of the second recognition model.
  • S310 includes: classifying the target image according to the extracted background feature data by using a fully connected layer of the second recognition model, and obtaining a second confidence that the target image is a living face image.
  • Specifically, the server inputs the face extension image into the second recognition model; the convolutional layers included in the second recognition model perform convolution operations on the input image layer by layer until the last convolutional layer in the second recognition model completes its convolution operation; the output of the last convolutional layer is then used as the input of the fully connected layer, and the second confidence that the target image is a living face image is obtained.
  • the second confidence level may be directly the score of the target image output by the fully connected layer as the live face image.
  • the second confidence may also be a value in the range of values (0, 1) obtained by the server normalizing the fraction of the output of the fully connected layer through the regression layer (softmax layer). At this time, the second confidence can also be understood as the probability that the target image is a living face image.
  • In this embodiment, the feature map output by the convolutional layers of the recognition model can well reflect the features extracted from the corresponding input image, so that classifying with the fully connected layer according to this feature map yields the confidence that the target image is a living face image and improves the recognition accuracy of the recognition model.
  • this figure is a schematic diagram used for the second recognition model.
  • Specifically, the server acquires the target image, intercepts the face extension image from the target image, and inputs the face extension image into the second recognition model. The multi-layer convolutional layers of the second recognition model perform convolution layer by layer: each convolutional layer receives the operation result of the previous layer and, after its own operation, outputs its result to the next layer. The last convolutional layer then inputs its operation result into the fully connected layer, the fully connected layer outputs a score that the target image is a living face image, and the regression layer (softmax layer) normalizes the score output by the fully connected layer into a value in the range (0, 1), that is, the second confidence.
  • the server may fuse the first confidence and the second confidence to obtain a confidence that the target image is a living face image.
  • In this embodiment, the face extension image is intercepted from the target image, and background feature data is extracted from the face extension image so that whether the target image is a living image can be identified according to the background feature data. Because the background feature data includes information about the environment surrounding the face, the influence of a picture border when a recaptured picture is used to impersonate a real person can be effectively avoided, and the recognition effect is improved.
  • the living body recognition method further includes: acquiring an image sample set, the image sample set including a living face image and a non-living face image; according to the face extension area of each image sample in the image sample set, in the corresponding image sample The face expansion image is intercepted to obtain a second training sample; and the second recognition model is trained according to the second training sample.
  • Specifically, the server may intercept face extension images from the image samples in the image sample set to obtain the second training samples.
  • The server may use face extension images intercepted from living face images as positive training samples and face extension images intercepted from non-living face images as negative training samples, and train, with the positive and negative training samples, the ability of the second recognition model to classify the target image as a living face image or a non-living face image.
  • training the second recognition model according to the second training sample comprises: acquiring an initialized second recognition model; determining a second training tag corresponding to the second training sample; and inputting the second training sample into the second recognition model Obtaining a second recognition result; adjusting the model parameters of the second recognition model according to the difference between the second recognition result and the second training tag, and continuing the training until the training stop condition is satisfied.
  • the second recognition model can be trained by the training method of training the first recognition model described above.
  • the first recognition model and the second recognition model are independent of each other and can be trained in parallel.
  • the figure is a schematic diagram of training by the second recognition model.
  • the server obtains the image sample, the face extension image is taken out from the image sample as a second training sample, and a training label is added for the second training sample.
  • The server then inputs the second training sample into the second recognition model; the multi-layer convolutional layers of the second recognition model perform convolution layer by layer, each convolutional layer receiving the operation result of the previous layer and, after its own operation, outputting its result to the next layer; the last convolutional layer inputs its operation result into the fully connected layer, and the fully connected layer outputs the classification result for the training sample.
  • the server then builds a cost function based on the difference between the classification result and the training tag, and adjusts the model parameters by minimizing the cost function.
  • In an embodiment, the first recognition model and the second recognition model may share the front convolutional layers and then split into two branches that are trained separately. This can improve the efficiency of model training.
  • The server may also jointly train the first recognition model and the second recognition model. Specifically, the server may obtain the feature maps output by the first recognition model and by the second recognition model respectively, and fuse the feature maps output by the two models to obtain a feature fusion map, so that the feature fusion map includes both the face feature data and the background feature data; the server can then use the feature fusion map as the input of a fully connected layer and output, through the fully connected layer, the confidence that the target image is a living face image, as illustrated in the sketch below.
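  • A sketch of this feature-fusion idea, assuming two convolutional branches whose feature maps are concatenated before a single fully connected layer; all layer sizes are illustrative assumptions rather than disclosed parameters.

```python
import torch
import torch.nn as nn

class FusionLivenessNet(nn.Module):
    """Two branches (face crop and face extension crop); their feature maps are
    fused and classified by one fully connected layer."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.face_branch = branch()       # carries the face feature data
        self.context_branch = branch()    # carries the background feature data
        self.fc = nn.Linear(64, 2)        # fully connected layer over the feature fusion map

    def forward(self, face, face_ext):
        f1 = self.face_branch(face).flatten(1)
        f2 = self.context_branch(face_ext).flatten(1)
        fused = torch.cat([f1, f2], dim=1)                  # feature fusion map
        return torch.softmax(self.fc(fused), dim=1)[:, 1]   # confidence of a living face image
```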
  • S302 includes: entering an image acquisition state; and, in the image acquisition state, selecting as the target image a captured image frame whose face region matches a preset face region in the acquisition field of view.
  • the image acquisition state is a state in which the camera scans for image acquisition.
  • Specifically, the built-in camera of the terminal, or an external camera associated with the terminal, may be used to scan the target object in the current field of view of the camera, with a preset face region set in that field of view.
  • The terminal collects image frames at a preset frame rate and compares the face region in each captured image frame with the preset face region; when the face region of an image frame matches the preset face region in the acquisition field of view, that image frame is selected as the target image for living face image discrimination.
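  • One simple way to implement the "face region matches the preset face region" test is an overlap (IoU) check between the detected box and the preset box; the IoU criterion and threshold below are illustrative assumptions rather than the matching rule specified here.

```python
def region_matches_preset(face_box, preset_box, iou_threshold=0.5):
    """Boxes are (x0, y0, x1, y1); returns True when the detected face region
    sufficiently overlaps the preset face region in the acquisition field of view."""
    ax0, ay0, ax1, ay1 = face_box
    bx0, by0, bx1, by1 = preset_box
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax1, bx1), min(ay1, by1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return union > 0 and inter / union >= iou_threshold
```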
  • S312 includes: merging the first confidence level and the second confidence level to obtain a confidence that the target image is a living face image; and when the confidence level reaches a preset reliability threshold, determining that the target image is a living human face image.
  • the server may combine the first confidence level and the second confidence level by using a preset fusion manner to obtain a confidence that the final target image is a living face image. For example, the server can compare the first confidence with the second confidence, with the confidence of the smaller of the two confidences as the final confidence. For another example, the server may obtain a weighted average of the first confidence and the second confidence, and use the calculated weighted average as the final confidence. In the calculation process, the weights of the first confidence level and the second confidence level may be adjusted according to actual scene needs. In the scenario where the face feature data has a large influence, the weight of the first confidence is greater than the weight of the second confidence. In the case where the background feature data has a large influence, the reverse is true.
  • Further, the server may compare the calculated final confidence with a preset confidence threshold; when the final confidence reaches the preset confidence threshold, the target image is determined to be a living face image, and when the final confidence is less than the preset confidence threshold, the target image is determined to be a non-living face image.
  • The preset confidence threshold is a threshold set empirically; a target image whose confidence is above the threshold may be considered a living face image.
  • In this embodiment, by fusing the confidences obtained from recognition based on the two kinds of image features, the effects of the face features and the background features are comprehensively considered, thereby improving the living body recognition effect.
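  • A sketch of the two fusion options described above (taking the smaller confidence, or a weighted average) followed by the threshold comparison; the weights and threshold are illustrative values to be tuned per scene, not values disclosed here.

```python
def fuse_confidences(c1, c2, w1=0.5, w2=0.5, threshold=0.9, mode="weighted"):
    """Fuse the first and second confidences and compare with a preset threshold."""
    if mode == "min":
        confidence = min(c1, c2)                      # smaller of the two confidences
    else:
        confidence = (w1 * c1 + w2 * c2) / (w1 + w2)  # weighted average
    return confidence, confidence >= threshold        # True -> judged a living face image

final_conf, is_live = fuse_confidences(0.97, 0.88)
```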
  • the living body identification method specifically includes the following steps:
  • S706 Acquire an initialized first recognition model; determine a first training label corresponding to the first training sample; input the first training sample into the first recognition model to obtain a first recognition result; according to the first identification result and the first training label Difference, adjust the model parameters of the first recognition model and continue training until the training stop condition is met.
  • S710 Acquire an initialized second recognition model; determine a second training label corresponding to the second training sample; input the second training sample into the second recognition model to obtain a second recognition result; according to the second identification result and the second training label Difference, adjust the model parameters of the second recognition model and continue training until the training stop condition is met.
  • S712 acquire a target image; and determine a face region in the target image.
  • the face image is intercepted in the target image according to the face region.
  • the face image is input into the first recognition model; and the face feature data of the face image is extracted by the convolution layer of the first recognition model.
  • S718 classify the target image according to the extracted facial feature data by using the fully connected layer of the first recognition model, and obtain a first confidence that the target image is a living facial image.
  • S722 Input the face extension image into the second recognition model; extract the background feature data of the face extension image by using the convolution layer of the second recognition model.
  • S724 classify the target image according to the extracted background feature data by using the fully connected layer of the second recognition model, and obtain a second confidence that the target image is a living face image.
  • S728 Determine whether the confidence level reaches the preset reliability threshold; if yes, go to step S730; if no, go to step S732.
  • the face feature data of the face image in the target image is automatically extracted, and then the living body recognition is performed according to the face feature data, and the probability of identifying the living body is obtained.
  • Background feature data can also be automatically extracted from the face extension image in the target image, and living body recognition is then performed according to the background feature data to obtain another probability of recognizing a living body, so that the recognition result of whether the target image is a living face image can be obtained by combining the two probabilities.
  • a living body identification device 800 is provided.
  • the living body identification device 800 includes an acquisition module 801 , a first extraction module 802 , a first identification module 803 , a second extraction module 804 , a second identification module 805 , and an output module 806 .
  • the obtaining module 801 is configured to acquire a target image.
  • the first extraction module 802 is configured to extract facial feature data of the face image in the target image.
  • the first identification module 803 is configured to perform living body recognition according to the facial feature data to obtain a first confidence degree; the first confidence level indicates a first probability of identifying the living body.
  • the second extraction module 804 is configured to extract background feature data from the face extension image; the face extension image is obtained by extending the area where the face image is located.
  • the second identification module 805 is configured to perform living body recognition according to the background feature data to obtain a second confidence level; the second confidence level indicates a second probability of identifying the living body.
  • the output module 806 is configured to obtain, according to the first confidence level and the second confidence, a recognition result of the target image as a living face image.
  • After acquiring the target image, the living body recognition device 800 can automatically extract the face feature data of the face image in the target image and perform living body recognition on it to obtain one probability of recognizing a living body; it also automatically extracts background feature data from the face extension image in the target image and performs living body recognition on that data to obtain another probability. Combining the two probabilities gives the recognition result of whether the target image is a live face image, which ensures the accuracy of living body detection to a certain extent while avoiding the time cost of requiring the user to cooperate with interactions, thereby improving the efficiency of living body detection.
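  • For orientation only, the module structure above could be composed as in the following Python sketch; the callables passed in are assumed to be implementations such as the crop, scoring, and fusion sketches elsewhere in this document, and nothing about this wiring is mandated by the application.

```python
# Hypothetical composition of the living body identification device 800: each member
# mirrors one of the modules 801-806, and recognize() wires them together in the
# order described above.
class LivingBodyIdentificationDevice:
    def __init__(self, acquire, extract_face, score_face,
                 extract_background, score_background, fuse_and_decide):
        self.acquire = acquire                        # acquisition module 801
        self.extract_face = extract_face              # first extraction module 802
        self.score_face = score_face                  # first identification module 803
        self.extract_background = extract_background  # second extraction module 804
        self.score_background = score_background      # second identification module 805
        self.fuse_and_decide = fuse_and_decide        # output module 806

    def recognize(self):
        target = self.acquire()
        first_confidence = self.score_face(self.extract_face(target))
        second_confidence = self.score_background(self.extract_background(target))
        return self.fuse_and_decide(first_confidence, second_confidence)
```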
  • The first extraction module 802 is further configured to: determine a face region in the target image; intercept the face image from the target image according to the face region; and input the face image into the first recognition model and extract the face feature data of the face image through the first recognition model.
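  • The description names OpenCV face detection as one possible way to locate the face region; the following sketch uses a stock Haar-cascade detector for that purpose, with the cascade file and detection parameters chosen purely for illustration.

```python
# Illustrative cropping step for the first extraction module: detect the face region
# in the target image and cut the face image out of it.
import cv2

def crop_face(target_image):
    """Return the crop of the largest detected face, or None if no face is found."""
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # keep the largest face region
    return target_image[y:y + h, x:x + w]
```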
  • the first extraction module 802 is further configured to input the face image into the first recognition model; and extract the face feature data of the face image by using the convolution layer of the first recognition model.
  • the first identification module 803 is further configured to classify the target image according to the extracted facial feature data by using the fully connected layer of the first recognition model, to obtain a first confidence that the target image is a living facial image.
  • the living body identification device 800 further includes a model training module 807.
  • The model training module 807 is configured to acquire an image sample set, where the image sample set includes live face images and non-live face images; intercept a face image from the corresponding image sample according to the face region of each image sample in the image sample set, to obtain a first training sample; and train the first recognition model according to the first training sample.
  • The model training module 807 is further configured to acquire an initialized first recognition model; determine a first training label corresponding to the first training sample; input the first training sample into the first recognition model to obtain a first recognition result; and, according to the difference between the first recognition result and the first training label, adjust the model parameters of the first recognition model and continue training until the training stop condition is satisfied.
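  • A condensed training-loop sketch of that procedure is given below; the loss function, optimizer, learning rate, and the fixed epoch budget used as the stop condition are assumptions, since the application only requires that parameters be adjusted according to the result-label difference until a stop condition is met.

```python
# Hypothetical training loop for a recognition model: compare the model output with
# the training label, adjust the model parameters to reduce the difference, and stop
# after a fixed number of epochs (one common stop condition).
import torch
import torch.nn as nn

def train_recognition_model(model, loader, epochs=10, lr=1e-3):
    """loader yields (image_batch, label_batch) with label 1 = live face, 0 = non-live.

    The model is assumed to output raw class scores (logits) of shape (batch, 2).
    """
    criterion = nn.CrossEntropyLoss()            # measures the result-vs-label difference
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):                      # training stop condition: epoch budget
        for images, labels in loader:
            loss = criterion(model(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                     # adjust model parameters and continue training
    return model
```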
  • The second extraction module 804 is further configured to determine a face region in the target image; expand the face region to obtain a face extension region; intercept the face extension image from the target image according to the face extension region; and input the face extension image into the second recognition model and extract the background feature data of the face extension image through the second recognition model.
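  • The description gives, as one example, expanding the face region by one face width/height in each of the four directions; the sketch below follows that example and clamps the expanded rectangle to the image bounds before cropping. The image is assumed to be a NumPy array in (height, width) layout.

```python
# Sketch of the face extension step: grow the detected face rectangle outward and crop it.
def crop_face_extension(target_image, face_box, expand=1.0):
    """face_box is (x, y, w, h); returns the face extension crop around it."""
    img_h, img_w = target_image.shape[:2]
    x, y, w, h = face_box
    x0 = max(int(x - expand * w), 0)             # extend left, clamped to the image border
    y0 = max(int(y - expand * h), 0)             # extend up
    x1 = min(int(x + w + expand * w), img_w)     # extend right
    y1 = min(int(y + h + expand * h), img_h)     # extend down
    return target_image[y0:y1, x0:x1]
```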
  • the second extraction module 804 is further configured to input the face extension image into the second recognition model; and extract the background feature data of the face extension image by the convolution layer of the second recognition model.
  • the second identification module 805 is further configured to classify the target image according to the extracted background feature data by using the fully connected layer of the second recognition model, and obtain a second confidence that the target image is a living face image.
  • The model training module 807 is further configured to acquire an image sample set, where the image sample set includes live face images and non-live face images; intercept a face extension image from the corresponding image sample according to the face extension region of each image sample in the image sample set, to obtain a second training sample; and train the second recognition model according to the second training sample.
  • The model training module 807 is further configured to acquire an initialized second recognition model; determine a second training label corresponding to the second training sample; input the second training sample into the second recognition model to obtain a second recognition result; and, according to the difference between the second recognition result and the second training label, adjust the model parameters of the second recognition model and continue training until the training stop condition is satisfied.
  • The acquisition module 801 is further configured to enter an image capture state and, in the image capture state, select a captured image frame as the target image, where the face region of the selected image frame matches the preset face region in the capture field of view.
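  • One simple way to implement the "matches the preset face region" check is an intersection-over-union test between the detected face rectangle and the preset rectangle, as sketched below; the IoU criterion and the 0.5 cut-off are assumptions, since the application does not prescribe a particular matching rule.

```python
# Hypothetical frame-selection rule: accept a captured frame as the target image only
# when its detected face region overlaps the preset face region sufficiently.
def regions_match(face_box, preset_box, iou_threshold=0.5):
    """Both boxes are (x, y, w, h); returns True when their IoU reaches the threshold."""
    ax, ay, aw, ah = face_box
    bx, by, bw, bh = preset_box
    ix0, iy0 = max(ax, bx), max(ay, by)
    ix1, iy1 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = aw * ah + bw * bh - inter
    return union > 0 and inter / union >= iou_threshold
```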
  • The output module 806 is further configured to fuse the first confidence and the second confidence to obtain the confidence that the target image is a live face image, and to determine that the target image is a live face image when that confidence reaches the preset confidence threshold.
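  • The description mentions two candidate fusion rules: taking the smaller of the two confidences, or taking a weighted average whose weights can be tuned to whether face features or background features matter more in a given scenario. Both are sketched below; the default equal weights and the 0.5 threshold are assumptions.

```python
# Two illustrative fusion rules for combining the two confidences, plus the final decision.
def fuse_min(c1, c2):
    return min(c1, c2)                       # conservative rule: keep the smaller confidence

def fuse_weighted(c1, c2, w1=0.5, w2=0.5):
    return w1 * c1 + w2 * c2                 # weights adjustable per scenario

def decide(c1, c2, threshold=0.5, fuse=fuse_weighted):
    confidence = fuse(c1, c2)
    return confidence >= threshold           # True: live face image; False: non-live
```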
  • Figure 10 is a diagram showing the internal structure of a computer device in one embodiment.
  • The computer device may specifically be the terminal 110 of FIG. 1 or the computer device 220 of FIG. 2.
  • the computer device includes a processor, a memory, a network interface, a camera, and a display screen connected by a system bus.
  • the memory comprises a non-volatile storage medium and an internal memory.
  • The non-volatile storage medium of the computer device stores an operating system and can also store computer readable instructions that, when executed by the processor, cause the processor to implement the living body identification method.
  • The internal memory can also store computer readable instructions that, when executed by the processor, cause the processor to perform the living body identification method.
  • the display of the computer device can be a liquid crystal display or an electronic ink display.
  • Those skilled in the art can understand that the structure shown in FIG. 10 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; the specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • Figure 11 is a diagram showing the internal structure of a computer device in one embodiment.
  • The computer device may specifically be the server 120 of FIG. 1 or the computer device 220 of FIG. 2.
  • the computer device includes a processor, a memory, and a network interface connected by a system bus.
  • the memory comprises a non-volatile storage medium and an internal memory.
  • The non-volatile storage medium of the computer device stores an operating system and can also store computer readable instructions that, when executed by the processor, cause the processor to implement the living body identification method.
  • The internal memory can also store computer readable instructions that, when executed by the processor, cause the processor to perform the living body identification method.
  • It will be understood by those skilled in the art that the structure shown in FIG. 11 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; the specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • The living body identification device provided by the present application may be implemented in the form of computer readable instructions, which may be run on a computer device as shown in FIG. 10 or FIG. 11.
  • The non-volatile storage medium of the computer device can store the instruction modules constituting the living body identification device, for example, the acquisition module 801, the first extraction module 802, the first identification module 803, the second extraction module 804, the second identification module 805, and the output module 806 shown in FIG. 8.
  • The computer readable instructions composed of the respective instruction modules cause the processor to perform the steps in the living body identification method of the embodiments of the present application described in this specification.
  • For example, the computer device shown in FIG. 11 can acquire a target image through the acquisition module 801 in the living body recognition device 800 shown in FIG. 8.
  • the face feature data of the face image in the target image is extracted by the first extraction module 802.
  • the first recognition module 803 performs the living body recognition according to the facial feature data to obtain a first confidence degree; the first confidence level indicates the first probability of identifying the living body.
  • the background feature data is extracted from the face extension image by the second extraction module 804; the face extension image is obtained by extending the region of the face image.
  • the second recognition module 805 performs biometric recognition based on the background feature data to obtain a second confidence level; the second confidence level indicates a second probability of identifying the living body.
  • the output module 806 obtains the recognition result of the target image as the living face image according to the first confidence level and the second confidence level.
  • In one embodiment, a computer readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the processor is caused to perform the following steps: acquiring a target image; extracting face feature data of the face image in the target image; performing living body recognition according to the face feature data to obtain a first confidence, the first confidence indicating a first probability of recognizing a living body; extracting background feature data from a face extension image, the face extension image being obtained by extending the region where the face image is located; performing living body recognition according to the background feature data to obtain a second confidence, the second confidence indicating a second probability of recognizing a living body; and obtaining, according to the first confidence and the second confidence, a recognition result of whether the target image is a live face image.
  • Extracting the face feature data of the face image in the target image includes: determining a face region in the target image; intercepting the face image from the target image according to the face region; and inputting the face image into the first recognition model and extracting the face feature data of the face image through the first recognition model.
  • Inputting the face image into the first recognition model and extracting the face feature data of the face image through the first recognition model includes: inputting the face image into the first recognition model, and extracting the face feature data of the face image through the convolution layer of the first recognition model.
  • Performing living body recognition according to the face feature data to obtain the first confidence includes: classifying the target image according to the extracted face feature data through the fully connected layer of the first recognition model, to obtain a first confidence that the target image is a live face image.
  • When the computer program is executed by the processor, the processor is further caused to perform the following steps: acquiring an image sample set, the image sample set including live face images and non-live face images; intercepting a face image from the corresponding image sample according to the face region of each image sample in the image sample set, to obtain a first training sample; and training the first recognition model according to the first training sample.
  • Training the first recognition model according to the first training sample includes: acquiring an initialized first recognition model; determining a first training label corresponding to the first training sample; inputting the first training sample into the first recognition model to obtain a first recognition result; and adjusting the model parameters of the first recognition model according to the difference between the first recognition result and the first training label and continuing training until the training stop condition is satisfied.
  • Extracting background feature data from the face extension image includes: determining a face region in the target image; expanding the face region to obtain a face extension region; intercepting the face extension image from the target image according to the face extension region; and inputting the face extension image into the second recognition model and extracting the background feature data of the face extension image through the second recognition model.
  • Inputting the face extension image into the second recognition model and extracting the background feature data of the face extension image through the second recognition model includes: inputting the face extension image into the second recognition model, and extracting the background feature data of the face extension image through the convolution layer of the second recognition model.
  • Performing living body recognition according to the background feature data to obtain the second confidence includes: classifying the target image according to the extracted background feature data through the fully connected layer of the second recognition model, to obtain a second confidence that the target image is a live face image.
  • When the computer program is executed by the processor, the processor is further caused to perform the following steps: acquiring an image sample set, the image sample set including live face images and non-live face images; intercepting a face extension image from the corresponding image sample according to the face extension region of each image sample in the image sample set, to obtain a second training sample; and training the second recognition model according to the second training sample.
  • Training the second recognition model according to the second training sample includes: acquiring an initialized second recognition model; determining a second training label corresponding to the second training sample; inputting the second training sample into the second recognition model to obtain a second recognition result; and adjusting the model parameters of the second recognition model according to the difference between the second recognition result and the second training label and continuing training until the training stop condition is satisfied.
  • Acquiring the target image includes: entering an image capture state; and, in the image capture state, selecting a captured image frame as the target image, where the face region of the selected image frame matches the preset face region in the capture field of view.
  • Obtaining, according to the first confidence and the second confidence, the recognition result of whether the target image is a live face image includes: fusing the first confidence and the second confidence to obtain the confidence that the target image is a live face image; and determining that the target image is a live face image when that confidence reaches the preset confidence threshold.
  • After the target image is acquired, the above storage medium enables the face feature data of the face image in the target image to be extracted automatically and living body recognition to be performed on it to obtain one probability of recognizing a living body; background feature data is likewise extracted automatically from the face extension image in the target image and living body recognition is performed on that data to obtain another probability. Combining the two probabilities gives the recognition result of whether the target image is a live face image, which ensures the accuracy of living body detection to a certain extent while avoiding the time cost of requiring the user to cooperate with interactions, thereby improving the efficiency of living body detection.
  • In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program; when the computer program is executed by the processor, the processor is caused to perform the following steps: acquiring a target image; extracting face feature data of the face image in the target image; performing living body recognition according to the face feature data to obtain a first confidence, the first confidence indicating a first probability of recognizing a living body; extracting background feature data from a face extension image, the face extension image being obtained by extending the region where the face image is located; performing living body recognition according to the background feature data to obtain a second confidence, the second confidence indicating a second probability of recognizing a living body; and obtaining, according to the first confidence and the second confidence, a recognition result of whether the target image is a live face image.
  • Extracting the face feature data of the face image in the target image includes: determining a face region in the target image; intercepting the face image from the target image according to the face region; and inputting the face image into the first recognition model and extracting the face feature data of the face image through the first recognition model.
  • Inputting the face image into the first recognition model and extracting the face feature data of the face image through the first recognition model includes: inputting the face image into the first recognition model, and extracting the face feature data of the face image through the convolution layer of the first recognition model.
  • Performing living body recognition according to the face feature data to obtain the first confidence includes: classifying the target image according to the extracted face feature data through the fully connected layer of the first recognition model, to obtain a first confidence that the target image is a live face image.
  • When the computer program is executed by the processor, the processor is further caused to perform the following steps: acquiring an image sample set, the image sample set including live face images and non-live face images; intercepting a face image from the corresponding image sample according to the face region of each image sample in the image sample set, to obtain a first training sample; and training the first recognition model according to the first training sample.
  • Training the first recognition model according to the first training sample includes: acquiring an initialized first recognition model; determining a first training label corresponding to the first training sample; inputting the first training sample into the first recognition model to obtain a first recognition result; and adjusting the model parameters of the first recognition model according to the difference between the first recognition result and the first training label and continuing training until the training stop condition is satisfied.
  • Extracting background feature data from the face extension image includes: determining a face region in the target image; expanding the face region to obtain a face extension region; intercepting the face extension image from the target image according to the face extension region; and inputting the face extension image into the second recognition model and extracting the background feature data of the face extension image through the second recognition model.
  • Inputting the face extension image into the second recognition model and extracting the background feature data of the face extension image through the second recognition model includes: inputting the face extension image into the second recognition model, and extracting the background feature data of the face extension image through the convolution layer of the second recognition model.
  • Performing living body recognition according to the background feature data to obtain the second confidence includes: classifying the target image according to the extracted background feature data through the fully connected layer of the second recognition model, to obtain a second confidence that the target image is a live face image.
  • When the computer program is executed by the processor, the processor is further caused to perform the following steps: acquiring an image sample set, the image sample set including live face images and non-live face images; intercepting a face extension image from the corresponding image sample according to the face extension region of each image sample in the image sample set, to obtain a second training sample; and training the second recognition model according to the second training sample.
  • Training the second recognition model according to the second training sample includes: acquiring an initialized second recognition model; determining a second training label corresponding to the second training sample; inputting the second training sample into the second recognition model to obtain a second recognition result; and adjusting the model parameters of the second recognition model according to the difference between the second recognition result and the second training label and continuing training until the training stop condition is satisfied.
  • Acquiring the target image includes: entering an image capture state; and, in the image capture state, selecting a captured image frame as the target image, where the face region of the selected image frame matches the preset face region in the capture field of view.
  • Obtaining, according to the first confidence and the second confidence, the recognition result of whether the target image is a live face image includes: fusing the first confidence and the second confidence to obtain the confidence that the target image is a live face image; and determining that the target image is a live face image when that confidence reaches the preset confidence threshold.
  • After the target image is acquired, the above computer device can automatically extract the face feature data of the face image in the target image and perform living body recognition on it to obtain one probability of recognizing a living body; it can also automatically extract background feature data from the face extension image in the target image and perform living body recognition on that data to obtain another probability. Combining the two probabilities gives the recognition result of whether the target image is a live face image, which ensures the accuracy of living body detection to a certain extent while avoiding the time cost of requiring the user to cooperate with interactions, thereby improving the efficiency of living body detection.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A living body recognition method, comprising: acquiring a target image; extracting face feature data of a face image in the target image; performing living body recognition according to the face feature data to obtain a first confidence, the first confidence indicating a first probability of recognizing a living body; extracting background feature data from a face extension image, the face extension image being obtained by extending the region where the face image is located; performing living body recognition according to the background feature data to obtain a second confidence, the second confidence indicating a second probability of recognizing a living body; and obtaining a live face image discrimination result according to the first confidence and the second confidence.

Description

活体识别方法、存储介质和计算机设备
本申请要求于2017年11月20日提交中国专利局,申请号为2017111590398,申请名称为“活体识别方法、装置、存储介质和计算机设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别是涉及一种活体识别方法、存储介质和计算机设备。
背景技术
随着计算机技术的不断发展,用户可以通过计算机完成的操作越来越多,比如申请贷款、远程考试或者远程遥控等。用户在进行各种操作前通常需要进行身份验证,而人脸活体判别作为身份验证的一种有效手段,已在较多的场景中有所应用。
传统的人脸活体判别技术中,通常需要结合一定的交互动作,比如摇头、眨眼等,以通过检测交互动作来区分真人和照片。然而,这种判别方式需要用户配合,只有用户按照提示做出正确的交互动作后,才能通过活体检测,从而导致活体检测率低。
发明内容
根据本申请提供的各种实施例,提供一种活体识别方法、存储介质和计算机设备。
一种活体识别方法,包括:
计算机设备获取目标图像;
所述计算机设备提取所述目标图像中人脸图像的人脸特征数据;
所述计算机设备根据所述人脸特征数据进行活体识别,得到第一置信度;所述第一置信度表示识别到活体的第一概率;
所述计算机设备从人脸扩展图像中提取背景特征数据;所述人脸扩展图像是扩展所述人脸图像所在区域得到的;
所述计算机设备根据所述背景特征数据进行活体识别,得到第二置信度;所述第二置信度表示识别到活体的第二概率;及
所述计算机设备根据所述第一置信度和所述第二置信度,得到活体人脸图像判别结果。
一种存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行以下步骤:获取目标图像;
提取所述目标图像中人脸图像的人脸特征数据;
根据所述人脸特征数据进行活体识别,得到第一置信度;所述第一置信度表示识别到活体的第一概率;
从人脸扩展图像中提取背景特征数据;所述人脸扩展图像是扩展所述人脸图像所在区域得到的;
根据所述背景特征数据进行活体识别,得到第二置信度;所述第二置信度表示识别到活体的第二概率;及
根据所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的识别结果。
一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:
获取目标图像;
提取所述目标图像中人脸图像的人脸特征数据;
根据所述人脸特征数据进行活体识别,得到第一置信度;所述第一置信度表示识别到活体的第一概率;
从人脸扩展图像中提取背景特征数据;所述人脸扩展图像是扩展所述人脸图像所在区域得到的;
根据所述背景特征数据进行活体识别,得到第二置信度;所述第二置信度表示识别到活体的第二概率;及
根据所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的识别结果。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例中活体识别方法的应用环境图;
图2为另一个实施例中活体识别方法的应用环境图;
图3为一个实施例中活体识别方法的流程示意图;
图4为一个实施例中多尺度区域划分的示意图;
图5为一个实施例中识别模型使用的示意图;
图6为一个实施例中识别模型训练的示意图;
图7为另一个实施例中活体识别方法的流程示意图;
图8为一个实施例中活体识别装置的模块结构图;
图9为另一个实施例中活体识别装置的模块结构图;
图10为一个实施例中计算机设备的内部结构图;及
图11为另一个实施例中计算机设备的内部结构图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
图1为一个实施例中活体识别方法的应用环境图。参照图1,该活体识别方法用于活体识别系统。该活体识别系统包括终端110和服务器120。其中,终端110和服务器120通过网络连接。终端110可用于执行该活体识别方法,终端110也可从现实场景中采集包含人脸的目标图像,将采集得到的目标图像发送至服务器120,使得服务器120执行该活体识别方法。终端110具体可以是手机、平板电脑、笔记本电脑等中的至少一种。服务器120具体可以是独立的服务器,也可以是多个独立的服务器组成的服务器集群。
图2为另一个实施例中活体识别方法的应用环境图。参照图2,该活体识别方法用于活体识别系统。该活体识别系统具体可以是门禁控制系统。该门禁控制系统包括人脸采集摄像头210和计算机设备220。人脸采集摄像头210可通过通信接口与计算机设备220连接。人脸采集摄像头210用于从现实场景中采集包含人脸的目标图像,将采集得到的目标图像发送至计算机设备220,使得计算机设备220执行该活体识别方法。计算机设备220可以是终端,也可以是服务器。
图3为一个实施例中活体识别方法的流程示意图。本实施例主要以该方法应用于上述图1中的服务器120来举例说明。参照图3,该活体识别方法具体包括如下步骤:
S302,获取目标图像。
其中,目标图像是待进行活体人脸图像判别的图像。目标图像可以是对活体进行图像采集得到的图像帧,也可以是对包含人脸的已有图像翻拍得到的图像帧。可以理解的是,正是因为目标图像可以是活体人脸图像,也可以是非活体人脸图像,因此本发明所提供的实施例正是用于判别目标图像是否为活体人脸图像的技术方案。
具体地,终端可通过内置的摄像头,或者外置的与终端关联的摄像头, 在摄像头当前的视野下,采集现实场景的图像帧,获取采集得到的图像帧。终端在采集到图像帧后,可检测该图像帧中是否存在人脸图像,若存在人脸图像,则获取该图像帧作为目标图像并发送至服务器,服务器从而获取到目标图像。终端也可在采集到图像帧后,直接将采集的图像帧发送至服务器,服务器再检测该图像帧中是否存在人脸图像,若存在人脸图像,则获取该图像帧作为目标图像。
其中,采集现实场景的图像帧可以是采集现实场景中活体的图像帧,也可以是采集现实场景中包含人脸的已有图像的图像帧。包含人脸的已有图像,比如屏幕上显示的二维图像、身份证件或者人脸照片等。
在一个实施例中,终端可调用摄像头开启摄像扫描模式,并实时扫描摄像头视野中的目标对象,并按照一定的帧率实时地生成图像帧,所生成的图像帧可缓存在终端本地。其中,摄像头视野可以是在终端的显示界面上所呈现出的摄像头可扫描到拍摄到的区域。终端可检测所生成的帧图像中是否存在人脸图像,若是,则获取所生成的帧图像作为目标图像并发送至服务器,服务器从而获取到目标图像。其中,目标对象可以是现实场景中的活体,也可以是包含人脸的已有图像。
在一个实施例中,运行于终端上的应用程序在与相应的服务器交互并需要进行身份验证时,可调用终端内置的摄像头,或者外置的与终端关联的摄像头,在摄像头当前的视野下,采集现实场景的图像帧,获取采集得到的图像帧,进而得到目标图像,将目标图像发送至应用程序相应的服务器。其中,需要进行身份验证的场景,比如社交应用程序中真人实名认证或者账号解封申诉等,再比如银行应用程序中银行账号开户等。
在一个实施例中,在门禁控制系统中,人脸采集摄像头可在摄像头当前的视野下,采集现实场景的图像帧,再将采集得到的图像帧发送至计算机设备,计算机设备在接收到图像帧后,可检测该图像帧中是否存在人脸图像,若存在人脸图像,则获取该图像帧作为目标图像。
S304,提取目标图像中人脸图像的人脸特征数据。
其中,人脸特征数据是用于反映人脸特征的数据。人脸特征数据可以反映出人的性别、人脸的轮廓、发型、眼镜、鼻子、嘴以及各个脸部器官之间的距离等其中的一种或多种特征信息。
在一个实施例中,人脸特征数据可包括面部纹理数据。面部纹理数据可反映面部器官,包括鼻子、耳朵、眉毛、脸颊或嘴唇等的纹理特征与像素点深度。面部纹理数据可以包括面部图像像素点颜色值分布和面部图像像素点亮度值分布。
具体地,服务器在获取目标图像后,可根据预设的图像特征提取策略提取目标图像中人脸图像的人脸特征数据。其中,预设的图像特征提取策略可以是预设的图像特征提取算法或者预先训练完成的特征提取机器学习模型。
S306,根据人脸特征数据进行活体识别,得到第一置信度;该第一置信度表示识别到活体的第一概率。
其中,置信度与目标图像一一对应,用于表示目标图像是活体人脸图像的可信程度。活体人脸图像是对活体进行图像采集得到的图像。置信度越高,表示相应的目标图像是活体人脸图像的概率越高。也就是说,置信度越高,表示在目标图像是对活体进行图像采集得到的图像的概率越大。可以理解的是,这里的第一置信度和后文中的第二置信度均是置信度,但是对应不同的特征数据条件下的置信度。
具体地,服务器可根据提取的人脸特征数据对目标图像进行分类,在提取的人脸特征数据符合活体人脸图像的人脸特征数据时,将目标图像分类至活体人脸图像类。在提取的人脸特征数据符合非活体人脸图像的人脸特征数据时,将目标图像分类至非活体人脸图像类。其中,第一置信度表示提取的人脸特征数据与活体人脸图像的人脸特征数据的符合程度,提取的人脸特征数据与活体人脸图像的人脸特征数据的符合程度越高,则第一置信度越高,也就是说目标图像为活体人脸图像的可能性越大。
在一个实施例中,服务器也可对提取的人脸特征数据进行傅里叶变换,从而在频域空间进行特征分析。在提取的人脸特征数据的频域特征符合活体 人脸图像的人脸特征数据的频域特征时,将目标图像分类至活体人脸图像类。在提取的人脸特征数据的频域特征符合非活体人脸图像的人脸特征数据的频域特征时,将目标图像分类至非活体人脸图像类。
S308,从人脸扩展图像中提取背景特征数据;该人脸扩展图像是扩展人脸图像所在区域得到的。
其中,人脸扩展图像包含人脸图像,是基于人脸图像在目标图像中所在区域扩展得到的区域而截取的图像。人脸扩展图像的尺寸大于人脸图像的尺寸。比如,人脸扩展图像所在的区域可以是将人脸图像所在区域向四个方向分别扩展一倍得到,此时人脸扩展图像的横向尺寸为人脸图像的横向尺寸的三倍,人脸扩展图像的纵向尺寸为人脸图像的纵向尺寸的三倍。可以理解的是,这里不对人脸扩展图像的尺寸与人脸图像的尺寸的比例关系作限定,可随实际应用场景的需要设定,只需要满足人脸扩展图像包含人脸图像,且人脸扩展图像的尺寸大于人脸图像的尺寸即可。
背景特征数据是反映图像中背景部分特征的数据。背景特征数据包括背景图像像素点颜色值分布以及背景图像像素连续性特征等。可以理解的是,由于翻拍照片得到的图像帧是采集二维平面图像得到的图像帧,图像帧中可能存在二维平面图像的边框或者边界,此时图像帧中该边框或者边界处的图像像素不连续,而针对活体采集的图像帧是从现实场景中采集三维立体对象得到的图像帧则不会出现此种情况。
具体地,服务器在获取目标图像后,可根据预设区域扩展方式,得到扩展人脸图像所在区域后形成的人脸扩展图像,再根据预设的图像特征提取策略提取目标图像中人脸扩展图像的背景特征数据。其中,预设区域扩展方式可以是仅一个方向上扩展或者是多个方向上均进行扩展。预设的图像特征提取策略可以是预设的图像特征提取算法或者预先训练完成的特征提取机器学习模型。
在一个实施例中,服务器可仅对人脸扩展图像中人脸图像以外的背景图像提取背景特征数据,也可对人脸扩展图像提取背景特征数据。
S310,根据背景特征数据进行活体识别,得到第二置信度;该第二置信度表示识别到活体的第二概率。
具体地,服务器可根据提取的背景特征数据对目标图像进行分类,在提取的背景特征数据符合活体人脸图像的背景特征数据时,将目标图像分类至活体人脸图像类。在提取的背景特征数据符合非活体人脸图像的背景特征数据时,将目标图像分类至非活体人脸图像类。其中,第二置信度表示提取的背景特征数据与活体人脸图像的背景特征数据的符合程度,提取的背景特征数据与活体人脸图像的背景特征数据的符合程度越高,则第二置信度越高,也就是说目标图像为活体人脸图像的可能性越大。
在一个实施例中,通过预先训练完成的机器学习模型提取的背景特征数据,是机器学习模型在训练过程中经过学习后提取的用于反映活体人脸图像或非活体人脸图像的特征数据。由于翻拍照片得到的图像帧中可能存在照片的边框或者边界,而针对活体采集的图像帧则不存在边框或边界,也就是说,边框或者边界特征能够有效地区分活体人脸图像和非活体人脸图像。那么可以理解的是,机器学习模型所学会的提取的特征数据可以包括边框特征数据或者边界特征数据。
S312,根据第一置信度和第二置信度,得到目标图像为活体人脸图像的识别结果。
具体地,由于第一置信度和第二置信度都是目标图像为活体人脸图像的置信度,而且是根据不同的图像特征分析得到的置信度,因此,服务器可将这两个置信度融合,得到最终的置信度,从而根据最终的置信度得到目标图像是否为活体人脸图像的识别结果。
进一步地,在身份验证场景下,服务器在得到目标图像是否为活体人脸图像的识别结果后,即可根据该识别结果以及人脸识别结果得到身份验证是否通过的验证结果,并执行验证结果相应的操作。这样能在极大程度上保证是用户本人进行的操作。比如,银行应用程序中银行账号开户中,若判定目标图像是活体人脸图像且人脸识别匹配,则身份验证通过,并继续后续的开 户操作。再比如,门禁控制场景下,若判定目标图像是活体人脸图像且人脸识别匹配,则身份验证通过,并输出开门指令。
上述活体识别方法,在获取到目标图像后,一方面可自动对目标图像中人脸图像进行人脸特征数据提取,进而根据人脸特征数据进行活体识别,得到识别到活体的概率,另一方面还可自动对目标图像中人脸扩展图像提取背景特征数据,进而根据背景特征数据进行活体识别,得到识别到活体的概率,这样结合两个概率即可得到目标图像是否为活体人脸图像的识别结果,既在一定程度上保证了活体检测的准确性,又避免了需要用户配合交互带来的耗时,从而提高了活体检测效率。
在一个实施例中,S304包括:确定目标图像中的人脸区域;按照人脸区域在目标图像中截取人脸图像;将人脸图像输入第一识别模型,通过第一识别模型提取人脸图像的人脸特征数据。
其中,人脸区域是人脸在目标图像中位置。具体地,服务器可通过人脸检测算法识别目标图像中的人脸区域。人脸检测算法可根据需要自定义,如可为OpenCV人脸检测算法、IOS、Android系统自带的人脸检测算法或者优图人脸检测算法等。人脸检测算法可以返回目标图像中是否包含人脸以及具体的人脸区域,如通过矩形框标识人脸的位置。服务器在确定目标图像中的人脸区域后,可沿该人脸区域截取目标图像得到人脸图像。在本实施例中,人脸图像可仅包括人脸面部区域的图像。
图4示出了一个实施例中多尺度区域划分的示意图。参考图4左图,该图为通过终端摄像头采集到的目标图像。区域411为人脸区域,按照区域411截取的图像为人脸图像。参考图4右图,该图为门禁控制系统中人脸采集摄像头采集得到的目标图像。区域421为人脸区域,按照区域421截取的图像为人脸图像。
识别模型是经过训练后具有特征提取与特征识别能力的机器学习模型。机器学习英文全称为Machine Learning,简称ML。机器学习模型可通过样本学习具备特征提取与特征识别能力。机器学习模型可采用神经网络模型、支 持向量机或者逻辑回归模型等。可以理解的是,这里的第一识别模型和后文中的第二识别模型均是识别模型,但是提取不同的特征数据的识别模型。
在本实施例中,第一识别模型用于提取目标图像中人脸图像的人脸特征数据。
在一个实施例中,第一识别模型可以是由多层互相连接而形成的复杂网络模型。第一识别模型可包括多层特征提取层,每层特征提取层都有对应的模型参数,每层的模型参数可以是多个,每层特征提取层中的模型参数对输入的图像进行线性或非线性变化,得到特征图(Feature Map)作为运算结果。每个特征提取层接收前一层的运算结果,经过自身的运算,对下一层输出本层的运算结果。其中,模型参数是模型结构中的各个参数,能反应模型各层输出和输入的对应关系。
具体地,服务器在截取到人脸图像后,将人脸图像输入第一识别模型中,第一识别模型中包括的特征提取层逐层对输入的人脸图像进行线性或非线性变化操作,直至第一识别模型中最后一层特征提取层完成线性或非线性变化操作,服务器从而根据第一识别模型最后一层特征提取层输出的结果,得到针对当前输入图像提取的人脸特征数据。
在一个实施例中,第一识别模型可以是已经训练完成的通用的具有特征提取能力的机器学习模型。在将通用的机器学习模型用于特定场景进行提取时效果不佳,因此需要通过专用于特定场景的样本对通用的机器学习模型进行进一步训练和优化。在本实施例中,服务器可获取根据通用的机器学习模型的模型结构和模型参数,并将该模型参数导入第一识别模型结构,得到带有模型参数的第一识别模型。第一识别模型所带的模型参数,作为本实施例中训练第一识别模型的初始参数参与到训练中。
在一个实施例中,第一识别模型也可以是开发人员根据历史模型训练经验初始化的机器学习模型。服务器直接将初始化的机器学习模型中所带的模型参数,作为本实施例中训练第一识别模型的初始参数参与到训练中。其中,第一识别模型的参数初始化可以为高斯随机初始化。
在一个实施例中,将人脸图像输入第一识别模型,通过第一识别模型提取人脸图像的人脸特征数据,包括:将人脸图像输入第一识别模型;通过第一识别模型的卷积层提取人脸图像的人脸特征数据。S306包括:通过第一识别模型的全连接层,根据提取的人脸特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第一置信度。
其中,卷积层是卷积神经网络中的特征提取层。卷积层可以是多层,每层卷积层都有对应的卷积核,每层的卷积核可以是多个。卷积层通过卷积核对输入的图像进行卷积运算,提取图像特征得到特征图作为运算结果。
全连接层(fully connected layers,FC)是卷积神经网络中的特征分类层,用于根据学习到的分布式特征映射关系将提取的特征映射到相应的分类。
具体地,服务器在截取到人脸图像后,将人脸图像输入第一识别模型中,第一识别模型中包括的卷积层逐层对输入的人脸图像进行卷积操作,直至第一识别模型中最后一层卷积层完成卷积操作,再将最后一层卷积层输出的结果作为全连接层的输入,得到目标图像为活体人脸图像的第一置信度。
在一个实施例中,第一置信度可以直接是全连接层输出的目标图像为活体人脸图像的分数。第一置信度也可以是服务器通过回归层(softmax层)将全连接层输出的分数归一化后得到的位于数值范围(0,1)内的数值。此时,第一置信度也可以理解为目标图像为活体人脸图像的概率。
在本实施例中,识别模型的卷积层所输出的特征图,可以更好地反映出对相应输入图像提取的特性,从而可以根据反映特征的特征图采用全连接层分类得到目标图像为活体人脸图像的置信度,并保证识别模型的识别准确性。
图5示出了一个实施例中识别模型使用的示意图。参考图5左图,该图为第一识别模型使用的示意图。服务器获取到目标图像后,从目标图像中截取出人脸图像,将人脸图像输入第一识别模型,第一识别模型的多层卷积层逐层作卷积运算,每个卷积层接收前一层的运算结果,经过自身的运算,对下一层输出本层的运算结果,最后一层卷积层再将运算结果输入全连接层,全连接层输出目标图像为活体人脸图像的分数,回归层(softmax层)再将全 连接层输出的分数归一化后得到的位于数值范围(0,1)内的数值,即第一置信度。
上述实施例中,在目标图像中确定人脸区域后,将人脸区域的图像截取下来,仅将人脸区域的图像作为第一识别模型的输入,这样第一识别模型的人脸特征提取并根据提取的人脸特征数据进行目标图像分类时,可避免非人脸区域图像的噪声干扰,识别效果更好。
在一个实施例中,该活体识别方法还包括:获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸区域,在相应图像样本中截取人脸图像,得到第一训练样本;根据第一训练样本训练第一识别模型。
其中,图像样本集中包括若干图像样本。图像样本可以是活体人脸图像和非活体人脸图像。活体人脸图像和非活体人脸图像的数量比可以是1:1或者其他比例。
具体地,服务器可从图像样本集中图像样本中截取人脸图像得到第一训练样本。其中,服务器可将从活体人脸图像中截取出的人脸图像作为正训练样本,将从非活体人脸图像中截取出的人脸图像作为负训练样本,通过正负训练样本训练第一识别模型的分类能力,以将目标图像分类为活体人脸图像或者非活体人脸图像。
在一个实施例中,根据第一训练样本训练第一识别模型,包括:获取初始化的第一识别模型;确定第一训练样本相对应的第一训练标签;将第一训练样本输入第一识别模型得到第一识别结果;按照第一识别结果与第一训练标签的差异,调整第一识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
具体地,初始化的第一识别模型,可以是将已经训练完成的通用的具有识别能力的机器学习模型的模型参数导入第一识别模型结构,得到带有模型参数的第一识别模型。第一识别模型所带的模型参数,作为本实施例中训练第一识别模型的初始参数参与到训练中。初始化的第一识别模型,也可以是 开发人员根据历史模型训练经验初始化的机器学习模型。服务器直接将初始化的机器学习模型中所带的模型参数,作为本实施例中训练第一识别模型的初始参数参与到训练中。第一识别模型的参数初始化可以为高斯随机初始化。
进一步地,服务器可对每个第一训练样本添加训练标签。训练标签用于表示第一训练样本所截取自的图像样本是否为活体人脸图像。服务器再根据第一训练样本和相应添加的训练标签训练第一识别模型。在具体训练过程中,将第一训练样本输出第一识别模型后,第一识别模型会输出第一识别结果,这时服务器可将第一识别结果,与输入的第一训练样本的训练标签进行对比,并朝向减小差异的方向调整第一识别模型的模型参数。
其中,训练停止条件可以是达到预设迭代次数,也可以是训练出的机器学习模型达到分类性能指标。分类性能指标可以是分类正确率达到第一预设阈值,也可以是分类错误率低于第二预设阈值。
服务器还可从第一训练样本中划分出部分训练样本用作测试样本。测试样本是用于在模型训练后进行模型矫正的样本。采用测试样本对训练得到的第一识别模型进行校准,具体可以是将测试样本输入训练得到的第一识别模型,将该第一识别模型的输出与测试样本的训练标签进行对比,若两者之间的差值落在允许的误差范围内,则完成对第一识别模型的校准,若两者之间的差值落在允许的误差范围外,则对第一识别模型进行参数调整,减少两者之间的差值,以完成对第一识别模型的校准。
服务器还可根据第一识别模型的实际输出和预期输出建立代价函数,采用随机梯度下降法最小化代价函数,更新第一识别模型的模型参数。代价函数比如方差代价函数或者交叉熵代价函数等。
在本实施例中,以活体人脸图像和非活体人脸图像训练第一识别模型,可以根据机器学习模型的分类性能动态地调整模型参数,可以更加准确、高效地完成训练任务。
图6示出了一个实施例中识别模型使用的示意图。参考图6左图,该图为通过第一识别模型训练的示意图。服务器获取到图像样本后,从图像样本 中截取出人脸图像作为第一训练样本,并为第一训练样本添加训练标签。服务器再将第一训练样本输入第一识别模型,第一识别模型的多层卷积层逐层作卷积运算,每个卷积层接收前一层的运算结果,经过自身的运算,对下一层输出本层的运算结果,最后一层卷积层再将运算结果输入全连接层,全连接层输出训练样本的分类结果。服务器再根据分类结果和训练标签的差异建立代价函数,通过最小化代价函数来调整模型参数。
上述实施例中,利用机器学习模型强大的学习和表示能力进行识别能力学习,所训练得到的机器学习模型对目标图像是否为活体人脸图像进行识别,较传统方法对目标图像进行识别的效果更好。
在一个实施例中,S308包括:确定目标图像中的人脸区域;扩展人脸区域得到人脸扩展区域;按照人脸扩展区域在目标图像中截取人脸扩展图像;将人脸扩展图像输入第二识别模型,通过第二识别模型提取人脸扩展图像的背景特征数据。
其中,人脸扩展图像包含人脸图像,是基于人脸图像在目标图像中所在区域扩展得到的区域而截取的图像。人脸扩展图像的尺寸大于人脸图像的尺寸。服务器可预先设置用于扩展得到人脸扩展图像的扩展方式,并在确定目标图像中的人脸区域后,按照该扩展方式扩展得到人脸扩展区域。服务器再沿该人脸扩展区域截图目标图像得到人脸扩展图像。其中,预设区域扩展方式可以是仅一个方向上扩展或者是多个方向上均进行扩展。
在一个实施例中,通过终端摄像头采集的目标图像由于摄像头的视野范围小,可以直接将目标图像作为人脸扩展图像。
参考图4左图,该图为通过终端摄像头采集到的目标图像。区域411为人脸区域,区域412为扩展区域411得到的人脸扩展区域,按照区域412截取的图像为人脸扩展图像。参考图4右图,该图为门禁控制系统中人脸采集摄像头采集得到的目标图像。区域421为人脸区域,区域422为扩展区域421得到的人脸扩展区域,按照区域422截取的图像为人脸扩展图像。
在本实施例中,第二识别模型用于提取目标图像中人脸扩展图像的背景 特征数据。
具体地,服务器在截取到人脸扩展图像后,将人脸扩展图像输入第二识别模型中,第二识别模型中包括的特征提取层逐层对输入的人脸图像进行线性或非线性变化操作,直至第二识别模型中最后一层特征提取层完成线性或非线性变化操作,服务器从而根据第二别模型最后一层特征提取层输出的结果,得到针对当前输入图像提取的背景特征数据。
在一个实施例中,将人脸扩展图像输入第二识别模型,通过第二识别模型提取人脸扩展图像的背景特征数据,包括:将人脸扩展图像输入第二识别模型;通过第二识别模型的卷积层提取人脸扩展图像的背景特征数据。S310包括:通过第二识别模型的全连接层,根据提取的背景特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第二置信度。
具体地,服务器在截取到人脸扩展图像后,将人脸扩展图像输入第二识别模型中,第二识别模型中包括的卷积层逐层对输入的人脸图像进行卷积操作,直至第二识别模型中最后一层卷积层完成卷积操作,再将最后一层卷积层输出的结果作为全连接层的输入,得到目标图像为活体人脸图像的第二置信度。
在一个实施例中,第二置信度可以直接是全连接层输出的目标图像为活体人脸图像的分数。第二置信度也可以是服务器通过回归层(softmax层)将全连接层输出的分数归一化后得到的位于数值范围(0,1)内的数值。此时,第二置信度也可以理解为目标图像为活体人脸图像的概率。
在本实施例中,识别模型的卷积层所输出的特征图,可以更好地反映出对相应输入图像提取的特性,从而可以根据反映特征的特征图采用全连接层分类得到目标图像为活体人脸图像的置信度,并保证识别模型的识别准确性。
参考图5右图,该图为第二识别模型使用的示意图。服务器获取到目标图像后,从目标图像中截取出人脸扩展图像,将人脸扩展图像输入第二识别模型,第二识别模型的多层卷积层逐层作卷积运算,每个卷积层接收前一层的运算结果,经过自身的运算,对下一层输出本层的运算结果,最后一层卷 积层再将运算结果输入全连接层,全连接层输出目标图像为活体人脸图像的分数,回归层(softmax层)再将全连接层输出的分数归一化后得到的位于数值范围(0,1)内的数值,即第二置信度。服务器在得到第一置信度和第二置信度后,可将第一置信度和第二置信度融合得到目标图像为活体人脸图像的置信度。
上述实施例中,对目标图像截取人脸扩展图像,并对人脸扩展图像进行背景特征数据提取,以根据背景特征数据来识别目标图像是否为活体图像,由于背景特征数据包括人脸周边的环境信息,可以有效避免翻拍图片冒充真人时图片边框的影响,提高了识别效果。
在一个实施例中,该活体识别方法还包括:获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸扩展区域,在相应图像样本中截取人脸扩展图像,得到第二训练样本;根据第二训练样本训练第二识别模型。
具体地,服务器可从图像样本集中图像样本中截取人脸扩展图像得到第二训练样本。其中,服务器可将从活体人脸图像中截取出的人脸扩展图像作为正训练样本,将从非活体人脸图像中截取出的人脸扩展图像作为负训练样本,通过正负训练样本训练第二识别模型的分类能力,以将目标图像分类为活体人脸图像或者非活体人脸图像。
在一个实施例中,根据第二训练样本训练第二识别模型,包括:获取初始化的第二识别模型;确定第二训练样本相对应的第二训练标签;将第二训练样本输入第二识别模型得到第二识别结果;按照第二识别结果与第二训练标签的差异,调整第二识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
具体地,第二识别模型可通过上述训练第一识别模型的训练方式进行训练。在实际训练过程中,第一识别模型和第二识别模型相互独立,可以并行训练。
参考图6右图,该图为通过第二识别模型训练的示意图。服务器获取到 图像样本后,从图像样本中截取出人脸扩展图像作为第二训练样本,并为第二训练样本添加训练标签。服务器再将第二训练样本输入第二识别模型,第二识别模型的多层卷积层逐层作卷积运算,每个卷积层接收前一层的运算结果,经过自身的运算,对下一层输出本层的运算结果,最后一层卷积层再将运算结果输入全连接层,全连接层输出训练样本的分类结果。服务器再根据分类结果和训练标签的差异建立代价函数,通过最小化代价函数来调整模型参数。
在一个实施例中,由于第一识别模型与第二识别模型中靠前的卷积层提取的是图像的基本特征,因此,第一识别模型和第二识别模型可共享靠前的卷积层,再分为两个分支分别训练。这样可以提高模型训练效率。
在一个实施例中,服务器还可联合训练第一识别模型和第二识别模型。具体地,服务器可分别获取第一识别模型与第二识别模型最后一层卷积层输出的特征图,将两个模型输出的特征图融合后得到特征融合图,这样特征融合图中既包括了人脸特征数据又包括了背景特征数据,服务器即可将特征融合图作为全连接层的输入,通过全连接层输出目标图像为活体人脸图像的置信度。
在一个实施例中,S302包括:进入图像采集状态;在图像采集状态下,选取采集的图像帧作为目标图像,选取的图像帧的人脸区域与采集视野下的预设人脸区域匹配。
其中,图像采集状态是摄像头进行扫描以进行图像采集的状态。具体地,运行于终端上的应用程序在与相应的服务器交互并需要进行身份验证时,可调用终端内置的摄像头,或者外置的与终端关联的摄像头,在摄像头当前的视野下扫描目标对象,并在摄像头当前的视野下设置预设人脸区域。终端按照预设的帧率采集图像帧,并将采集的图像帧中的人脸区域与预设人脸区域比较,当图像帧的人脸区域与采集视野下的预设人脸区域匹配时,则选取该图像帧作为目标图像进行候选的活体人脸图像判别。
在本实施例中,通过在采集图像帧中对人脸区域尺寸约束,既能避免目 标图像中人脸图像尺寸过小时人脸特征数据的缺失,又能避免目标图像中人脸图像尺寸过大时背景特征数据的缺失,使得识别效果更好。
在一个实施例中,S312包括:融合第一置信度和第二置信度,得到目标图像为活体人脸图像的置信度;当置信度达到预设置信度阈值时,判定目标图像为活体人脸图像。
具体地,服务器可采用预设的融合方式融合第一置信度和第二置信度,得到最终的目标图像为活体人脸图像的置信度。比如,服务器可将第一置信度和第二置信度进行比较,将两个置信度中数值较小的置信度作为最终的置信度。再比如,服务器可对第一置信度和第二置信度求取加权平均值,将计算得到的加权平均值作为最终的置信度。在计算过程中,第一置信度和第二置信度的权重可根据实际场景需要进行调整。在人脸特征数据影响大的场景下第一置信度的权重大于第二置信度的权重。在背景特征数据影响大的场景下则反之。
进一步地,服务器可再将计算得到的最终的置信度与预设置信度阈值进行比较,当最终的置信度达到预设置信度阈值时,则判定目标图像为活体人脸图像,当最终的置信度小于预设置信度阈值时,则判定目标图像为非活体人脸图像。其中,预设置信度阈值是根据经验设定的阈值,并认为高于该阈值的置信度标识可以相信目标图像为活体人脸图像。
在本实施例中,在通过将根据两种图像特征识别得到的置信度融合,综合考虑人脸特征与背景特征的影响,提高了活体识别效果。
如图7所示,在一个具体的实施例中,该活体识别方法具体包括以下步骤:
S702,获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像。
S704,根据图像样本集中各图像样本的人脸区域,在相应图像样本中截取人脸图像,得到第一训练样本。
S706,获取初始化的第一识别模型;确定第一训练样本相对应的第一训 练标签;将第一训练样本输入第一识别模型得到第一识别结果;按照第一识别结果与第一训练标签的差异,调整第一识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
S708,根据图像样本集中各图像样本的人脸扩展区域,在相应图像样本中截取人脸扩展图像,得到第二训练样本。
S710,获取初始化的第二识别模型;确定第二训练样本相对应的第二训练标签;将第二训练样本输入第二识别模型得到第二识别结果;按照第二识别结果与第二训练标签的差异,调整第二识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
S712,获取目标图像;确定目标图像中的人脸区域。
S714,按照人脸区域在目标图像中截取人脸图像。
S716,将人脸图像输入第一识别模型;通过第一识别模型的卷积层提取人脸图像的人脸特征数据。
S718,通过第一识别模型的全连接层,根据提取的人脸特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第一置信度。
S720,扩展人脸区域得到人脸扩展区域;按照人脸扩展区域在目标图像中截取人脸扩展图像。
S722,将人脸扩展图像输入第二识别模型;通过第二识别模型的卷积层提取人脸扩展图像的背景特征数据。
S724,通过第二识别模型的全连接层,根据提取的背景特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第二置信度。
S726,融合第一置信度和第二置信度,得到目标图像为活体人脸图像的置信度。
S728,判断置信度是否达到预设置信度阈值;若是,则跳转至步骤S730;若否,则跳转至步骤S732。
S730,判定目标图像为活体人脸图像。
S732,判定目标图像为非活体人脸图像。
本实施例中,在获取到目标图像后,一方面可自动对目标图像中人脸图像进行人脸特征数据提取,进而根据人脸特征数据进行活体识别,得到识别到活体的概率,另一方面还可自动对目标图像中人脸扩展图像提取背景特征数据,进而根据背景特征数据进行活体识别,得到识别到活体的概率,这样结合两个概率即可得到目标图像是否为活体人脸图像的识别结果,既在一定程度上保证了活体检测的准确性,又避免了需要用户配合交互带来的耗时,从而提高了活体检测效率。
应该理解的是,虽然上述各实施例的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,上述各实施例中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
如图8所示,在一个实施例中,提供了一种活体识别装置800。参照图8,该活体识别装置800包括:获取模块801、第一提取模块802、第一识别模块803、第二提取模块804、第二识别模块805和输出模块806。
获取模块801,用于获取目标图像。
第一提取模块802,用于提取目标图像中人脸图像的人脸特征数据。
第一识别模块803,用于根据人脸特征数据进行活体识别,得到第一置信度;第一置信度表示识别到活体的第一概率。
第二提取模块804,用于从人脸扩展图像中提取背景特征数据;人脸扩展图像是扩展人脸图像所在区域得到的。
第二识别模块805,用于根据背景特征数据进行活体识别,得到第二置信度;第二置信度表示识别到活体的第二概率。
输出模块806,用于根据第一置信度和第二置信度,得到目标图像为活 体人脸图像的识别结果。
上述活体识别装置800,在获取到目标图像后,一方面可自动对目标图像中人脸图像进行人脸特征数据提取,进而根据人脸特征数据进行活体识别,得到识别到活体的概率,另一方面还可自动对目标图像中人脸扩展图像提取背景特征数据,进而根据背景特征数据进行活体识别,得到识别到活体的概率,这样结合两个概率即可得到目标图像是否为活体人脸图像的识别结果,既在一定程度上保证了活体检测的准确性,又避免了需要用户配合交互带来的耗时,从而提高了活体检测效率。
在一个实施例中,第一提取模块802还用于确定目标图像中的人脸区域;按照人脸区域在目标图像中截取人脸图像;将人脸图像输入第一识别模型,通过第一识别模型提取人脸图像的人脸特征数据。
在一个实施例中,第一提取模块802还用于将人脸图像输入第一识别模型;通过第一识别模型的卷积层提取人脸图像的人脸特征数据。第一识别模块803还用于通过第一识别模型的全连接层,根据提取的人脸特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第一置信度。
如图9所示,在一个实施例中,活体识别装置800还包括:模型训练模块807。
模型训练模块807,用于获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸区域,在相应图像样本中截取人脸图像,得到第一训练样本;根据第一训练样本训练第一识别模型。
在一个实施例中,模型训练模块807还用于获取初始化的第一识别模型;确定第一训练样本相对应的第一训练标签;将第一训练样本输入第一识别模型得到第一识别结果;按照第一识别结果与第一训练标签的差异,调整第一识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
在一个实施例中,第二提取模块804还用于确定目标图像中的人脸区域;扩展人脸区域得到人脸扩展区域;按照人脸扩展区域在目标图像中截取人脸 扩展图像;将人脸扩展图像输入第二识别模型,通过第二识别模型提取人脸扩展图像的背景特征数据。
在一个实施例中,第二提取模块804还用于将人脸扩展图像输入第二识别模型;通过第二识别模型的卷积层提取人脸扩展图像的背景特征数据。第二识别模块805还用于通过第二识别模型的全连接层,根据提取的背景特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第二置信度。
在一个实施例中,模型训练模块807还用于获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸扩展区域,在相应图像样本中截取人脸扩展图像,得到第二训练样本;根据第二训练样本训练第二识别模型。
在一个实施例中,模型训练模块807还用于获取初始化的第二识别模型;确定第二训练样本相对应的第二训练标签;将第二训练样本输入第二识别模型得到第二识别结果;按照第二识别结果与第二训练标签的差异,调整第二识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
在一个实施例中,获取模块801还用于进入图像采集状态;在图像采集状态下,选取采集的图像帧作为目标图像,选取的图像帧的人脸区域与采集视野下的预设人脸区域匹配。
在一个实施例中,输出模块806还用于融合第一置信度和第二置信度,得到目标图像为活体人脸图像的置信度;当置信度达到预设置信度阈值时,判定目标图像为活体人脸图像。
图10示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是图1中的终端110或者图2中的计算机设备220。如图10所示,该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、摄像头和显示屏。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器实现活体识别方法。该内存储器中也可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理 器执行活体识别方法。计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏等。本领域技术人员可以理解,图10中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
图11示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是图1中的服务器120或者图2中的计算机设备220。如图11所示,该计算机设备包括通过系统总线连接的处理器、存储器和网络接口。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器实现活体识别方法。该内存储器中也可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行活体识别方法。本领域技术人员可以理解,图11中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,本申请提供的活体识别装置可以实现为一种计算机可读指令的形式,计算机可读指令可在如图10或图11所示的计算机设备上运行,计算机设备的非易失性存储介质可存储组成该活体识别装置的各个指令模块,比如,图8所示的获取模块801、第一提取模块802、第一识别模块803、第二提取模块804、第二识别模块805和输出模块806等。各个指令模块组成的计算机可读指令使得处理器执行本说明书中描述的本申请各个实施例的活体识别方法中的步骤。
例如,图11所示的计算机设备可以通过如图8所示的活体识别装置800中的获取模块801获取目标图像。通过第一提取模块802提取目标图像中人脸图像的人脸特征数据。通过第一识别模块803根据人脸特征数据进行活体识别,得到第一置信度;第一置信度表示识别到活体的第一概率。通过第二 提取模块804从人脸扩展图像中提取背景特征数据;人脸扩展图像是扩展人脸图像所在区域得到的。通过第二识别模块805根据背景特征数据进行活体识别,得到第二置信度;第二置信度表示识别到活体的第二概率。通过输出模块806根据第一置信度和第二置信度,得到目标图像为活体人脸图像的识别结果。
在一个实施例中,提供了一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时,使得处理器执行以下步骤:获取目标图像;提取目标图像中人脸图像的人脸特征数据;根据人脸特征数据进行活体识别,得到第一置信度;第一置信度表示识别到活体的第一概率;从人脸扩展图像中提取背景特征数据;人脸扩展图像是扩展人脸图像所在区域得到的;根据背景特征数据进行活体识别,得到第二置信度;第二置信度表示识别到活体的第二概率;根据第一置信度和第二置信度,得到目标图像为活体人脸图像的识别结果。
在一个实施例中,提取目标图像中人脸图像的人脸特征数据,包括:确定目标图像中的人脸区域;按照人脸区域在目标图像中截取人脸图像;将人脸图像输入第一识别模型,通过第一识别模型提取人脸图像的人脸特征数据。
在一个实施例中,将人脸图像输入第一识别模型,通过第一识别模型提取人脸图像的人脸特征数据,包括:将人脸图像输入第一识别模型;通过第一识别模型的卷积层提取人脸图像的人脸特征数据。根据人脸特征数据进行活体识别,得到第一置信度,包括:通过第一识别模型的全连接层,根据提取的人脸特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第一置信度。
在一个实施例中,该计算机程序被处理器执行时,还使得处理器执行以下步骤:获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸区域,在相应图像样本中截取人脸图像,得到第一训练样本;根据第一训练样本训练第一识别模型。
在一个实施例中,根据第一训练样本训练第一识别模型,包括:获取初 始化的第一识别模型;确定第一训练样本相对应的第一训练标签;将第一训练样本输入第一识别模型得到第一识别结果;按照第一识别结果与第一训练标签的差异,调整第一识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
在一个实施例中,从人脸扩展图像中提取背景特征数据,包括:确定目标图像中的人脸区域;扩展人脸区域得到人脸扩展区域;按照人脸扩展区域在目标图像中截取人脸扩展图像;将人脸扩展图像输入第二识别模型,通过第二识别模型提取人脸扩展图像的背景特征数据。
在一个实施例中,将人脸扩展图像输入第二识别模型,通过第二识别模型提取人脸扩展图像的背景特征数据,包括:将人脸扩展图像输入第二识别模型;通过第二识别模型的卷积层提取人脸扩展图像的背景特征数据。根据背景特征数据进行活体识别,得到第二置信度,包括:通过第二识别模型的全连接层,根据提取的背景特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第二置信度。
在一个实施例中,该计算机程序被处理器执行时,还使得处理器执行以下步骤:获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸扩展区域,在相应图像样本中截取人脸扩展图像,得到第二训练样本;根据第二训练样本训练第二识别模型。
在一个实施例中,根据第二训练样本训练第二识别模型,包括:获取初始化的第二识别模型;确定第二训练样本相对应的第二训练标签;将第二训练样本输入第二识别模型得到第二识别结果;按照第二识别结果与第二训练标签的差异,调整第二识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
在一个实施例中,获取目标图像,包括:进入图像采集状态;在图像采集状态下,选取采集的图像帧作为目标图像,选取的图像帧的人脸区域与采集视野下的预设人脸区域匹配。
在一个实施例中,根据第一置信度和第二置信度,得到目标图像为活体 人脸图像的识别结果,包括:融合第一置信度和第二置信度,得到目标图像为活体人脸图像的置信度;当置信度达到预设置信度阈值时,判定目标图像为活体人脸图像。
上述存储介质,在获取到目标图像后,一方面可自动对目标图像中人脸图像进行人脸特征数据提取,进而根据人脸特征数据进行活体识别,得到识别到活体的概率,另一方面还可自动对目标图像中人脸扩展图像提取背景特征数据,进而根据背景特征数据进行活体识别,得到识别到活体的概率,这样结合两个概率即可得到目标图像是否为活体人脸图像的识别结果,既在一定程度上保证了活体检测的准确性,又避免了需要用户配合交互带来的耗时,从而提高了活体检测效率。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中储存有计算机程序,计算机程序被处理器执行时,使得处理器执行以下步骤:获取目标图像;提取目标图像中人脸图像的人脸特征数据;根据人脸特征数据进行活体识别,得到第一置信度;第一置信度表示识别到活体的第一概率;从人脸扩展图像中提取背景特征数据;人脸扩展图像是扩展人脸图像所在区域得到的;根据背景特征数据进行活体识别,得到第二置信度;第二置信度表示识别到活体的第二概率;根据第一置信度和第二置信度,得到目标图像为活体人脸图像的识别结果。
在一个实施例中,提取目标图像中人脸图像的人脸特征数据,包括:确定目标图像中的人脸区域;按照人脸区域在目标图像中截取人脸图像;将人脸图像输入第一识别模型,通过第一识别模型提取人脸图像的人脸特征数据。
在一个实施例中,将人脸图像输入第一识别模型,通过第一识别模型提取人脸图像的人脸特征数据,包括:将人脸图像输入第一识别模型;通过第一识别模型的卷积层提取人脸图像的人脸特征数据。根据人脸特征数据进行活体识别,得到第一置信度,包括:通过第一识别模型的全连接层,根据提取的人脸特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第一置信度。
在一个实施例中,该计算机程序被处理器执行时,还使得处理器执行以下步骤:获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸区域,在相应图像样本中截取人脸图像,得到第一训练样本;根据第一训练样本训练第一识别模型。
在一个实施例中,根据第一训练样本训练第一识别模型,包括:获取初始化的第一识别模型;确定第一训练样本相对应的第一训练标签;将第一训练样本输入第一识别模型得到第一识别结果;按照第一识别结果与第一训练标签的差异,调整第一识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
在一个实施例中,从人脸扩展图像中提取背景特征数据,包括:确定目标图像中的人脸区域;扩展人脸区域得到人脸扩展区域;按照人脸扩展区域在目标图像中截取人脸扩展图像;将人脸扩展图像输入第二识别模型,通过第二识别模型提取人脸扩展图像的背景特征数据。
在一个实施例中,将人脸扩展图像输入第二识别模型,通过第二识别模型提取人脸扩展图像的背景特征数据,包括:将人脸扩展图像输入第二识别模型;通过第二识别模型的卷积层提取人脸扩展图像的背景特征数据。根据背景特征数据进行活体识别,得到第二置信度,包括:通过第二识别模型的全连接层,根据提取的背景特征数据对目标图像进行分类,得到目标图像为活体人脸图像的第二置信度。
在一个实施例中,该计算机程序被处理器执行时,还使得处理器执行以下步骤:获取图像样本集,图像样本集包括活体人脸图像和非活体人脸图像;根据图像样本集中各图像样本的人脸扩展区域,在相应图像样本中截取人脸扩展图像,得到第二训练样本;根据第二训练样本训练第二识别模型。
在一个实施例中,根据第二训练样本训练第二识别模型,包括:获取初始化的第二识别模型;确定第二训练样本相对应的第二训练标签;将第二训练样本输入第二识别模型得到第二识别结果;按照第二识别结果与第二训练标签的差异,调整第二识别模型的模型参数并继续训练,直至满足训练停止 条件时结束训练。
在一个实施例中,获取目标图像,包括:进入图像采集状态;在图像采集状态下,选取采集的图像帧作为目标图像,选取的图像帧的人脸区域与采集视野下的预设人脸区域匹配。
在一个实施例中,根据第一置信度和第二置信度,得到目标图像为活体人脸图像的识别结果,包括:融合第一置信度和第二置信度,得到目标图像为活体人脸图像的置信度;当置信度达到预设置信度阈值时,判定目标图像为活体人脸图像。
上述计算机设备,在获取到目标图像后,一方面可自动对目标图像中人脸图像进行人脸特征数据提取,进而根据人脸特征数据进行活体识别,得到识别到活体的概率,另一方面还可自动对目标图像中人脸扩展图像提取背景特征数据,进而根据背景特征数据进行活体识别,得到识别到活体的概率,这样结合两个概率即可得到目标图像是否为活体人脸图像的识别结果,既在一定程度上保证了活体检测的准确性,又避免了需要用户配合交互带来的耗时,从而提高了活体检测效率。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、 以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (23)

  1. 一种活体识别方法,包括:
    计算机设备获取目标图像;
    所述计算机设备提取所述目标图像中人脸图像的人脸特征数据;
    所述计算机设备根据所述人脸特征数据进行活体识别,得到第一置信度;所述第一置信度表示识别到活体的第一概率;
    所述计算机设备从人脸扩展图像中提取背景特征数据;所述人脸扩展图像是扩展所述人脸图像所在区域得到的;
    所述计算机设备根据所述背景特征数据进行活体识别,得到第二置信度;所述第二置信度表示识别到活体的第二概率;及
    所述计算机设备根据所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的识别结果。
  2. 根据权利要求1所述的方法,其特征在于,所述计算机设备提取所述目标图像中人脸图像的人脸特征数据,包括:
    所述计算机设备确定所述目标图像中的人脸区域;
    所述计算机设备按照所述人脸区域在所述目标图像中截取人脸图像;及
    所述计算机设备将所述人脸图像输入第一识别模型,通过所述第一识别模型提取所述人脸图像的人脸特征数据。
  3. 根据权利要求2所述的方法,其特征在于,所述计算机设备将所述人脸图像输入第一识别模型,通过所述第一识别模型提取所述人脸图像的人脸特征数据,包括:
    所述计算机设备将所述人脸图像输入第一识别模型;及
    所述计算机设备通过所述第一识别模型的卷积层提取所述人脸图像的人脸特征数据;
    所述计算机设备根据所述人脸特征数据进行活体识别,得到第一置信度,包括:
    所述计算机设备通过所述第一识别模型的全连接层,根据提取的所述人 脸特征数据对所述目标图像进行分类,得到所述目标图像为活体人脸图像的第一置信度。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    所述计算机设备获取图像样本集,所述图像样本集包括活体人脸图像和非活体人脸图像;
    所述计算机设备根据所述图像样本集中各图像样本的人脸区域,在相应图像样本中截取人脸图像,得到第一训练样本;及
    所述计算机设备根据所述第一训练样本训练第一识别模型。
  5. 根据权利要求4所述的方法,其特征在于,所述计算机设备根据所述第一训练样本训练第一识别模型,包括:
    所述计算机设备获取初始化的第一识别模型;
    所述计算机设备确定所述第一训练样本相对应的第一训练标签;
    所述计算机设备将所述第一训练样本输入所述第一识别模型得到第一识别结果;及
    所述计算机设备按照所述第一识别结果与所述第一训练标签的差异,调整所述第一识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
  6. 根据权利要求1所述的方法,其特征在于,所述计算机设备从人脸扩展图像中提取背景特征数据,包括:
    所述计算机设备确定所述目标图像中的人脸区域;
    所述计算机设备扩展所述人脸区域得到人脸扩展区域;
    所述计算机设备按照所述人脸扩展区域在所述目标图像中截取人脸扩展图像;及
    所述计算机设备将所述人脸扩展图像输入第二识别模型,通过所述第二识别模型提取所述人脸扩展图像的背景特征数据。
  7. 根据权利要求6所述的方法,其特征在于,所述计算机设备将所述人脸扩展图像输入第二识别模型,通过所述第二识别模型提取所述人脸扩展图 像的背景特征数据,包括:
    所述计算机设备将所述人脸扩展图像输入第二识别模型;及
    所述计算机设备通过所述第二识别模型的卷积层提取所述人脸扩展图像的背景特征数据;
    所述计算机设备根据所述背景特征数据进行活体识别,得到第二置信度,包括:
    所述计算机设备通过所述第二识别模型的全连接层,根据提取的所述背景特征数据对所述目标图像进行分类,得到所述目标图像为活体人脸图像的第二置信度。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    所述计算机设备获取图像样本集,所述图像样本集包括活体人脸图像和非活体人脸图像;
    所述计算机设备根据所述图像样本集中各图像样本的人脸扩展区域,在相应图像样本中截取人脸扩展图像,得到第二训练样本;及
    所述计算机设备根据所述第二训练样本训练第二识别模型。
  9. 根据权利要求8所述的方法,其特征在于,所述计算机设备根据所述第二训练样本训练第二识别模型,包括:
    所述计算机设备获取初始化的第二识别模型;
    所述计算机设备确定所述第二训练样本相对应的第二训练标签;
    所述计算机设备将所述第二训练样本输入所述第二识别模型得到第二识别结果;及
    所述计算机设备按照所述第二识别结果与所述第二训练标签的差异,调整所述第二识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
  10. 根据权利要求1所述的方法,其特征在于,所述计算机设备获取目标图像,包括:
    所述计算机设备进入图像采集状态;及
    所述计算机设备在所述图像采集状态下,选取采集的图像帧作为目标图像,选取的所述图像帧的人脸区域与采集视野下的预设人脸区域匹配。
  11. 根据权利要求1所述的方法,其特征在于,所述计算机设备根据所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的识别结果,包括:
    所述计算机设备融合所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的置信度;及
    所述计算机设备当所述置信度达到预设置信度阈值时,判定所述目标图像为活体人脸图像。
  12. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:
    获取目标图像;
    提取所述目标图像中人脸图像的人脸特征数据;
    根据所述人脸特征数据进行活体识别,得到第一置信度;所述第一置信度表示识别到活体的第一概率;
    从人脸扩展图像中提取背景特征数据;所述人脸扩展图像是扩展所述人脸图像所在区域得到的;
    根据所述背景特征数据进行活体识别,得到第二置信度;所述第二置信度表示识别到活体的第二概率;及
    根据所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的识别结果。
  13. 根据权利要求12所述的计算机设备,其特征在于,所述提取所述目标图像中人脸图像的人脸特征数据,包括:
    确定所述目标图像中的人脸区域;
    按照所述人脸区域在所述目标图像中截取人脸图像;及
    将所述人脸图像输入第一识别模型,通过所述第一识别模型提取所述人 脸图像的人脸特征数据。
  14. 根据权利要求13所述的计算机设备,其特征在于,所述将所述人脸图像输入第一识别模型,通过所述第一识别模型提取所述人脸图像的人脸特征数据,包括:
    将所述人脸图像输入第一识别模型;及
    通过所述第一识别模型的卷积层提取所述人脸图像的人脸特征数据;
    所述根据所述人脸特征数据进行活体识别,得到第一置信度,包括:
    通过所述第一识别模型的全连接层,根据提取的所述人脸特征数据对所述目标图像进行分类,得到所述目标图像为活体人脸图像的第一置信度。
  15. 根据权利要求14所述的计算机设备,其特征在于,所述计算机可读指令被所述处理器执行时,还使得所述处理器执行以下步骤:
    获取图像样本集,所述图像样本集包括活体人脸图像和非活体人脸图像;
    根据所述图像样本集中各图像样本的人脸区域,在相应图像样本中截取人脸图像,得到第一训练样本;及
    根据所述第一训练样本训练第一识别模型。
  16. 根据权利要求15所述的计算机设备,其特征在于,所述根据所述第一训练样本训练第一识别模型,包括:
    获取初始化的第一识别模型;
    确定所述第一训练样本相对应的第一训练标签;
    将所述第一训练样本输入所述第一识别模型得到第一识别结果;及
    按照所述第一识别结果与所述第一训练标签的差异,调整所述第一识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
  17. 根据权利要求12所述的计算机设备,其特征在于,所述从人脸扩展图像中提取背景特征数据,包括:
    确定所述目标图像中的人脸区域;
    扩展所述人脸区域得到人脸扩展区域;
    按照所述人脸扩展区域在所述目标图像中截取人脸扩展图像;及
    将所述人脸扩展图像输入第二识别模型,通过所述第二识别模型提取所述人脸扩展图像的背景特征数据。
  18. 根据权利要求17所述的计算机设备,其特征在于,所述将所述人脸扩展图像输入第二识别模型,通过所述第二识别模型提取所述人脸扩展图像的背景特征数据,包括:
    将所述人脸扩展图像输入第二识别模型;及
    通过所述第二识别模型的卷积层提取所述人脸扩展图像的背景特征数据;
    设备根据所述背景特征数据进行活体识别,得到第二置信度,包括:
    通过所述第二识别模型的全连接层,根据提取的所述背景特征数据对所述目标图像进行分类,得到所述目标图像为活体人脸图像的第二置信度。
  19. 根据权利要求18所述的计算机设备,其特征在于,所述计算机可读指令被所述处理器执行时,还使得所述处理器执行以下步骤:
    获取图像样本集,所述图像样本集包括活体人脸图像和非活体人脸图像;
    根据所述图像样本集中各图像样本的人脸扩展区域,在相应图像样本中截取人脸扩展图像,得到第二训练样本;及
    根据所述第二训练样本训练第二识别模型。
  20. 根据权利要求19所述的计算机设备,其特征在于,所述根据所述第二训练样本训练第二识别模型,包括:
    获取初始化的第二识别模型;
    确定所述第二训练样本相对应的第二训练标签;
    将所述第二训练样本输入所述第二识别模型得到第二识别结果;及
    按照所述第二识别结果与所述第二训练标签的差异,调整所述第二识别模型的模型参数并继续训练,直至满足训练停止条件时结束训练。
  21. 根据权利要求12所述的计算机设备,其特征在于,所述获取目标图像,包括:
    进入图像采集状态;及
    在所述图像采集状态下,选取采集的图像帧作为目标图像,选取的所述图像帧的人脸区域与采集视野下的预设人脸区域匹配。
  22. 根据权利要求12所述的计算机设备,其特征在于,所述根据所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的识别结果,包括:
    融合所述第一置信度和所述第二置信度,得到所述目标图像为活体人脸图像的置信度;及
    当所述置信度达到预设置信度阈值时,判定所述目标图像为活体人脸图像。
  23. 一种存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被计算机设备的一个或多个处理器执行时,使得计算机设备的一个或多个处理器执行上述1至11中任一项所述的方法的步骤。
PCT/CN2018/114096 2017-11-20 2018-11-06 活体识别方法、存储介质和计算机设备 WO2019096029A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/864,103 US11176393B2 (en) 2017-11-20 2020-04-30 Living body recognition method, storage medium, and computer device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711159039.8A CN107818313B (zh) 2017-11-20 2017-11-20 活体识别方法、装置和存储介质
CN201711159039.8 2017-11-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/864,103 Continuation US11176393B2 (en) 2017-11-20 2020-04-30 Living body recognition method, storage medium, and computer device

Publications (1)

Publication Number Publication Date
WO2019096029A1 true WO2019096029A1 (zh) 2019-05-23

Family

ID=61608691

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/114096 WO2019096029A1 (zh) 2017-11-20 2018-11-06 活体识别方法、存储介质和计算机设备

Country Status (3)

Country Link
US (1) US11176393B2 (zh)
CN (1) CN107818313B (zh)
WO (1) WO2019096029A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178137A (zh) * 2019-12-04 2020-05-19 百度在线网络技术(北京)有限公司 检测真实人脸方法、装置、电子设备以及计算机可读存储介质
CN112183613A (zh) * 2020-09-24 2021-01-05 杭州睿琪软件有限公司 对象识别方法和设备与非暂态计算机可读存储介质
CN113221767A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN113221766A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN114463801A (zh) * 2021-10-26 2022-05-10 马上消费金融股份有限公司 一种模型训练方法、活体检测方法、装置和电子设备
CN115223022A (zh) * 2022-09-15 2022-10-21 平安银行股份有限公司 一种图像处理方法、装置、存储介质及设备

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101977174B1 (ko) 2017-09-13 2019-05-10 이재준 영상 분석 방법, 장치 및 컴퓨터 프로그램
CN107818313B (zh) * 2017-11-20 2019-05-14 腾讯科技(深圳)有限公司 活体识别方法、装置和存储介质
CN108491794B (zh) * 2018-03-22 2023-04-07 腾讯科技(深圳)有限公司 面部识别的方法和装置
CN108416595A (zh) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 信息处理方法和装置
CN108665457B (zh) * 2018-05-16 2023-12-19 腾讯医疗健康(深圳)有限公司 图像识别方法、装置、存储介质及计算机设备
CN108846321B (zh) * 2018-05-25 2022-05-03 北京小米移动软件有限公司 识别人脸假体的方法及装置、电子设备
CN108897786B (zh) * 2018-06-08 2021-06-08 Oppo广东移动通信有限公司 应用程序的推荐方法、装置、存储介质及移动终端
CN108986245A (zh) * 2018-06-14 2018-12-11 深圳市商汤科技有限公司 基于人脸识别的考勤方法及终端
CN108809992B (zh) * 2018-06-15 2021-07-13 黄玉新 一种人脸识别验证系统及其与目标系统的关联方法
CN117710647A (zh) * 2018-06-20 2024-03-15 祖克斯有限公司 从机器学习模型输出推断出的实例分割
CN108875676B (zh) 2018-06-28 2021-08-10 北京旷视科技有限公司 活体检测方法、装置及系统
CN108984657B (zh) * 2018-06-28 2020-12-01 Oppo广东移动通信有限公司 图像推荐方法和装置、终端、可读存储介质
CN108900769B (zh) * 2018-07-16 2020-01-10 Oppo广东移动通信有限公司 图像处理方法、装置、移动终端及计算机可读存储介质
CN108810418B (zh) 2018-07-16 2020-09-11 Oppo广东移动通信有限公司 图像处理方法、装置、移动终端及计算机可读存储介质
CN109063774B (zh) * 2018-08-03 2021-01-12 百度在线网络技术(北京)有限公司 图像追踪效果评价方法、装置、设备及可读存储介质
CN109034102B (zh) * 2018-08-14 2023-06-16 腾讯科技(深圳)有限公司 人脸活体检测方法、装置、设备及存储介质
CN109784148A (zh) * 2018-12-06 2019-05-21 北京飞搜科技有限公司 活体检测方法及装置
CN111325067B (zh) * 2018-12-14 2023-07-07 北京金山云网络技术有限公司 违规视频的识别方法、装置及电子设备
CN109816200B (zh) * 2018-12-17 2023-11-28 平安国际融资租赁有限公司 任务推送方法、装置、计算机设备和存储介质
CN109766764A (zh) * 2018-12-17 2019-05-17 平安普惠企业管理有限公司 人脸识别数据处理方法、装置、计算机设备和存储介质
CN111435452B (zh) * 2019-01-11 2023-11-03 百度在线网络技术(北京)有限公司 模型训练方法、装置、设备和介质
CN109886275A (zh) * 2019-01-16 2019-06-14 深圳壹账通智能科技有限公司 翻拍图像识别方法、装置、计算机设备和存储介质
CN110069983A (zh) * 2019-03-08 2019-07-30 深圳神目信息技术有限公司 基于显示媒质的活体识别方法、装置、终端及可读介质
CN110414200B (zh) * 2019-04-08 2021-07-23 广州腾讯科技有限公司 身份验证方法、装置、存储介质和计算机设备
CN110135259A (zh) * 2019-04-15 2019-08-16 深圳壹账通智能科技有限公司 静默式活体图片识别方法、装置、计算机设备和存储介质
CN111860055B (zh) * 2019-04-29 2023-10-24 北京眼神智能科技有限公司 人脸静默活体检测方法、装置、可读存储介质及设备
CN111967289A (zh) * 2019-05-20 2020-11-20 高新兴科技集团股份有限公司 一种非配合式人脸活体检测方法及计算机存储介质
CN110232353B (zh) * 2019-06-12 2023-06-06 成都世纪光合作用科技有限公司 一种获取场景人员深度位置的方法和装置
CN110378219B (zh) * 2019-06-13 2021-11-19 北京迈格威科技有限公司 活体检测方法、装置、电子设备及可读存储介质
CN110245645B (zh) * 2019-06-21 2021-06-08 北京字节跳动网络技术有限公司 人脸活体识别方法、装置、设备及存储介质
CN110462633B (zh) * 2019-06-27 2023-05-26 深圳市汇顶科技股份有限公司 一种人脸识别的方法、装置和电子设备
CN110309767A (zh) * 2019-06-28 2019-10-08 广州致远电子有限公司 活体检测设备、识别方法、装置及存储介质
CN110363116B (zh) * 2019-06-28 2021-07-23 上海交通大学 基于gld-gan的不规则人脸矫正方法、系统及介质
CN112215045A (zh) * 2019-07-12 2021-01-12 普天信息技术有限公司 一种活体检测方法和装置
CN110490076B (zh) * 2019-07-18 2024-03-01 平安科技(深圳)有限公司 活体检测方法、装置、计算机设备和存储介质
CN110705392A (zh) * 2019-09-17 2020-01-17 Oppo广东移动通信有限公司 一种人脸图像检测方法及装置、存储介质
CN110765924B (zh) * 2019-10-18 2024-08-27 腾讯科技(深圳)有限公司 一种活体检测方法、装置以及计算机可读存储介质
TWI731503B (zh) * 2019-12-10 2021-06-21 緯創資通股份有限公司 活體臉部辨識系統與方法
CN111178341B (zh) * 2020-04-10 2021-01-26 支付宝(杭州)信息技术有限公司 一种活体检测方法、装置及设备
CN111507262B (zh) * 2020-04-17 2023-12-08 北京百度网讯科技有限公司 用于检测活体的方法和装置
CN111597944B (zh) * 2020-05-11 2022-11-15 腾讯科技(深圳)有限公司 活体检测方法、装置、计算机设备及存储介质
US11741606B2 (en) * 2020-07-02 2023-08-29 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body after removing hair for determining a user-specific hair removal efficiency value
CN112085701B (zh) * 2020-08-05 2024-06-11 深圳市优必选科技股份有限公司 Face blurriness detection method, apparatus, terminal device and storage medium
CN112084858A (zh) * 2020-08-05 2020-12-15 广州虎牙科技有限公司 Object recognition method and apparatus, electronic device and storage medium
CN112115831B (zh) * 2020-09-10 2024-03-15 深圳印像数据科技有限公司 Image preprocessing method for liveness detection
CN112287830A (zh) * 2020-10-29 2021-01-29 泰康保险集团股份有限公司 Image detection method and apparatus
CN112347904B (zh) * 2020-11-04 2023-08-01 杭州锐颖科技有限公司 Liveness detection method, apparatus and medium based on binocular depth and image structure
CN112329624A (zh) * 2020-11-05 2021-02-05 北京地平线信息技术有限公司 Liveness detection method and apparatus, storage medium and electronic device
CN112270288A (zh) * 2020-11-10 2021-01-26 深圳市商汤科技有限公司 Liveness recognition and access-control device control method and apparatus, and electronic device
CN112580472A (zh) * 2020-12-11 2021-03-30 云从科技集团股份有限公司 Fast and lightweight face recognition method, apparatus, machine-readable medium and device
JP6956986B1 (ja) * 2020-12-22 2021-11-02 株式会社スワローインキュベート Determination method, determination apparatus and determination program
CN112560742A (zh) * 2020-12-23 2021-03-26 杭州趣链科技有限公司 Face liveness detection method, apparatus and device based on multi-scale local binary patterns
CN112733669A (zh) * 2020-12-30 2021-04-30 中国移动通信集团江苏有限公司 Artificial intelligence (AI) recognition method, apparatus, device and computer storage medium
CN113158773B (zh) * 2021-03-05 2024-03-22 普联技术有限公司 Training method and training apparatus for a liveness detection model
CN113420597A (zh) * 2021-05-24 2021-09-21 北京三快在线科技有限公司 Roundabout recognition method and apparatus, electronic device and storage medium
CN113496215B (zh) * 2021-07-07 2024-07-02 浙江大华技术股份有限公司 Live face detection method, apparatus and electronic device
CN114550244A (zh) * 2022-02-11 2022-05-27 支付宝(杭州)信息技术有限公司 Liveness detection method, apparatus and device
CN114627534B (zh) * 2022-03-15 2024-09-13 平安科技(深圳)有限公司 Liveness discrimination method, electronic device and storage medium
CN115147705B (zh) * 2022-09-06 2023-02-03 平安银行股份有限公司 Face recapture detection method, apparatus, electronic device and storage medium
CN115512428B (zh) * 2022-11-15 2023-05-23 华南理工大学 Face liveness discrimination method, system, apparatus and storage medium
CN117723514B (zh) * 2024-02-07 2024-05-17 山西品东智能控制有限公司 Multi-index coal quality analyzer and detection method based on a sodium light source and intelligent algorithms

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766063B (zh) * 2015-04-08 2018-01-05 宁波大学 Live face recognition method
CN106778518B (zh) * 2016-11-24 2021-01-08 汉王科技股份有限公司 Face liveness detection method and apparatus
CN107220635A (zh) * 2017-06-21 2017-09-29 北京市威富安防科技有限公司 Face liveness detection method based on multiple spoofing modes

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105518708A (zh) * 2015-04-29 2016-04-20 北京旷视科技有限公司 Method, device and computer program product for verifying a live face
CN105389553A (zh) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Liveness detection method and apparatus
CN106096519A (zh) * 2016-06-01 2016-11-09 腾讯科技(深圳)有限公司 Liveness identification method and apparatus
CN106897675A (zh) * 2017-01-24 2017-06-27 上海交通大学 Face liveness detection method combining binocular-vision depth features with appearance features
CN107292267A (zh) * 2017-06-21 2017-10-24 北京市威富安防科技有限公司 Method for training a convolutional neural network against photo spoofing, and face liveness detection method
CN107818313A (zh) * 2017-11-20 2018-03-20 腾讯科技(深圳)有限公司 Liveness recognition method, apparatus, storage medium and computer device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178137A (zh) * 2019-12-04 2020-05-19 百度在线网络技术(北京)有限公司 Real face detection method, apparatus, electronic device and computer-readable storage medium
CN111178137B (zh) * 2019-12-04 2023-05-26 百度在线网络技术(北京)有限公司 Real face detection method, apparatus, electronic device and computer-readable storage medium
CN112183613A (zh) * 2020-09-24 2021-01-05 杭州睿琪软件有限公司 Object recognition method and device, and non-transitory computer-readable storage medium
CN112183613B (zh) * 2020-09-24 2024-03-22 杭州睿琪软件有限公司 Object recognition method and device, and non-transitory computer-readable storage medium
CN113221767A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 Method for training a live face recognition model and recognizing live faces, and related apparatus
CN113221766A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 Method for training a live face recognition model and recognizing live faces, and related apparatus
CN113221767B (zh) * 2021-05-18 2023-08-04 北京百度网讯科技有限公司 Method for training a live face recognition model and recognizing live faces, and related apparatus
CN114463801A (zh) * 2021-10-26 2022-05-10 马上消费金融股份有限公司 Model training method, liveness detection method, apparatus and electronic device
CN114463801B (zh) * 2021-10-26 2024-09-24 马上消费金融股份有限公司 Model training method, liveness detection method, apparatus and electronic device
CN115223022A (zh) * 2022-09-15 2022-10-21 平安银行股份有限公司 Image processing method, apparatus, storage medium and device
CN115223022B (zh) * 2022-09-15 2022-12-09 平安银行股份有限公司 Image processing method, apparatus, storage medium and device

Also Published As

Publication number Publication date
US20200257914A1 (en) 2020-08-13
CN107818313A (zh) 2018-03-20
CN107818313B (zh) 2019-05-14
US11176393B2 (en) 2021-11-16

Similar Documents

Publication Publication Date Title
WO2019096029A1 (zh) Liveness recognition method, storage medium and computer device
US11727720B2 (en) Face verification method and apparatus
WO2021077984A1 (zh) Object recognition method and apparatus, electronic device and readable storage medium
WO2022206319A1 (zh) Image processing method, apparatus, device, storage medium and computer program product
CN109359548B (zh) Multi-face recognition monitoring method and apparatus, electronic device and storage medium
CN108009528B (zh) Triplet Loss-based face authentication method, apparatus, computer device and storage medium
CN109948408B (zh) Liveness test method and device
CN112215180B (zh) Liveness detection method and apparatus
WO2018188453A1 (zh) Face region determination method, storage medium and computer device
CN111767900B (zh) Face liveness detection method, apparatus, computer device and storage medium
TWI766201B (zh) Liveness detection method, apparatus and storage medium
KR20210122855A (ko) Detection model training method and apparatus, computer device, and storage medium
WO2017101267A1 (zh) Face liveness identification method, terminal, server and storage medium
CN110223322B (zh) Image recognition method, apparatus, computer device and storage medium
US9892315B2 (en) Systems and methods for detection of behavior correlated with outside distractions in examinations
WO2022100337A1 (zh) Face image quality assessment method, apparatus, computer device and storage medium
CN112052831B (zh) Face detection method, apparatus and computer storage medium
US20230045306A1 (en) Face liveness detection method, system, apparatus, computer device, and storage medium
CN113436735A (zh) Body mass index prediction method, device and storage medium based on facial structure measurements
WO2024183465A1 (zh) Model determination method and related apparatus
WO2024183465A9 (zh) Model determination method and related apparatus
US11087121B2 (en) High accuracy and volume facial recognition on mobile platforms
CN114627534A (zh) Liveness discrimination method, electronic device and storage medium
WO2024212681A1 (zh) Method, apparatus, device and storage medium for determining a recognition result
Thorsen et al. Assessing face image quality with LSTMs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 18879302
Country of ref document: EP
Kind code of ref document: A1
122 Ep: pct application non-entry in european phase
Ref document number: 18879302
Country of ref document: EP
Kind code of ref document: A1