WO2023098128A1 - Liveness detection method and device, and training method and device for liveness detection system - Google Patents

Liveness detection method and device, and training method and device for liveness detection system

Info

Publication number
WO2023098128A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample
image
living body
body detection
face
Prior art date
Application number
PCT/CN2022/110111
Other languages
English (en)
French (fr)
Inventor
杨杰之
周迅溢
曾定衡
Original Assignee
马上消费金融股份有限公司
Priority date
Filing date
Publication date
Application filed by 马上消费金融股份有限公司
Priority to US18/568,910 (published as US20240282149A1)
Priority to EP22899948.8A (published as EP4345777A1)
Publication of WO2023098128A1

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06V 20/64 Scenes; Scene-specific elements; Type of objects; Three-dimensional objects
    • G06T 7/593 Image analysis; Depth or shape recovery from multiple images; from stereo images
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
    • G06V 10/776 Processing image or video features in feature spaces; Validation; Performance evaluation
    • G06V 10/803 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level; of input or preprocessed data
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V 40/16 Recognition of biometric, human-related or animal-related patterns; Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Human faces; Classification, e.g. identification
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Spoof detection; Detection of the body part being alive
    • G06T 2207/10004 Image acquisition modality; Still image; Photographic image
    • G06T 2207/10012 Image acquisition modality; Stereo images
    • G06T 2207/10152 Special mode during image acquisition; Varying illumination
    • G06T 2207/20081 Special algorithmic details; Training; Learning
    • G06T 2207/20084 Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/20221 Image combination; Image fusion; Image merging
    • G06T 2207/30201 Subject of image; Human being; Person; Face

Definitions

  • the present application relates to the technical field of face liveness detection, and in particular to a liveness detection method and device, a training method and device for a liveness detection system, an electronic device and a storage medium.
  • face liveness detection technology has become a key step in face recognition technology.
  • the detection results obtained through face liveness detection are not accurate enough, and there is a risk of recognizing a prosthetic face as a live face.
  • the present application proposes a living body detection method, device, electronic equipment and storage medium, which can solve the above problems.
  • an embodiment of the present application provides a living body detection method, including: acquiring a first target image captured by a first sensor for a face to be recognized and a second target image captured by a second sensor for the face to be recognized; using a pre-trained depth generation network to extract target depth information from the first target image and the second target image; and detecting the target depth information through a pre-trained living body detection model to obtain a living body detection result of the face to be recognized.
  • the living body detection model is obtained by training on depth information extracted from sample data.
  • the sample data includes a first sample image collected by the first sensor and a second sample image collected by the second sensor under at least two lighting environments, wherein both the first sample image and the second sample image include prosthetic faces of different materials.
  • an embodiment of the present application provides a training method for a living body detection system.
  • the living body detection system includes a deep generation network and a living body detection model.
  • the training method includes: acquiring, under at least two lighting environments, a first sample image obtained by the first sensor capturing a sample face and a second sample image obtained by the second sensor capturing the sample face, wherein the sample face includes prosthetic faces of different materials; inputting the first sample image and the second sample image into an initial generation network to train the initial generation network to obtain the depth generation network; using the depth generation network to extract depth information of the sample face from the first sample image and the second sample image; and inputting the depth information of the sample face into a neural network model to train the neural network model to obtain the living body detection model.
  • an embodiment of the present application provides a living body detection device, the device including: an image acquisition module, a depth generation module, and a living body detection module.
  • the image acquisition module is used to acquire the first target image captured by the first sensor for the face to be recognized and the second target image captured by the second sensor for the face to be recognized;
  • the depth generation module is used to extract target depth information from the first target image and the second target image through the pre-trained depth generation network;
  • the living body detection module is used to detect the target depth information through the pre-trained living body detection model to obtain the living body detection result of the face to be recognized, wherein the living body detection model is obtained by training on depth information extracted from sample data, and the sample data includes a first sample image collected by the first sensor and a second sample image collected by the second sensor under at least two lighting environments, wherein both the first sample image and the second sample image include prosthetic faces made of different materials.
  • an embodiment of the present application provides a training device for a living body detection system.
  • the living body detection system includes a deep generation network and a living body detection model.
  • the sample acquisition module is used to acquire, under at least two lighting environments, the first sample image captured by the first sensor for the sample face and the second sample image captured by the second sensor for the sample face, wherein the sample face includes prosthetic faces of different materials;
  • the network training module is used to input the first sample image and the second sample image into the initial generation network to train the initial generation network to obtain the depth generation network;
  • the depth extraction module is used to extract the depth information of the sample face from the first sample image and the second sample image by using the depth generation network;
  • the model training module is used to input the depth information of the sample face into the neural network model to train the neural network model to obtain a living body detection model.
  • an embodiment of the present application provides an electronic device, including: one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to execute the above-mentioned living body detection method or the training method of the living body detection system.
  • the embodiment of the present application provides a computer-readable storage medium, in which program code is stored, and the program code can be invoked by a processor to execute the above-mentioned living body detection method or the training method of the living body detection system.
  • the embodiment of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to implement the above-mentioned living body detection method or the training method of the living body detection system.
  • this application can obtain two target images collected by two sensors for the same face to be recognized, use the pre-trained depth generation network to extract the target depth information (that is, the depth information of the face to be recognized) from the two target images, and then use the pre-trained living body detection model to perform detection according to the target depth information and obtain the living body detection result of the face to be recognized.
  • the living body detection model is trained on depth information extracted from sample data, and the sample data includes, under at least two lighting environments, a first sample image collected by the first sensor and a second sample image collected by the second sensor, and both the first sample image and the second sample image include prosthetic faces made of different materials.
  • the technical solution of the present application can quickly obtain the depth information of a face from two images of the same face to be recognized by using a neural network, and determine the living body detection result of the face to be recognized according to the depth information, thereby realizing efficient and high-accuracy liveness detection.
  • the application can recognize prosthetic human faces under different lighting environments, so that the accuracy of living body detection is higher.
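As an illustration of the overall flow described above, the following sketch (Python/PyTorch) chains the two stages together: a depth generation network turns the stereo pair into target depth information, and a liveness detection model classifies that depth information. The function and module names, the input sizes and the 0.5 threshold are assumptions for illustration only, not identifiers taken from the patent.

```python
import torch

def liveness_inference(x_left, x_right, depth_generator, liveness_model, threshold=0.5):
    """Two-stage liveness check: stereo pair -> depth information -> live/spoof decision.

    x_left / x_right : tensors of shape (1, 3, 112, 112) from the first and second sensor.
    depth_generator  : pre-trained depth generation network (images -> pseudo depth map).
    liveness_model   : pre-trained classifier (pseudo depth map -> [prosthesis, living] scores).
    """
    with torch.no_grad():
        depth = depth_generator(x_left, x_right)               # e.g. (1, 1, 56, 56) pseudo depth map
        scores = torch.softmax(liveness_model(depth), dim=-1)  # normalised two-class scores
        live_score = scores[0, 1].item()
    return ("living body" if live_score >= threshold else "prosthesis"), live_score
```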
  • FIG. 1 shows a schematic diagram of an application environment of a living body detection method provided by an embodiment of the present application
  • Fig. 2 shows a schematic diagram of an application scenario of a living body detection method provided by an embodiment of the application
  • Fig. 3 shows a schematic flow chart of a living body detection method provided by an embodiment of the present application
  • Fig. 4 shows a schematic diagram of imaging of the first sensor and the second sensor provided by an embodiment of the present application
  • FIG. 5 shows a schematic diagram of a processing flow of a living body detection system provided by an embodiment of the present application
  • Fig. 6 shows a schematic flow chart of a living body detection method provided by another embodiment of the present application.
  • FIG. 7 shows a schematic diagram of an extraction process of target depth information provided by an embodiment of the present application.
  • FIG. 8 shows a schematic diagram of a processing process of central convolution provided by an embodiment of the present application.
  • FIG. 9 shows a schematic flowchart of a training method of a living body detection system provided by an embodiment of the present application.
  • Fig. 10 shows a schematic flow chart of the training process of the deep generation network in the living body detection system provided by an embodiment of the present application
  • Fig. 11 shows a schematic diagram of a stereo matching algorithm provided by an embodiment of the present application.
  • Fig. 12 shows a schematic flow chart of the training process of the living body detection model in the living body detection system provided by an embodiment of the present application
  • Fig. 13 shows a schematic diagram of the processing flow of the training device of the living body detection system provided by an embodiment of the present application
  • Fig. 14 shows a module block diagram of a living body detection device provided by an embodiment of the present application.
  • Fig. 15 shows a module block diagram of a training device of a living body detection system provided by an embodiment of the present application
  • Fig. 16 shows a structural block diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 17 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the inventors of the present application found after careful research that a pre-trained depth generation network can be used to extract target depth information from the two images collected by the two sensors, and liveness detection can then be performed on the target depth information using the pre-trained living body detection model, so that more accurate detection results can be obtained without increasing hardware costs.
  • FIG. 1 shows a schematic diagram of an application environment of a living body detection method provided by an embodiment of the present application.
  • the living body detection method and the training method of the living body detection system provided in the embodiments of the present application may be applied to electronic devices.
  • the electronic device may be, for example, the server 110 shown in FIG. 1 , and the server 110 may be connected to the image acquisition device 120 through a network.
  • the network is a medium used to provide a communication link between the server 110 and the image acquisition device 120 .
  • the network may include various connection types, such as wired communication links, wireless communication links, etc., which are not limited in this embodiment of the present application.
  • image capture device 120 may include a first sensor and a second sensor.
  • the user's face images can be collected by the first sensor and the second sensor, and then the collected face images can be sent to the server 110 through the network.
  • the server may perform liveness detection on the user based on these face images through the liveness detection method described in the embodiment of the present application.
  • these face images may include a first target image collected by the first sensor for the user, and a second target image collected by the second sensor for the user.
  • the server 110, the network and the image acquisition device 120 in FIG. 1 are only schematic; depending on implementation requirements, there may be any number of servers, networks and image acquisition devices.
  • the server 110 may be a physical server, or may be a server cluster composed of multiple servers
  • the image acquisition device 120 may be a mobile phone, a tablet, a camera, a notebook computer, and the like. It can be understood that the embodiment of the present application may also allow multiple image acquisition devices 120 to access the server 110 at the same time.
  • the electronic device may also be a smart phone, a tablet, a notebook computer, and the like.
  • the image acquisition device 120 may be integrated into an electronic device, for example, an electronic device such as a smart phone, a tablet, or a notebook computer may be equipped with two sensors.
  • the electronic device can collect a user's face image through these two sensors, and then perform liveness detection locally based on the collected face image.
  • if the living body detection passes, the user can continue to be further authenticated, and the collected face image and the detection result of the living body detection can also be displayed synchronously on the display interface of the electronic device.
  • the living body detection method and device, the living body detection system training method and device, electronic equipment and storage media provided by the present application will be described in detail below through specific embodiments.
  • Fig. 3 shows a schematic flowchart of a living body detection method provided by an embodiment of the present application. As shown in Figure 3, the living body detection method may specifically include the following steps:
  • Step S310 Obtain a first target image captured by the first sensor for the face to be recognized, and acquire a second target image captured by the second sensor for the face to be recognized.
  • the user's face image is usually collected in real time, and then the face image is recognized, and the user's identity is verified according to the face features in the face image.
  • before face recognition, it is necessary to use face liveness detection to determine whether the user in the current face image is a real person, so as to prevent others from posing as the user through photos, face masks, and the like.
  • in live face detection, by detecting the face image, it can be identified whether the face image is collected from a real person (the corresponding detection result is a living body) or obtained from a prosthetic face (the corresponding detection result is a prosthesis).
  • if the detection result is a living body, other processing procedures can be continued after passing the living body detection, for example, identity verification of the user can be performed.
  • the face to be recognized in the liveness detection can be the recognition object during face recognition, such as a face that is close to the image acquisition device for recognition in application scenarios such as security or face payment.
  • the recognition object may be a real user's face, or a forged prosthetic face.
  • the prosthetic human face may be a photo of a human face, a face mask or a printed paper human face, and the like.
  • the prosthetic human face may also be a virtual human face, such as an avatar generated based on a real human face.
  • the first sensor and the second sensor may be used to collect a face image of a face to be recognized.
  • the first sensor 430 and the second sensor 440 may be separated by a relatively short distance.
  • a first target image 450 captured by the first sensor 430 and a second target image 460 captured by the second sensor 440 can be obtained. That is to say, the first target image 450 and the second target image 460 are face images collected at different positions for the same face to be recognized.
  • the first target image and the second target image may have the same image size.
  • the first sensor 430 and the second sensor 440 may be arranged directly in front of the face to be recognized during image collection.
  • both the first sensor and the second sensor are located at the same level as the center point of the eyes of the face to be recognized.
  • for example, the face distance may be determined as the distance between the first sensor and the center point of the eyes of the face to be recognized and the distance between the second sensor and the center point of the eyes of the face to be recognized.
  • both the obtained first target image and the second target image can include the human face image of the human face to be recognized.
  • using two sensors to collect images separately can obtain more detailed image information of the face to be recognized, and then use these image information to obtain more accurate detection results during living body detection.
  • these image information can be higher-precision texture information, lighting information, etc. Using these texture information and lighting information, it is possible to detect prosthetic faces such as human face masks made of special materials.
  • these images may be transmitted to the electronic device for liveness detection.
  • both the first sensor and the second sensor may be visible light cameras, so the first target image and the second target image may be visible light images (which may be RGB images or grayscale images).
  • the first sensor and the second sensor may be separated by a relatively short distance, such as 1 decimeter.
  • the distance between the first sensor and the face to be recognized may be consistent with the distance between the second sensor and the face to be recognized.
  • the shooting angle of the face to be recognized by the first sensor may also be consistent with the shooting angle of the face to be recognized by the second sensor.
  • the first sensor and the second sensor may be disposed on the same binocular stereo sensor.
  • the first sensor may be the left eye sensor of the binocular stereo sensor
  • the second sensor may be the right eye sensor of the binocular stereo sensor.
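As a concrete and purely illustrative way to obtain such a pair of images, the snippet below grabs one frame from each camera of a binocular rig with OpenCV; the device indices, resolution and any rectification step depend on the actual hardware and are assumptions here.

```python
import cv2

# Device indices 0 and 1 stand in for the first (left) and second (right) sensor.
cap_left, cap_right = cv2.VideoCapture(0), cv2.VideoCapture(1)

ok_l, first_target_image = cap_left.read()    # image from the first sensor
ok_r, second_target_image = cap_right.read()  # image from the second sensor
if ok_l and ok_r:
    # Both frames picture the same face from slightly different viewpoints;
    # resize them to a common network input size (e.g. 112 x 112) before processing.
    first_target_image = cv2.resize(first_target_image, (112, 112))
    second_target_image = cv2.resize(second_target_image, (112, 112))

cap_left.release()
cap_right.release()
```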
  • Step S320 using a pre-trained depth generation network to extract target depth information from the first target image and the second target image.
  • the distance information between the two sensors and the face to be recognized can be determined by using the difference (parallax) between the first target image and the second target image respectively collected by the first sensor and the second sensor, and the distance information can then be used as the depth information of the face to be recognized.
  • the depth information of the face to be recognized can be obtained through calculation according to a stereo matching algorithm.
  • however, using a stereo matching algorithm to calculate depth information consumes more resources and time, which may lead to low detection efficiency, so it cannot be applied to application scenarios that require frequent liveness detection.
  • target depth information can be extracted from the first target image and the second target image through a pre-trained depth generation network.
  • the target depth information may also represent the distance information between the first sensor and the second sensor and the face to be recognized.
  • the depth generation network can use a lightweight generator, whose algorithm complexity is lower than that of the stereo matching algorithm, and depth information can be obtained with less resources, thereby improving the efficiency of liveness detection.
  • by using the target depth information extracted from the first target image and the second target image to perform liveness detection on the face to be recognized, it is possible to distinguish whether the face to be recognized is a living body or a prosthesis.
  • the face image of a real person and the face image of a prosthetic face present different features in the target depth information.
  • the depth features corresponding to the living face of a real person may be determined as the living body feature
  • the depth features corresponding to the prosthetic face may be determined as the prosthesis feature.
  • Step S330 Detect the target depth information through the pre-trained liveness detection model, and obtain the liveness detection result of the face to be recognized.
  • the living body detection model is obtained by training on depth information extracted from sample data, and the sample data includes a first sample image collected by the first sensor and a second sample image collected by the second sensor under at least two lighting environments, wherein both the first sample image and the second sample image include prosthetic faces made of different materials.
  • the target depth information obtained in the preceding steps and corresponding to the face to be recognized can be input into a pre-trained liveness detection model, so as to perform liveness detection on the face to be recognized.
  • the living body detection model can output the detection result of the face to be recognized based on the target depth information. It can be understood that the detection result of the liveness detection of the face to be recognized can be either a living body or a prosthesis.
  • a detection result of living body means that the living body detection model confirms that the face to be recognized is a real face; a detection result of prosthesis means that the living body detection model considers that the face to be recognized may not be a real face but a forged prosthetic face.
  • the living body detection model can be trained according to the depth information extracted from the sample data.
  • the sample data may be obtained by the first sensor and the second sensor jointly collecting images of the sample faces under different lighting environments. That is to say, the sample data may include a first sample image collected by the first sensor on the sample face and a second sample image collected by the second sensor on the same sample face under at least two lighting environments.
  • the different lighting environments may include lighting environments such as strong light, weak light, cloudy sunlight, etc., or may include multiple lighting environments with different color temperatures. Therefore, under different lighting environments, the first sensor and the second sensor collect images of the sample faces, and multiple sets of sample data corresponding to various lighting environments can be obtained, wherein each set of sample data includes the first sample image and the second sample image.
  • two sets of sample data can be obtained by collecting the same sample face in two environments of strong light and low light respectively, one set of sample data corresponds to the strong light environment, and the other set of sample data corresponds to the low light environment .
  • in this way, the living body detection model can be adapted to the liveness detection requirements in various lighting environments, and accurate liveness detection results can be obtained under all of these conditions.
  • the first sample image and the second sample image used during training may include prosthetic human faces made of different materials.
  • the sample faces used for capturing the sample images may include prosthetic faces made of different materials.
  • the sample face can be any of various prosthetic faces such as a paper photo, a paper face mask, a plastic face mask or a resin headgear, so the sample data can include face images collected by the first sensor and the second sensor from various prosthetic faces such as paper photos, paper face masks, plastic face masks or resin headgears.
  • the first sample image and the second sample image used during training may also include the real user's face image.
  • the living body detection model may compare the target depth information with living body features corresponding to real people and/or prosthetic features corresponding to prosthetic faces, and then obtain detection results of faces to be recognized.
  • the face of a real person is three-dimensional, so the target depth information extracted from the face image of a real person is diverse, while a prosthetic face is usually smooth, so the target depth information extracted from the face image of a prosthesis is usually relatively simple. Therefore, if the target depth information extracted from the first target image and the second target image of the face to be recognized is relatively diverse, the face to be recognized can be determined to be a living body (i.e., a real face).
  • the living body detection model may score the face to be recognized based on the target depth information: a living body detection score is calculated from the target depth information; when the score meets the preset detection threshold, it can be determined that the face to be recognized is a living body, and when the score meets the preset prosthesis threshold, it can be determined that the face to be recognized is a prosthesis.
  • the target detection score of the face to be recognized can be calculated based on the target depth information, and the target detection score is then compared with the preset detection threshold to determine whether the target detection score satisfies the preset condition; if it does, the face to be recognized is determined to be a living body.
  • the living body detection model can compare the target depth information with the corresponding depth features of the living body (ie living body features), and obtain the target detection score by calculating the similarity between the target depth information and the living body features. For example, the higher the similarity between the target depth information and the living body features, the higher the target detection score, and the lower the similarity between the target depth information and the living body features, the lower the target detection score.
  • the target detection score may be the probability that the living body detection model determines the face to be recognized as a living body. For example, the probability can be obtained by normalizing the similarity between the target depth information and the living body feature through a softmax model.
  • the preset detection threshold may be a detection threshold for detecting the face to be recognized as a living body.
  • the preset detection threshold may be preset, or the preset detection threshold may also be determined during the training process of the living body detection model. For example, the higher the target detection score obtained by the liveness detection model, the closer the face to be recognized is to a real face. Therefore, as an example, when the target detection score is greater than a preset detection threshold, it may be determined that the face to be recognized is a living body. As another example, when the target detection score is less than a preset detection threshold, it may be determined that the face to be recognized is a fake.
  • similarly, the target prosthesis score of the face to be recognized can be calculated based on the target depth information, and the target prosthesis score is then compared with the preset prosthesis threshold to determine whether the target prosthesis score satisfies the preset condition; if it does, the face to be recognized is determined to be a prosthesis.
  • the score of the target prosthesis can be obtained by calculating the similarity between the target depth information and the corresponding depth features of the prosthesis (i.e. prosthesis features).
  • the target prosthesis score may be the probability that the liveness detection model determines the face to be recognized as a prosthesis.
  • the preset fake threshold may be a detection threshold for detecting the face to be recognized as a fake.
  • the preset dummy threshold may be preset, or the preset dummy threshold may also be determined during the training process of the living body detection model.
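A minimal sketch of the scoring logic described in the preceding paragraphs, assuming the liveness detection model outputs two raw scores that are normalised with softmax; the 0.8 threshold is a placeholder, since the patent leaves the concrete threshold to configuration or training.

```python
import torch
import torch.nn.functional as F

def decide(logits, detection_threshold=0.8):
    """logits: tensor of shape (2,) holding raw scores for [prosthesis, living body]."""
    probs = F.softmax(logits, dim=0)              # normalise similarities into probabilities
    target_detection_score = probs[1].item()      # probability of "living body"
    target_prosthesis_score = probs[0].item()     # probability of "prosthesis"
    if target_detection_score >= detection_threshold:
        return "living body", target_detection_score
    return "prosthesis", target_prosthesis_score

label, score = decide(torch.tensor([0.3, 2.1]))   # -> ("living body", ~0.86)
```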
  • in summary, the living body detection method provided by this application acquires the first target image captured by the first sensor for the face to be recognized and the second target image captured by the second sensor for the same face to be recognized, uses the pre-trained depth generation network to extract the target depth information from the first target image and the second target image, and then detects the target depth information through the pre-trained living body detection model to obtain the living body detection result of the face to be recognized.
  • the living body detection model is trained by the depth information extracted from the sample data. Based on this, the living body detection method provided by this application extracts the target depth information from the two images collected by the first sensor and the second sensor through the depth generation network, and then uses the living body detection model to detect the target depth information to obtain the living body detection result. It can greatly reduce the consumption of computing resources, shorten the computing time, effectively improve the detection efficiency, and significantly improve the real-time performance of living body detection, especially suitable for actual living body detection scenarios.
  • moreover, the sample data used in the living body detection method provided by the present application includes the first sample image collected by the first sensor and the second sample image collected by the second sensor under at least two lighting environments, and both the first sample image and the second sample image include prosthetic faces of different materials, so the living body detection method provided by the present application can recognize prosthetic faces of different materials under different lighting environments, so that the accuracy of living body detection is higher.
  • this embodiment provides a living body detection method on the basis of the above embodiments, which fuses the first target image and the second target image to obtain a target fusion image, inputs the target fusion image into the depth generation network, and processes the target fusion image in the depth generation network to obtain the target depth information.
  • the target depth information extracted by the deep generative network can more accurately reflect the real features of the face to be recognized.
  • FIG. 6 shows a schematic flowchart of a living body detection method provided by another embodiment of the present application. Specifically, the following steps may be included:
  • Step S610 The first target image and the second target image are scaled down and then fused to obtain a target fused image.
  • if the target depth information is directly generated from the first target image and the second target image through the depth generation network, the size of the target depth information may become larger, causing the target depth information to be distorted and unable to accurately reflect the real features of the face to be recognized. Therefore, in the embodiment of the present application, the first target image and the second target image can be scaled down and then fused to obtain a target fusion image, and the target fusion image can then be input into the depth generation network.
  • the first target image and the second target image can be scaled down at the same time by downsampling, and then image fusion is performed on the two scaled down images to obtain the target fusion image .
  • the image sizes of the first target image and the second target image may be the same.
  • both have a resolution of 112 ⁇ 112 ⁇ 3.
  • after downsampling, two feature maps can be generated, and the two feature maps can then be fused to obtain a target fusion image with a resolution of, for example, 28 × 28 × 48.
  • specifically, the first target image XL and the second target image XR can be input into the F-CDCN shallow feature extractor, which includes FeatherNet and a central difference convolution module, to obtain the two feature maps g(XL) and g(XR) generated after downsampling, and the feature stacking method can then be used to perform image fusion on the two feature maps g(XL) and g(XR) to obtain the target fusion image.
  • the skeleton of the network is built with the structure of the lightweight network FeatherNetV2, and all the convolutions in the network are replaced by the central difference convolution.
  • the processing method of the central difference convolution can be shown in FIG. 8 .
  • the central difference convolution can be expressed as: y(p0) = θ · Σ_{pn∈R} w(pn) · (x(p0 + pn) - x(p0)) + (1 - θ) · Σ_{pn∈R} w(pn) · x(p0 + pn)
  • where y(·) is the output feature map, x(·) is the input feature map, w(·) denotes the convolution weights, p0 represents the current position on the input and output feature maps, pn represents a position within the local receptive field R, and θ ∈ [0, 1] can be used to measure the weight of the different semantic information.
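The following is one way to implement a central difference convolution in PyTorch, consistent with the formula above; it reuses the vanilla convolution output and subtracts the θ-weighted central term. The class name and the default θ = 0.7 are illustrative choices, not values fixed by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """Central difference convolution: a vanilla convolution minus a theta-weighted
    term built from x(p0) and the summed kernel weights (theta = 0 recovers an
    ordinary convolution)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out_normal = self.conv(x)
        if self.theta == 0:
            return out_normal
        # sum_n w(p_n) * x(p0) is equivalent to a 1x1 convolution with the spatially summed kernel
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum, stride=self.conv.stride)
        return out_normal - self.theta * out_center

y = CentralDifferenceConv2d(3, 8)(torch.rand(1, 3, 112, 112))   # -> shape (1, 8, 112, 112)
```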
  • Step S620 Input the target fusion image into the depth generation network, and process the target fusion image in the depth generation network to obtain target depth information.
  • the target fusion image can be input into the depth generation network to generate target depth information.
  • the deep generative network can be a pre-trained lightweight generator.
  • the algorithmic complexity of the deep generation network may be less than that of the stereo matching algorithm.
  • in the depth generation network, the target fusion image can be bilinearly upsampled and then processed by a Sigmoid activation function to finally generate the target depth information.
  • the generated target depth information may be presented in the form of a pseudo depth map, and the resolution of the pseudo depth map may be 56 × 56 × 1.
  • the process of reducing the first target image and the second target image in equal proportions, merging them, and using the deep generation network to obtain the target depth information can be uniformly processed in the NNB network. That is to say, in the embodiment of the present application, the target depth information can be obtained by directly inputting the first target image and the second target image into the NNB network.
  • Such a modular processing method can make the image processing process in liveness detection more concise.
  • in this embodiment, the target fusion image is obtained by reducing the first target image and the second target image in equal proportions and fusing them, the target fusion image is then input into the depth generation network, and the target fusion image is processed in the depth generation network to obtain the target depth information, so that the target depth information obtained by processing the two images collected by the first sensor and the second sensor with the depth generation network can more accurately reflect the real characteristics of the face to be recognized, which in turn makes the liveness detection results more authentic and reliable.
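Putting the pieces of this embodiment together, the sketch below approximates the NNB branch: a shared shallow extractor g(·) applied to both views, channel-wise feature stacking, bilinear upsampling and a Sigmoid head that emits a 56 × 56 × 1 pseudo depth map. Plain convolutions stand in for the FeatherNetV2/central-difference blocks of the actual F-CDCN extractor, and the channel counts merely follow the sizes mentioned in the text, so this is a shape-compatible illustration rather than the patented network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NNBGenerator(nn.Module):
    """Illustrative depth-generation branch: 112x112x3 per view in, 28x28 features,
    stacked to 48 channels, upsampled to a 56x56x1 pseudo depth map."""

    def __init__(self):
        super().__init__()
        self.extractor = nn.Sequential(               # stand-in for the shallow extractor g(.)
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(inplace=True),   # 112 -> 56
            nn.Conv2d(16, 24, 3, stride=2, padding=1), nn.BatchNorm2d(24), nn.ReLU(inplace=True),  # 56 -> 28
        )
        self.head = nn.Conv2d(48, 1, kernel_size=3, padding=1)

    def forward(self, x_left, x_right):
        # feature stacking of g(X_L) and g(X_R) -> fused features of size 28x28x48
        fused = torch.cat([self.extractor(x_left), self.extractor(x_right)], dim=1)
        fused = F.interpolate(fused, scale_factor=2, mode="bilinear", align_corners=False)  # 28 -> 56
        return torch.sigmoid(self.head(fused))        # pseudo depth map, shape (N, 1, 56, 56)

depth = NNBGenerator()(torch.rand(1, 3, 112, 112), torch.rand(1, 3, 112, 112))
```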
  • in the embodiment of the present application, sample data collected under at least two lighting environments are used to train the depth generation network and the living body detection model.
  • FIG. 9 shows a schematic flowchart of a training method for a living body detection system provided by an embodiment of the present application.
  • the liveness detection system can include a deep generative network and a liveness detection model.
  • the training method may include the following steps:
  • Step S910 Obtain a first sample image obtained by capturing the sample face by the first sensor under at least two lighting environments and a second sample image obtained by capturing the sample face by the second sensor.
  • sample faces include prosthetic faces of different materials.
  • sample data for training can be collected in advance.
  • the first sensor and the second sensor can be used to capture the same sample face under different lighting environments, so as to obtain the first sample image collected by the first sensor from the sample face and the second sample image collected by the second sensor from the sample face, which are used as sample data for training.
  • different lighting environments may include two or more lighting environments such as strong light, weak light, and shade or sunlight, or multiple lighting environments with different color temperatures, which is not limited in this embodiment of the present application.
  • the sample data may include a plurality of first sample images and a plurality of second sample images obtained by the first sensor and the second sensor collecting images of the sample faces under different lighting environments.
  • images of the sample face x1 can be collected under at least two lighting environments such as strong light, low light, and cloudy sunlight to obtain multiple sets of sample data.
  • each set of sample data may correspond to one lighting environment.
  • the first sample data corresponding to the strong light environment, the second sample data corresponding to the low light environment, the third sample data corresponding to the cloudy and sunny environment, etc. may be collected.
  • the first sample data may include the first sample image and the second sample image collected from the sample face x1 under strong light;
  • the second sample data may include the first sample image and the second sample image collected from the sample face x1 under low light;
  • at least another set of sample data such as the third sample data can also be obtained.
  • the sample data may include images of multiple sample faces.
  • the sample faces may include prosthetic faces of different materials, and may also include faces of multiple real users.
  • the sample data can be made more diverse, so that the trained liveness detection model can detect various human faces in different lighting environments.
  • Step S920 Input the first sample image and the second sample image into the initial generation network to train the initial generation network to obtain a deep generation network.
  • the pre-built initial generation network may be trained by using the first sample image and the second sample image to obtain a deep generation network.
  • the first sample image and the second sample image can be reduced in proportion and then fused to obtain a sample fusion image, and the sample fusion image is then input into the initial generation network for training to obtain the depth generation network. Similar to the process of obtaining the target fusion image in the previous embodiment, the first sample image and the second sample image can be reduced in proportion by downsampling, and image fusion can then be performed on the two reduced images to obtain the sample fusion image.
  • the first sample image and the second sample image may be input into an image processing unit including a FeatherNet and a central difference convolution module for processing to obtain two feature maps generated after downsampling.
  • the specific training process of inputting the first sample image and the second sample image into the initial generation network to train the initial generation network to obtain the depth generation network may include the following steps:
  • Step S1010 Based on the first sample image and the second sample image, use a stereo matching algorithm to calculate initial depth information.
  • the initial depth information calculated by the stereo matching algorithm may be used as supervision information to train the depth generation network.
  • the distance between the first sensor and the sample face is the same as the distance between the second sensor and the sample face.
  • the shooting angle between the first sensor and the sample face may also be consistent with the shooting angle between the second sensor and the sample face. Therefore, in the stereo matching algorithm, the initial depth information of the sample face can be calculated according to the intrinsic parameters of the first sensor and the second sensor and the parallax between them, wherein the initial depth information can represent the straight-line distance between the first sensor and the second sensor and the sample face.
  • the initial depth information may include distance information from the baseline midpoint of the first sensor and the second sensor to each spatial point on the face of the sample.
  • in the schematic of the stereo matching algorithm, Ol is the first sensor, Or is the second sensor, B is the baseline, f is the focal length, P is the position of the point on the sample face to be measured (for example, a spatial point on the sample face), which can be called the target point, and D is the straight-line distance from the target point P to the first sensor and the second sensor (for example, to the midpoint of the baseline between the two sensors).
  • xl and xr are the projections of the target point P onto the imaging planes of the first sensor and the second sensor respectively, and xol and xor are respectively the intersection points of the optical axes of the first sensor and the second sensor with the two imaging planes; xol and xor may be called the principal points of the images.
  • if the baselines of the first sensor and the second sensor are unified, it can be obtained by the principle of similar triangles that (B - ((xl - xol) - (xr - xor))) / (D - f) = B / D
  • so the initial depth information can be obtained: D = B · f / ((xl - xol) - (xr - xor))
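The relationship just derived is easy to check numerically; the helper below computes D = B · f / d from a rectified pair, with the disparity d measured relative to the two principal points. The numbers in the example are made up for illustration, not calibration data from the patent.

```python
def depth_from_disparity(x_l, x_r, x_ol, x_or, baseline, focal_px):
    """Straight-line distance D to the target point, D = B * f / ((x_l - x_ol) - (x_r - x_or)).

    x_l, x_r   : pixel coordinates of the target point in the left/right images.
    x_ol, x_or : pixel coordinates of the principal points of the two images.
    baseline   : baseline B between the sensors (metres); focal_px: focal length f in pixels.
    """
    disparity = (x_l - x_ol) - (x_r - x_or)
    return baseline * focal_px / disparity

# Example: 10 cm baseline, 800 px focal length, 40 px disparity -> about 2 m.
print(depth_from_disparity(420.0, 380.0, 0.0, 0.0, baseline=0.10, focal_px=800.0))
```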
  • each set of sample data (including the first sample image and the second sample image) and corresponding initial depth information can form a piece of training data. Furthermore, all training data can be combined into a training set. By inputting the training set into the initial generation network and the neural network model for training, a deep generation network and a living body detection model can be obtained.
  • the training set can be expressed, for example, as T = {(xl_i, xr_i, b_i, y_i)}, i = 1, ..., n, where:
  • xl is the first sample image collected by the first sensor and xr is the second sample image collected by the second sensor; the resolution of both can be 112 × 112 × 3, and the two are used as the input of the network;
  • b is the initial depth information obtained through the stereo matching algorithm and can be used as the depth label; the resolution of the initial depth information can be set to 56 × 56 × 1;
  • y is the classification label of the two categories "true" and "fake", which is used to indicate whether the sample face included in the corresponding training data is a living body or a prosthesis (for example, the classification label for "true" can be "1", indicating a living body, and the classification label for "fake" can be "0", indicating a prosthesis);
  • n is the number of training data in the training set.
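For concreteness, a minimal PyTorch Dataset mirroring this (xl, xr, b, y) structure might look as follows; the random tensors are placeholders for images and depth labels that would normally be loaded from the collected sample data.

```python
import torch
from torch.utils.data import Dataset

class LivenessTrainingSet(Dataset):
    """Each item bundles the two sample images, the stereo-matching depth label b
    and the real/fake classification label y."""

    def __init__(self, left_images, right_images, depth_labels, class_labels):
        self.left = left_images      # n x 3 x 112 x 112, from the first sensor
        self.right = right_images    # n x 3 x 112 x 112, from the second sensor
        self.depth = depth_labels    # n x 1 x 56 x 56, from the stereo matching algorithm
        self.label = class_labels    # n, with 1 = living body and 0 = prosthesis

    def __len__(self):
        return len(self.label)

    def __getitem__(self, i):
        return self.left[i], self.right[i], self.depth[i], self.label[i]

# Toy instance with random tensors, just to show the (xl, xr, b, y) tuple layout.
n = 4
trainset = LivenessTrainingSet(torch.rand(n, 3, 112, 112), torch.rand(n, 3, 112, 112),
                               torch.rand(n, 1, 56, 56), torch.randint(0, 2, (n,)))
x_l, x_r, b, y = trainset[0]
```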
  • the first sensor and the second sensor may both belong to one binocular stereo sensor.
  • the first sensor may be the left eye sensor of the binocular stereo sensor
  • the second sensor may be the right eye sensor of the binocular stereo sensor.
  • Step S1020 Input the first sample image and the second sample image into the initial generation network, and use the initial depth information as supervision information to train the initial generation network to obtain the depth generation network, so that the difference between the depth information extracted by the depth generation network from the first sample image and the second sample image and the initial depth information satisfies a preset difference condition.
  • the initial generation network may be trained using the first sample image and the second sample image in each piece of training data in the training set.
  • the initial depth information can be used as supervision information to train the initial generation network.
  • a loss function may be constructed in the initial generation network to represent the difference between the depth information extracted by the depth generation network from the first sample image and the second sample image and the initial depth information.
  • two loss functions can be constructed for the initial generation network: the first, L1_NNB, is a cross-entropy term representing the difference between the depth information generated by the network and the initial depth information B_i, i.e., the depth label b in the training set; the second, L2_NNB, is a relative depth loss.
  • i is used to represent the serial number of each training data in the training set.
  • K_contrast is a set of contrast convolution kernels used to compute the relative depth loss L2_NNB.
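A hedged sketch of these two supervision terms is given below: a pixel-wise binary cross-entropy against the stereo-matching depth label for L1_NNB, and a relative-depth term for L2_NNB that compares local depth contrasts of the prediction and the label. Because the patent's exact definition of K_contrast is not reproduced here, the eight neighbour-minus-centre 3 × 3 kernels used below are only one common choice, stated as an assumption.

```python
import torch
import torch.nn.functional as F

def contrast_kernels():
    """Assumed K_contrast: eight 3x3 kernels, each with +1 at one neighbour and -1 at
    the centre, so convolution yields centre-to-neighbour depth contrasts."""
    kernels = []
    for idx in range(9):
        if idx == 4:                      # skip the centre cell itself
            continue
        k = torch.zeros(1, 1, 3, 3)
        k[0, 0, idx // 3, idx % 3] = 1.0
        k[0, 0, 1, 1] = -1.0
        kernels.append(k)
    return torch.cat(kernels, dim=0)      # shape (8, 1, 3, 3)

def nnb_loss(pred_depth, depth_label, kernels):
    """L1_NNB: pixel-wise BCE against the depth label; L2_NNB: relative depth loss."""
    l1 = F.binary_cross_entropy(pred_depth, depth_label)
    l2 = F.mse_loss(F.conv2d(pred_depth, kernels, padding=1),
                    F.conv2d(depth_label, kernels, padding=1))
    return l1 + l2

# pred and label are (N, 1, 56, 56) pseudo depth maps with values in [0, 1].
loss = nnb_loss(torch.rand(2, 1, 56, 56), torch.rand(2, 1, 56, 56), contrast_kernels())
```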
  • in this way, the depth information obtained by the depth generation network can more accurately reflect the real characteristics of the sample face, so that accurate and reliable detection results can be obtained when performing face liveness detection.
  • in addition, the method of extracting the target depth information through the depth generation network is used instead of calculating the initial depth information through the stereo matching algorithm, which can greatly reduce the consumption of computing resources, shorten the computing time, effectively improve the detection efficiency, and improve the real-time performance of liveness detection.
  • Step S930 Using the depth generation network to extract the depth information of the sample face from the first sample image and the second sample image.
  • the training process of the deep generation network and the training process of the living body detection model can be performed simultaneously, that is, the training of the deep generation network and the training of the living body detection network can be performed synchronously by using the training data in the training set.
  • during this joint training, the initial generation network at the current iteration can be used to extract depth information from the first sample image and the second sample image, and the depth information is input into the neural network model to train the neural network model. That is to say, the training iteration process of the initial generation network and the training iteration process of the neural network model can be nested together so that they converge together. It should be understood that the initial generation network at this point may not yet have reached the training target, so the depth information may not be the best depth information and may still differ considerably from the initial depth information. The depth information generated during each adjustment step can nevertheless be input into the pre-built neural network model for training. After both training iterations are completed, the current initial generation network and neural network model are determined to be the depth generation network and the living body detection model.
  • Step S940 Input the depth information of the sample face into the neural network model to train the neural network model to obtain a living body detection model.
  • the depth information can be detected in the neural network model to obtain the living body detection result of the sample face.
  • the detection result can be compared with the target label pre-marked for the sample face, and the neural network model can be trained based on the comparison result, so that the detection difference between the detection result obtained by the living body detection model and the target label meets the preset detection condition.
  • the specific training process of inputting the depth information into the neural network model for training to obtain the living body detection model may include the following steps:
  • Step S1210 Input the depth information of the sample face into the neural network model to obtain the liveness detection score of the sample face.
  • the living body detection score is the probability that the neural network model determines the classification label of the sample face as the pre-marked target label.
  • the sample faces can be scored based on depth information.
  • in this way, the liveness detection score of the sample face can be obtained. Since each sample face is labeled with a classification label in the training set, during the training process the liveness detection score can be the probability that the sample face is determined, according to the depth information, to belong to the pre-marked classification label (i.e., the target label).
  • the liveness detection score can be obtained by calculating the similarity between the depth information and the depth features represented by the pre-marked classification labels.
  • for example, the liveness detection score of the sample face output by the neural network model may be the probability that the sample face is determined to be a living body.
  • the living body detection score can be obtained by normalizing the similarity between the depth information and the living body features, for example, through a softmax model.
  • Step S1220 Determine the detection error based on the living body detection score.
  • specifically, the classification label obtained by the neural network model's binary classification of the sample face can be determined based on the output liveness detection score, and the detection error between this classification label and the pre-annotated target label can then be calculated.
  • Step S1230 Adjusting the neural network model based on the detection error to obtain a living body detection model, so that the detection error of the living body detection model satisfies a preset error condition.
  • a living body detection score corresponding to a detection error satisfying a preset error condition may also be determined as a preset detection threshold.
  • a loss function for liveness detection can be constructed based on the classification labels output by the neural network model and the pre-marked target labels.
  • FocalLoss can be used to define the loss function L NNC of liveness detection.
  • L NNC can take the focal-loss form over the predicted classification labels and the target labels, where a t is a custom weighting parameter, the predicted label for the i-th sample data is the classification label output by the neural network model for the sample face it contains, and Y i is the target label in the i-th sample data (a sketch of a standard binary focal loss follows below).
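The sketch below is a hedged rendering of such a loss as a standard binary focal loss: alpha_t plays the role of the custom weighting parameter a t mentioned above, while the focusing exponent gamma, the clamping constant, and the mean reduction are assumptions, not values taken from the description.

```python
import torch

def focal_loss(pred_probs: torch.Tensor, targets: torch.Tensor,
               alpha_t: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Standard binary focal loss over predicted live-class probabilities.

    pred_probs: probability of the 'live' class per sample face, shape (N,).
    targets:    target labels, 1 for live and 0 for prosthetic, shape (N,).
    """
    # p_t is the probability the model assigns to the true class of each sample.
    p_t = torch.where(targets == 1, pred_probs, 1.0 - pred_probs)
    loss = -alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-7))
    return loss.mean()

# Example: liveness detection scores for four sample faces and their labels.
scores = torch.tensor([0.9, 0.2, 0.7, 0.1])
labels = torch.tensor([1, 0, 1, 1])
print(focal_loss(scores, labels))
```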
  • the loss function L NNC can be used to compute the detection error of the neural network model on the training set.
  • by continuously adjusting the model parameters of the neural network model and recomputing the liveness detection loss after every preset number of adjustments, a neural network model whose loss satisfies the preset condition (for example, whose detection error, as represented by the loss function, is less than a preset error threshold) can finally be obtained, and the neural network model at this point can be used as the living body detection model.
  • a living body detection score corresponding to a detection error satisfying a preset condition may also be determined as a preset detection threshold.
  • the classification label can be expressed as the output of the neural network model applied to the depth information of the corresponding sample, where NNC(.) denotes the output of the neural network model.
  • the pre-built neural network model can be an NNC network composed of a bottleneck layer, a downsampling layer, a flow module, and a fully connected layer.
  • the depth information may be presented in the form of a pseudo-depth map
  • the downsampling layer may sample the resolution of the pseudo-depth map to a size of 7 ⁇ 7.
  • a fully connected layer can consist of 2 neurons and a softmax activation function. Then, the NNC network is trained through the pseudo-depth map.
  • after the pseudo-depth map is input into the NNC network, the NNC network performs binary classification of the sample face according to the pseudo-depth map and computes scores for the living label and the prosthetic label; the liveness detection loss L NNC is then used to measure the error between the predicted classification label and the true classification label, and the NNC network is optimized by adjusting the model parameters of each of its layers to reduce the classification error and thereby improve the accuracy of liveness detection (a minimal sketch of such a classifier follows below).
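The sketch below assembles a classifier from only the layer types named above (a bottleneck, downsampling of the 56x56 pseudo-depth map to 7x7, a flow module, and a 2-neuron fully connected layer with softmax); the channel widths and the internal structure of the bottleneck and flow blocks are placeholders, since the description does not specify them.

```python
import torch
import torch.nn as nn

class NNCClassifier(nn.Module):
    """Minimal stand-in for the NNC network described above."""
    def __init__(self):
        super().__init__()
        # Bottleneck: 1x1 convolutions around a 3x3 convolution (widths assumed).
        self.bottleneck = nn.Sequential(
            nn.Conv2d(1, 16, 1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 1), nn.ReLU(),
        )
        # Downsampling: bring the 56x56 pseudo-depth map down to 7x7.
        self.downsample = nn.AdaptiveAvgPool2d(7)
        # "Flow module": represented here by a plain convolution block placeholder.
        self.flow = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        # Fully connected layer with 2 neurons; softmax gives live/prosthetic scores.
        self.fc = nn.Linear(32 * 7 * 7, 2)

    def forward(self, pseudo_depth: torch.Tensor) -> torch.Tensor:
        x = self.bottleneck(pseudo_depth)
        x = self.downsample(x)
        x = self.flow(x)
        logits = self.fc(x.flatten(1))
        return torch.softmax(logits, dim=1)  # [p_live, p_prosthetic] per sample

scores = NNCClassifier()(torch.rand(4, 1, 56, 56))
print(scores.shape)  # torch.Size([4, 2])
```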
  • all convolutions in the neural network model can also be replaced by central difference convolution.
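Central difference convolution can be implemented as a vanilla convolution minus a theta-weighted term built from the spatially summed kernel, which follows algebraically from the weighted combination of the difference term and the plain convolution term described elsewhere in this document; the sketch below uses that equivalent form, with theta = 0.7 and the layer sizes chosen only for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDConv2d(nn.Conv2d):
    """Central difference convolution: y = sum(w * x_neighbourhood)
    - theta * x(p0) * sum(w), i.e. a vanilla convolution minus a theta-weighted
    1x1 convolution whose kernel is the spatial sum of the learned weights."""

    def __init__(self, *args, theta: float = 0.7, **kwargs):
        super().__init__(*args, **kwargs)
        self.theta = theta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = super().forward(x)
        if self.theta == 0:
            return out
        kernel_sum = self.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum, bias=None,
                              stride=self.stride, padding=0, groups=self.groups)
        return out - self.theta * out_center

# Drop-in replacement for a 3x3 convolution in the model.
layer = CDConv2d(3, 16, kernel_size=3, padding=1, theta=0.7)
print(layer(torch.rand(1, 3, 112, 112)).shape)  # torch.Size([1, 16, 112, 112])
```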
  • the training process of the depth generation network and the training process of the liveness detection model can be carried out simultaneously; that is to say, a single objective function can be used to jointly represent the difference between the depth information output by the initial generation network and the initial depth information, as well as the detection error of the neural network model.
  • by iteratively adjusting the network parameters of the initial generation network and the model parameters of the neural network model, and recomputing the objective function after every preset number of adjustments, a depth generation network and a liveness detection network whose objective function satisfies the preset condition (for example, is less than a target threshold) can finally be obtained.
  • by continuously adjusting the model parameters of the neural network model, a liveness detection model whose detection error satisfies the preset error condition can be obtained.
  • the preset detection threshold of the liveness detection model can also be obtained in this process. Therefore, when the liveness detection model is used to perform liveness detection on the face to be recognized, once the target detection score is determined to meet the preset condition (for example, to be greater than the detection threshold), the face to be recognized can be determined to be a living body; at the same time, the detection error is constrained within a reasonable range, which improves the accuracy of living body detection (a minimal inference sketch follows below).
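A minimal sketch of that inference-time decision is given below; the model interfaces (a generator taking the two target images and a classifier returning [p_live, p_prosthetic] scores) and the 0.5 default threshold are assumptions, since in the described scheme the threshold would come from the training process itself.

```python
import torch

def is_live(depth_generator, liveness_model, img_left, img_right,
            detection_threshold: float = 0.5) -> bool:
    """Hypothetical helper: extract target depth information, score it with the
    liveness detection model, and compare against the preset detection threshold."""
    with torch.no_grad():
        target_depth = depth_generator(img_left, img_right)  # pseudo-depth map
        scores = liveness_model(target_depth)                # [p_live, p_prosthetic]
        target_detection_score = scores[0, 0].item()
    return target_detection_score > detection_threshold
```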
  • FIG. 13 shows a schematic flowchart of a model training process in a living body detection method provided by an embodiment of the present application.
  • the first sample image (XL) and the second sample image (XR), collected from the sample face by the first sensor and the second sensor respectively, can be input into two F-CDCN shallow feature extractors with the same structure for feature extraction.
  • the feature extractor can combine the functions of FeatherNet and the central difference convolution module. After downsampling and other processing, two feature maps g(XL) and g(XR) can be obtained.
  • the image fusion of g(XL) and g(XR) can be performed by means of feature stacking to obtain the sample fusion image Z.
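Feature stacking here can be read as concatenation of the two feature maps along the channel dimension; the sketch below assumes 24-channel, 28x28 feature maps so that the fused result matches the 28x28x48 fusion resolution mentioned earlier for the target images.

```python
import torch

# Hypothetical feature maps produced by the two same-structure extractors for
# the left and right sample images: (batch, channels, height, width).
g_xl = torch.rand(8, 24, 28, 28)
g_xr = torch.rand(8, 24, 28, 28)

# Feature stacking: concatenate along the channel dimension to get the sample
# fusion image Z.
z = torch.cat([g_xl, g_xr], dim=1)
print(z.shape)  # torch.Size([8, 48, 28, 28])
```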
  • the sample fused image Z can be input to the lightweight generator.
  • the depth information can be generated by bilinear upsampling and Sigmoid activation function under the supervision of the initial depth information.
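A minimal sketch of such a lightweight generator is shown below: a single 3x3 convolution stands in for whatever lightweight head the actual generator uses, followed by the bilinear upsampling and Sigmoid activation named above, so the output is a 56x56x1 pseudo-depth map with values in (0, 1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightGenerator(nn.Module):
    """Sketch of the lightweight generator: convolution head (placeholder),
    bilinear upsampling to 56x56, then a sigmoid."""

    def __init__(self, in_channels: int = 48):
        super().__init__()
        self.head = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        x = self.head(fused)                                   # (N, 1, 28, 28)
        x = F.interpolate(x, size=(56, 56), mode="bilinear",
                          align_corners=False)                 # bilinear upsampling
        return torch.sigmoid(x)                                # pseudo-depth in (0, 1)

pseudo_depth = LightweightGenerator()(torch.rand(8, 48, 28, 28))
print(pseudo_depth.shape)  # torch.Size([8, 1, 56, 56])
```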
  • the initial depth information may be the distance from the first sensor and the second sensor to the sample face calculated by the stereo matching algorithm for the first sample image (XL) and the second sample image (XR).
  • the depth information may be presented in the form of a pseudo depth map, and the initial depth information may be presented in the form of a depth map.
  • the resolution of the pseudo depth map is consistent with the resolution of the depth map, for example, it may be 56 ⁇ 56 ⁇ 1.
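The supervising depth values come from the classical stereo relation between disparity, focal length, and baseline (depth = focal_length * baseline / disparity), which is what the similar-triangles derivation in the original description reduces to; the numbers below are illustrative only.

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_length: float,
                         baseline: float) -> np.ndarray:
    """Classical stereo relation D = f * B / d; focal length in pixels,
    baseline and returned depth in the same physical unit."""
    d = np.clip(disparity, 1e-6, None)  # guard against division by zero
    return focal_length * baseline / d

# Example: a toy 56x56 disparity map (in pixels) from stereo matching.
disparity = np.full((56, 56), 12.0)
depth = depth_from_disparity(disparity, focal_length=800.0, baseline=0.1)
print(float(depth[0, 0]))  # ~6.67, in the same unit as the baseline
```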
  • the depth information is input into the NNC network, and the depth information can be used in the NNC network to classify the sample faces.
  • in order to control the output error of the lightweight generator and the NNC network, two loss functions L1 NNB and L2 NNB can be constructed for the lightweight generator, and one loss function L NNC can be constructed for the NNC network.
  • the four algorithm modules, namely the feature extractor, feature fusion, lightweight generator, and NNC network, are trained using face images collected by the first sensor and the second sensor under different lighting environments, while using the depth information obtained by the stereo matching algorithm as supervision; this can address problems such as occluded regions, color style, and eyebrow ghosting that arise when a prosthetic face replaces a real face (a joint training sketch follows below).
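The sketch below strings the four modules together for one joint optimization step under the combined objective L1_NNB + L2_NNB + L_NNC. Everything module-internal is a stand-in: the real extractors are the F-CDCN shallow feature extractors (here a single shared strided convolution, so weight sharing is an assumption), and the binary cross-entropy and neighbour-contrast terms are only placeholders for the L1_NNB and L2_NNB losses, whose exact forms are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-ins for the four modules (feature extractor, feature fusion,
# lightweight generator, NNC network); all widths are placeholders.
extractor = nn.Sequential(nn.Conv2d(3, 24, 3, stride=4, padding=1), nn.ReLU())  # 112 -> 28
generator = nn.Sequential(
    nn.Conv2d(48, 1, 3, padding=1),
    nn.Upsample(size=(56, 56), mode="bilinear", align_corners=False),
    nn.Sigmoid(),
)
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(7), nn.Flatten(), nn.Linear(49, 2))

optimizer = torch.optim.Adam(
    list(extractor.parameters()) + list(generator.parameters()) + list(classifier.parameters()),
    lr=1e-4,
)

# One training step on a dummy batch: left/right sample images, stereo depth
# labels rescaled to [0, 1], and live/prosthetic class labels.
xl, xr = torch.rand(4, 3, 112, 112), torch.rand(4, 3, 112, 112)
depth_label = torch.rand(4, 1, 56, 56)
class_label = torch.randint(0, 2, (4,))

z = torch.cat([extractor(xl), extractor(xr)], dim=1)         # feature stacking
pseudo_depth = generator(z)                                  # 56x56x1 pseudo-depth map
logits = classifier(pseudo_depth)                            # live / prosthetic scores

l1_nnb = F.binary_cross_entropy(pseudo_depth, depth_label)   # stand-in for L1_NNB
l2_nnb = F.l1_loss(pseudo_depth[..., 1:] - pseudo_depth[..., :-1],
                   depth_label[..., 1:] - depth_label[..., :-1])  # stand-in for L2_NNB
l_nnc = F.cross_entropy(logits, class_label)                 # stand-in for L_NNC
loss = l1_nnb + l2_nnb + l_nnc                               # LOSS = L1_NNB + L2_NNB + L_NNC

optimizer.zero_grad()
loss.backward()
optimizer.step()
```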
  • the first sensor and the second sensor may jointly form a binocular stereo sensor.
  • FIG. 14 shows a module block diagram of a living body detection device provided by an embodiment of the present application.
  • the device may include: an image acquisition module 1410 , a depth generation module 1420 and a living body detection module 1430 .
  • the image acquisition module 1410 is used to acquire the first target image captured by the first sensor for the face to be recognized and the second target image captured by the second sensor for the same face to be recognized;
  • the depth generation module 1420 is used to extract the target depth information from the first target image and the second target image by using the pre-trained depth generation network;
  • the living body detection module 1430 is used to detect the target depth information through the pre-trained living body detection model to obtain the liveness detection result of the face to be recognized.
  • the living body detection model can be trained by depth information extracted from sample data.
  • the sample data may include a first sample image captured by the first sensor and a second sample image captured by the second sensor under at least two lighting environments.
  • both the first sample image and the second sample image include prosthetic human faces made of different materials.
  • the aforementioned living body detection module 1430 may include: a score calculation module, configured to input the target depth information into the living body detection model to obtain the target detection score of the face to be recognized being detected as a living body; and a score judgment module, configured to determine the face to be recognized as a living body when the target detection score meets the preset detection threshold.
  • the above-mentioned depth generation module 1420 may include: an image fusion module, configured to fuse the first target image and the second target image to obtain a target fusion image; and a depth generation submodule, configured to input the target fusion image into the depth generation network and process it there to obtain the target depth information.
  • the above-mentioned image fusion module may also be used to scale down the first target image and the second target image, and fuse the two scaled down images.
  • FIG. 15 shows a block diagram of a training device of a living body detection system provided by an embodiment of the present application.
  • the liveness detection system can include a deep generative network and a liveness detection model.
  • the device may include: a sample acquisition module 1510 , a network training module 1520 , a depth extraction module 1530 and a model training module 1540 .
  • the sample acquisition module 1510 is configured to acquire the first sample image obtained by the first sensor collecting the sample face under at least two lighting environments and the second sample image obtained by the second sensor collecting the same sample face, wherein the sample faces include prosthetic faces made of different materials;
  • the network training module 1520 is used to input the first sample image and the second sample image into the initial generation network to train the initial generation network and obtain the depth generation network;
  • the depth extraction module 1530 is used to extract the depth information of the sample face from the first sample image and the second sample image by using the depth generation network;
  • the model training module 1540 is used to input the depth information into the pre-built neural network model to train the neural network model and obtain the living body detection model.
  • the above-mentioned network training module 1520 may include: a stereo matching module, used to calculate initial depth information with a stereo matching algorithm from the first sample image and the second sample image; and a supervision module, used to input the first sample image and the second sample image into the initial generation network and train it with the initial depth information as supervision information to obtain the depth generation network, so that the difference between the depth information extracted from the first sample image and the second sample image by the depth generation network and the initial depth information satisfies a preset difference condition.
  • the network training module 1520 may further include: a sample fusion module, configured to fuse the first sample image and the second sample image to obtain a sample fusion image; the network training submodule , which is used to input the sample fusion image into the initial generation network for training to obtain a deep generation network.
  • the above sample fusion module may also be used to scale down the first sample image and the second sample image, and fuse the two scaled down images.
  • the above-mentioned model training module 1540 may include: a sample score calculation module, configured to input depth information into the neural network model to obtain a liveness detection score of a sample human face, wherein the liveness detection score is the sample human face The classification label of the face is determined as the probability of the pre-marked target label; the error determination module is used to determine the detection error based on the live body detection score, and adjust the neural network model based on the detection error to obtain the living body detection model, so that the living body The detection error of the detection model satisfies a preset error condition.
  • model training module 1540 may further include: a model training sub-module, configured to determine a living body detection score corresponding to a detection error satisfying a preset error condition as a preset detection threshold.
  • the first sensor and the second sensor together form a binocular stereo sensor.
  • the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or modules may be electrical, mechanical or otherwise.
  • each functional module in each embodiment of the present application may be integrated into one processing module, each module may exist separately physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules.
  • FIG. 16 shows a structural block diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device in this embodiment may include one or more of the following components: a processor 1610, a memory 1620, and one or more application programs, wherein the one or more application programs may be stored in the memory 1620 and configured to be executed by the one or more processors 1610, the one or more application programs being configured to execute the methods described in the foregoing method embodiments.
  • the electronic device may be any of various types of computer system devices that are mobile, portable, and perform wireless communication.
  • the electronic device can be a mobile phone or smart phone (for example, an iPhone TM-based or Android TM-based phone), a portable game device (such as a Nintendo DS TM, PlayStation Portable TM, Gameboy Advance TM, or iPhone TM), a laptop computer, a PDA, a portable Internet device, a music player or data storage device, or another handheld device such as a smart watch, smart bracelet, earphone, or pendant.
  • the electronic device can also be another wearable device (for example, electronic glasses, electronic clothes, an electronic bracelet, an electronic necklace, an electronic tattoo, or a head-mounted device (HMD)).
  • the electronic device may also be any of a number of electronic devices including, but not limited to, cellular phones, smart phones, smart watches, smart bracelets, other wireless communication devices, personal digital assistants, audio players, other media players, music recorders, video recorders, cameras, other media recorders, radios, medical equipment, vehicle transportation instruments, calculators, programmable remote controls, pagers, laptop computers, desktop computers, printers, netbook computers, personal digital assistants (PDAs), portable multimedia players (PMPs), Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) players, portable medical devices, digital cameras, and combinations thereof.
  • an electronic device can perform multiple functions (eg, play music, display video, store pictures, and receive and send phone calls).
  • the electronic device may be a device such as a cellular phone, media player, other handheld device, wrist watch device, pendant device, earpiece device, or other compact portable device.
  • the electronic device can also be a server, for example an independent physical server, a server cluster or distributed system composed of multiple physical servers, a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms, or a specialized or platform server for face recognition, autonomous driving, industrial Internet services, data communications (such as 4G and 5G), and the like.
  • Processor 1610 may include one or more processing cores.
  • the processor 1610 uses various interfaces and lines to connect the various parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing the instructions, application programs, code sets, or instruction sets stored in the memory 1620 and by calling the data stored in the memory 1620.
  • the processor 1610 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
  • the processor 1610 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), an image processor (Graphics Processing Unit, GPU), a modem, and the like.
  • the CPU mainly handles the operating system, user interface and application programs, etc.
  • the GPU is used to render and draw the displayed content
  • the modem is used to handle wireless communication. It can be understood that the above modem may also not be integrated into the processor 1610, but implemented by a communication chip alone.
  • the memory 1620 may include random access memory (Random Access Memory, RAM), and may also include read-only memory (Read-Only Memory). Memory 1620 may be used to store instructions, applications, codes, sets of codes, or sets of instructions.
  • the memory 1620 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system and instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.) , instructions for implementing the following method embodiments, and the like.
  • the data storage area may also store data created by the electronic device during use (such as a phone book, audio and video data, and chat records).
  • FIG. 17 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • Program codes are stored in the computer-readable storage medium 1700, and the program codes can be invoked by a processor to execute the methods described in the foregoing method embodiments.
  • the computer readable storage medium 1700 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 1700 includes a non-transitory computer-readable storage medium (non-transitory computer-readable storage medium).
  • the computer-readable storage medium 1700 has a storage space for program code 1710 for executing any method steps in the above methods. These program codes can be read from or written into one or more computer program products.
  • Program code 1710 may, for example, be compressed in a suitable form.
  • the computer-readable storage medium 1700 may be, for example, a read-only memory (Read-Only Memory, ROM for short), a random access memory (Random Access Memory, RAM for short), SSD, an electrically erasable programmable read-only memory (Electrically Erasable Programmable read only memory, referred to as EEPROM) or flash memory (Flash Memory, referred to as Flash), etc.
  • in some embodiments, a computer program product or computer program is provided, comprising computer instructions stored on a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the steps in the foregoing method embodiments.
  • the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • in essence, or for the part that contributes to the prior art, the technical solution of the present application can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, SSD, or Flash) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods of the embodiments of the present application.
  • the present application provides a living body detection method, device, electronic equipment, and storage medium. Specifically, the present application can obtain the first target image collected by the first sensor for the face to be recognized and the second target image collected by the second sensor for the same face to be recognized, use the pre-trained depth generation network to extract the target depth information from the first target image and the second target image, and then detect the target depth information with a pre-trained liveness detection model to obtain the liveness detection result of the face to be recognized.
  • the living body detection model is obtained by training depth information extracted from sample data, and the sample data includes, under at least two lighting environments, a first sample image collected by the first sensor and a second sample image collected by the second sensor, And both the first sample image and the second sample image include prosthetic human faces made of different materials.
  • this application extracts the target depth information from the two images collected by the first sensor and the second sensor through the depth generation network, and then uses the living body detection model to detect the target depth information to obtain the living body detection result; this can greatly reduce the consumption of computing resources, shorten the calculation time, effectively improve the detection efficiency, and significantly improve the real-time performance of liveness detection, making it especially suitable for practical liveness detection scenarios.
  • the living body detection method provided by the present application can recognize prosthetic human faces made of different materials under different lighting environments, so that the accuracy of living body detection is higher.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

本申请公开一种活体检测方法及装置、活体检测系统的训练方法及装置、电子设备及存储介质。该活体检测方法通过获取第一传感器和第二传感器对同一待识别人脸分别采集得到的两张目标图像,用深度生成网络从两个目标图像中提取目标深度信息并用活体检测模型对目标深度信息进行检测得到待识别人脸的活体检测结果。活体检测模型由样本数据的深度信息训练得到,样本数据包括至少两种光照下第一传感器采集的第一样本图像和第二传感器采集的第二样本图像,两个样本图像中均包括不同材质的假体人脸。通过深度生成网络对两个图像提取目标深度信息,用活体检测模型对目标深度信息进行活体检测,可识别不同光照下不同材质的假体人脸,使识别准确率更高。

Description

活体检测方法及装置、活体检测系统的训练方法及装置 技术领域
本申请涉及人脸活体检测技术领域,尤其涉及一种活体检测方法及装置、活体检测系统的训练方法及装置,以及电子设备和存储介质。
发明背景
随着人脸识别技术的发展,人脸活体检测技术成为人脸识别技术中的关键步骤。但是,经过人脸活体检测得到的检测结果还不够准确,存在将假体人脸识别为活体人脸的风险。
发明内容
鉴于上述问题,本申请提出了一种活体检测方法、装置、电子设备及存储介质,能解决上述问题。
第一方面,本申请实施例提供了一种活体检测方法,包括:获取第一传感器对待识别人脸采集而得到的第一目标图像,获取第二传感器对该待识别人脸采集而得到的第二目标图像;利用预先训练的深度生成网络,从第一目标图像及第二目标图像中提取目标深度信息;通过预先训练的活体检测模型对目标深度信息进行检测,得到待识别人脸的活体检测结果,其中,活体检测模型由从样本数据提取的深度信息训练而得到,样本数据包括在至少两种光照环境下,第一传感器采集的第一样本图像和第二传感器采集的第二样本图像,其中,第一样本图像和第二样本图像均包括不同材质的假体人脸。
第二方面,本申请实施例提供了一种活体检测系统的训练方法,活体检测系统包括深度生成网络和活体检测模型,该训练方法包括:获取在至少两种光照环境下第一传感器对样本人脸采集而得到的第一样本图像以及第二传感器对该样本人脸采集而得到的第二样本图像,其中,样本人脸包括不同材质的假体人脸;将第一样本图像及第二样本图像输入初始生成网络中以对初始生成网络进行训练,得到深度生成网络;利用深度生成网络,从第一样本图像及第二样本图像中提取样本人脸的深度信息;将样本人脸的深度信息输入神经网络模型中以对神经网络模型进行训练,得到活体检测模型。
第三方面,本申请实施例提供了一种活体检测装置,所述装置包括:图像获取模块、深度生成模块以及活体检测模块。其中,图像获取模块用于获取第一传感器对待识别人脸采集而得到的第一目标图像,获取第二传感器对该待识别人脸采集而得到的第二目标图像;深度生成模块用于利用预先训练的深度生成网络,从第一目标图像及第二目标图像中提取目标深度信息;活体检测模块用于通过预先训练的活体检测模型对目标深度信息进行检测,得到所述待识别人脸的活体检测结果,其中,活体检测模型由从样本数据提取的深度信息训练而得到,样本数据包括在至少两种光照环境下,第一传感器采集的第一样本图像和第二传感器采集的第二样本图像,其中,第一样本图像和第二样本图像均包括不同材质的假体人脸。
第四方面,本申请实施例提供了一种活体检测系统的训练装置,活体检测系统包括深度生成网络和活体检测模型,该训练装置包括:样本获取模块,用于获取在至少两种光照环境下第一传感器对样本人脸采集而得到的第一样本图像以及第二传感器对该样本人脸采集而得到的第二样本图像,其中,样本人脸包括不同材质的假体人脸;网络训练模块,用于将第一样本图像及第二样本图像输入初始生成网络中以对初始生成网络进行训练,得到深度生成网络;深度提取模块,用于利用深度生成网络,从第一样本图像及第二样本图像中提取样本人脸的深度信息;模型训练模块,用于将样本人脸的深度信息输入神经网络模型中以对神经网络模型进行训练,得到活体检测模型。
第五方面,本申请实施例提供了一种电子设备,包括:一个或多个处理器;存储器;一个或多个应用程序,其中,一个或多个应用程序被存储在存储器中并被配置为由一个或多个处理器执行,一个或多个应用程序配置用于执行上述活体检测方法或者活体检测系统的训练方法。
第六方面,本申请实施例提供了一种计算机可读存储介质,计算机可读存储介质中存储有程序代码,程序代码可被处理器调用执行上述活体检测方法或者活体检测系统的训练方法。
第七方面,本申请实施例提供了一种包含指令的计算机程序产品,其特征在于,计算机程序产品中存储有指令,当其在计算机上运行时,使得计算机实现上述活体检测方法或者活体检测系统的训练方法。
本申请可以获取两个传感器对同一待识别人脸分别采集得到的两张目标图像,利用预先训练的深度生成网络,基于该两张目标图像提取出目标深度信息(即待识别人脸的深度信息),进而利用预先训练的活体检测模型根据目标深度信息进行检测,得到待识别人脸的活体检测结果。其中,活体检测模型由样本数据提取的深度信息训练而得到,样本数据包括,在至少两种光照环境下,第一传感器采集的第一样本图像和第二传感器采集的第二样本图像,并且第一样本图像和第二样本图像均包括不同材质的假体人脸。也就是说,本申请的技术方案通过利用神经网络,能够根据同一待识别人脸的两张图像迅速获取该人脸的深度信息并根据该深度信息确定该待识别人脸的活体检测结果,进而实现高效且高准确率的活体检测。同时,本申请可以识别出不同光照环境下的假体人脸,从而使活体检测的准确率更高。
本申请的这些方面或其他方面在以下实施例的描述中会更加简明易懂。
附图简要说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示出了本申请一实施例提供的活体检测方法的一种应用环境示意图;
图2示出了本申请一实施例提供的活体检测方法的一种应用场景示意图；
图3示出了本申请一实施例提供的活体检测方法的流程示意图;
图4示出了本申请一实施例提供的第一传感器及第二传感器的成像示意图;
图5示出了本申请一实施例提供一种活体检测系统的处理流程示意图;
图6示出了本申请又一实施例提供的活体检测方法的流程示意图;
图7示出了本申请一实施例提供一种目标深度信息的提取过程的示意图;
图8示出了本申请一实施例提供一种中心卷积的处理过程的示意图;
图9示出了本申请一实施例提供的活体检测系统的训练方法的流程示意图;
图10示出了本申请一实施例提供的活体检测系统中深度生成网络的训练过程的流程示意图;
图11示出了本申请一实施例提供的立体匹配算法的示意图;
图12示出了本申请一实施例提供的活体检测系统中活体检测模型的训练过程的流程示意图;
图13示出了本申请一实施例提供的活体检测系统的训练装置的处理流程示意图;
图14示出了本申请一实施例提供的活体检测装置的模块框图;
图15示出了本申请一实施例提供的活体检测系统的训练装置的模块框图;
图16示出了本申请一实施例提供的电子设备的结构框图;
图17示出了本申请一实施例提供的计算机可读存储介质的结构框图。
实施本发明的方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
随着互联网产业的高速发展,近年来以机器学习与深度学习为标志性技术的人工智能技术在视频图像、语音识别、自然语音处理等相关领域得到了广泛应用,尤其在人脸识别中的应用越加广泛。人脸识别在人工智能和大数据驱动下,展现出了巨大的发展潜力,其应用场景不断拓展,由安防等公共领域向支付及验证的商业领域逐步落地。然而,人脸识别是一把双刃剑,在技术持续演进、应用不断推广的同时,也带来了数据泄露、个人隐私遭受侵犯以及被他人冒用身份等信息安全问题。随着人脸识别的广泛应用,人脸3D模型、人脸面具等假体人脸逐渐盛行,特别是基于假体人脸的人脸识别的对抗攻击技术的出现,对普通的人脸识别系统的准确性造成了较大的冲击,使得经过人脸活体检测得到的检测结果准确性较低,存在将假体人脸识别为活体人脸的风险,用户的信息安全等无法得到保障。
为解决上述问题,本申请发明人经过仔细研究后发现,可以采用预先训练的深度生成网络对两个传感器分别采集的两个图像提取目标深度信息,然后使用预先训练得到的活体检测模型对目标深度信息进行活体检测,可以得到更加准确的检测结果,同时不会增加硬件成本。
为了更好理解本申请实施例提供的一种活体检测方法及装置、活体检测系统的训练方法及装置、电子设备及存储介质,下面先对适用于本申请实施例的应用环境进行描述。
请参阅图1,图1示出了本申请一实施例提供的活体检测方法的一种应用环境示意图。示例性地,本申请实施例提供的活体检测方法以及活体检测系统的训练方法可以应用于电子设备。可选地,电子设备例如可以是如图1中所示的服务器110,服务器110可以通过网络与图像采集设备120相连。其中,网络是用以在服务器110和图像采集设备120之间提供通信链路的介质。网络可以包括各种连接类型,例如有线通信链路、无线通信链路等等,本申请实施例对此不作限制。
在一些实施例中,图像采集设备120可以包括第一传感器和第二传感器。在进行人脸识别时,可以通过第一传感器和第二传感器分别采集用户的人脸图像,然后可以通过网络向服务器110发送采集得到的人脸图像。服务器接收到这些人脸图像后,可以通过本申请实施例所述的活体检测方法,根据这些人脸图像对用户进行活体检测。示例性地,这些人脸图像可以包括第一传感器对用户采集而得到的第一目标图像,以及第二传感器对用户采集而得到的第二目标图像。
应该理解,图1中的服务器110、网络和图像采集设备120仅仅是示意性的。根据实现需要,可以具有任意数目的服务器、网络和图像采集设备。示例性地,服务器110可以是物理服务器,也可以是由多个服务器组成的服务器集群等,图像采集设备120可以是手机、平板、相机、笔记本电脑等设备。可以理解的是,本申请的实施例还可以允许多台图像采集设备120同时接入服务器110。
可选地,在另一些实施例中,如图2所示,电子设备也可以是智能手机、平板、笔记本电脑等等。在一些实施方式中,图像采集设备120可以集成于电子设备中,例如智能手机、平板、笔记本电脑等电子设备可以搭载有两个传感器。在进行人脸识别时,电子设备可以通过这两个传感器采集用户的人脸图像,然后在本地根据采集得到的人脸图像进行活体检测。可选地,若根据采集得到的人脸图像检测出用户为活体,即活体检测通过。此时,可以继续对用户进行进一步的身份验证,还可以在电子设备的显示界面上同步显示采集得到的人脸图像以及活体检测的检测结果。
上述应用环境仅为方便理解所作的示例,可以理解的是,本申请实施例不局限于上述应用环境。
下面将通过具体实施例对本申请提供的活体检测方法及装置、活体检测系统的训练方法及装置,以及电子设备及存储介质进行详细说明。
图3示出了本申请一实施例提供的活体检测方法的流程示意图。如图3所示,该活体检测方法具体可以包括如下步骤:
步骤S310:获取第一传感器对待识别人脸采集而得到的第一目标图像,获取第二传感器对该待识别人脸采集而得到的第二目标图像。
在安防、人脸支付等应用场景下,通常会实时采集用户的人脸图像,然后对该人脸图像进行识别,根据该人脸图像中的人脸特征来验证用户的身份。通常情况下,为了保证用户信息的安全性,在进行身份验证前,需要通过人脸活体检测来确定当前的人脸图像中的用户是否为真人,以防止他人通过照片、人脸面具等方式冒用用户的身份。
在人脸活体检测中,通过对人脸图像进行检测,可以识别出该人脸图像是对真人采集得到的(对应检测结果为活体),还是对假体人脸采集得到的(对应检测结果为假体)。当检测结果为活体时,则可通过活体检测,继续执行其他的处理流程,例如可以对用户进行身份验证等。
可以理解的是,活体检测中的待识别人脸可以是人脸识别时的识别对象,例如安防或人脸支付等应用场景下靠近图像采集设备接受识别的人脸等。识别对象可能是真实的用户的人脸,也可能是被伪造的假体人脸。在一些实施例中,假体人脸可能是人脸照片、人脸面具或打印的纸质人脸等等。可选地,假体人脸也可能是虚拟人脸,例如根据真实人脸生成的虚拟形象等等。
在本申请的实施例中,可以使用第一传感器及第二传感器采集待识别人脸的人脸图像。如图4所示,第一传感器430与第二传感器440可以相隔较近的距离。通过第一传感器430及第二传感器440对待识别人脸410进行图像采集,可以得到由第一传感器430采集的第一目标图像450以及第二传感器440采集的第二目标图像460。也就是说,第一目标图像450和第二目标图像460是在不同位置上针对同一待识别人脸采集得 到的人脸图像。优选地,为了方便在后续的活体检测过程中对第一目标图像和第二目标图像进行处理,第一目标图像和第二目标图像可以具有相同的图像尺寸。
特别地,为了对待识别人脸采集得到更加标准而可靠的人脸图像,图像采集时可以将第一传感器430及第二传感器440设置在待识别人脸的正前方。例如,第一传感器和第二传感器均与待识别人脸的双眼的中心点位于同一水平高度。可选地,通过第一传感器430和第二传感器440分别采集得到的第一目标图像450和第二目标图像460之间的差异(视差),可以确定第一传感器及第二传感器与待识别人脸的距离。例如,可以确定第一传感器与待识别人脸的双眼的中心点之间的距离以及第二传感器与待识别人脸的双眼的中心点之间的距离。
通过第一传感器及第二传感器分别对同一个待识别人脸进行图像采集,可以使得到的第一目标图像及第二目标图像均包含待识别人脸的人脸图像。并且,相对于单个传感器,使用两个传感器分别进行图像采集,可以对待识别人脸采集得到更加详细的图像信息,进而在活体检测时利用这些图像信息可以得到更加准确的检测结果。例如,这些图像信息可以是具备更高精度的纹理信息、光照信息等,利用这些纹理信息、光照信息,能够检测出由特殊材质制成的人脸面具等假体人脸。
在一些实施方式中,第一传感器及第二传感器分别采集得到包含待识别人脸的第一目标图像及第二目标图像后,可以将这些图像传输到电子设备中进行活体检测。
可选地,第一传感器和第二传感器均可以是可见光摄像头,因此第一目标图像及第二目标图像可以为可见光图像(可以是RGB图像或者灰度图像)。
可以理解的是,在一些典型的实施例中,第一传感器与第二传感器可以相隔较近的距离,例如1分米。可选地,第一传感器与待识别人脸之间的距离可以与第二传感器与待识别人脸之间的距离一致。可选地,第一传感器对于待识别人脸的拍摄角度也可以与第二传感器对于待识别人脸的拍摄角度一致。
可选地,第一传感器与第二传感器可以设于同一双目立体传感器上。例如,第一传感器可以是该双目立体传感器的左目传感器,第二传感器可以是该双目立体传感器的右目传感器。
步骤S320:利用预先训练的深度生成网络,从第一目标图像及第二目标图像中提取目标深度信息。
在一些实施方式中,通过第一传感器、第二传感器分别采集得到的第一目标图像和第二目标图像之间的差异(视差),可以确定两个传感器与待识别人脸的距离信息,进而可以将该距离信息作为待识别人脸的深度信息。示例性地,可以根据立体匹配算法计算得到待识别人脸的深度信息。但是,采用立体匹配算法计算深度信息将消耗较多的资源和时间,可能会导致检测效率低下,无法适用于需要频繁进行活体检测的应用场景。
如图5所示,在本申请的实施例中,可以通过预先训练得到的深度生成网络从第一目标图像和第二目标图像中提取目标深度信息。目标深度信息同样可以表示第一传感器及第二传感器与待识别人脸的距离信息。可选地,深度生成网络可以采用轻量级生成器,其算法复杂度低于立体匹配算法的算法复杂度,利用较少的资源即可得到深度信息,从而能够提高活体检测的效率。
利用从第一目标图像和第二目标图像提取的目标深度信息对待识别人脸进行活体检测,可以区分该待识别人脸为活体还是假体。可以理解的是,真人的人脸图像与假体人脸的人脸图像在目标深度信息上呈现出不同的特征。在这里,可以将真人的活体人脸对应的深度特征确定为活体特征,将假体人脸对应的深度特征确定为假体特征。通过将目标深度信息与活体特征和/或假体特征进行比较,可以得出待识别人脸的活体检测结果。
步骤S330:通过预先训练的活体检测模型对目标深度信息进行检测,得到待识别人脸的活体检测结果。
其中,活体检测模型由样本数据提取的深度信息训练而得到,样本数据包括在至少两种光照环境下,第一传感器采集的第一样本图像和第二传感器采集的第二样本图像,其中,第一样本图像和第二样本图像均包括不同材质的假体人脸。
在本申请的实施例中,可以将前述步骤中得到的与待识别人脸相对应的目标深度信息输入到预先训练的活体检测模型中,以对待识别人脸进行活体检测。请参考图5,接着活体检测模型可以基于目标深度信息输出待识别人脸的检测结果。可以理解的是,对待识别人脸进行活体检测的检测结果可以是活体或假体中的任 一种。其中,检测结果为活体可以表示活体检测模型确认待识别人脸为真人的人脸;检测结果为假体可以表示活体检测模型确认待识别人脸有可能不是真人的人脸,而可能是伪装的假体人脸。
可选地,活体检测模型可以根据从样本数据提取的深度信息而训练得到。其中,样本数据可以由第一传感器及第二传感器共同在不同的光照环境下对样本人脸进行图像采集而得到。也就是说,样本数据可以包括在至少两种光照环境下由第一传感器对样本人脸采集得到的第一样本图像以及由第二传感器对同一个样本人脸采集得到的第二样本图像。
需要说明的是,不同的光照环境可以包括强光、弱光、阴阳光等光照环境,也可以包括色温不同的多种光照环境。因此,在不同的光照环境下由第一传感器及第二传感器对样本人脸进行图像采集,可以得到对应于多种光照环境的多组样本数据,其中,每组样本数据包括第一样本图像和第二样本图像。
例如,可以分别在强光、弱光两种环境下对同一个样本人脸进行采集而得到两组样本数据,其中一组样本数据对应于强光环境,另一组样本数据对应于弱光环境。这样,通过使用在不同光照环境下对样本人脸进行图像采集而得到的样本数据对活体检测模型进行训练,可以使活体检测模型适用于多种光照环境中的活体检测需求,在不同的光照条件下均可以得到准确的活体检测结果。
可选地,为了使活体检测模型能够识别出假体人脸,训练时使用的第一样本图像和第二样本图像可以包括不同材质的假体人脸。也就是说,用于样本图像拍摄的样本人脸可以包括不同材质的假体人脸。例如,样本人脸可以是纸质照片、纸质人脸面具、塑料人脸面具或树脂制成的头套等各种假体人脸,因此样本数据可以包括使用第一传感器及第二传感器对纸质照片、纸质人脸面具、塑料人脸面具或树脂制成的头套等各种假体人脸采集得到的人脸图像。
可以理解的是,为使活体检测模型可以识别出真实用户的人脸图像,训练时使用的第一样本图像和第二样本图像也可以包括真实用户的人脸图像。
在一些实施方式中,活体检测模型可以将目标深度信息与真人对应的活体特征和/或假体人脸对应的假体特征相比较,进而得到待识别人脸的检测结果。示例性地,由于五官及皮肤纹理的存在,真人的面部是立体的,因此从真人的人脸图像中提取的目标深度信息是多样的;而假体人脸通常是平滑的,因此从假体人脸的人脸图像中提取的目标深度信息通常较为单一。因此,若从待识别人脸的第一目标图像和第二目标图像中提取的目标深度信息较为多样,则可将该待识别人脸确定为活体(即真人人脸)。
在另一些实施方式中,活体检测模型可以基于目标深度信息对待识别人脸进行打分。其中,在活体检测模型中可以基于目标深度信息计算活体检测分值,当活体检测分值满足预设检测阈值时,可以确定待识别人脸为活体,当活体检测分值满足预设假体阈值时,可以确定待识别人脸为假体。
具体地,为了判断待识别人脸是否为真实用户的人脸,可以基于目标深度信息,计算待识别人脸的目标检测分值,然后将目标检测分值与预设检测阈值进行比较以判断目标检测分值是否满足预设条件,若目标检测分值满足预设条件,则将待识别人脸确定为活体。
在一些实施例中,活体检测模型可以将目标深度信息与活体对应的深度特征(即活体特征)进行比较,通过计算目标深度信息与活体特征之间的相似度,得到目标检测分值。例如,目标深度信息与活体特征之间的相似度越高,则目标检测分值越高,目标深度信息与活体特征之间的相似度越低,则目标检测分值越低。进一步地,目标检测分值可以为活体检测模型将待识别人脸确定为活体的概率。例如,可以通过softmax模型将目标深度信息与活体特征之间的相似度进行归一化而得到该概率。
预设检测阈值可以是将待识别人脸检测为活体的检测阈值。在这里,预设检测阈值可以是预先设定的,或者,预设检测阈值也可以是在活体检测模型的训练过程中确定的。示例性地,活体检测模型得到的目标检测分值越高,则表示待识别人脸越接近于真人的人脸。因此,作为一种示例,当目标检测分值大于预设检测阈值时,可以确定待识别人脸为活体。作为另一种示例,当目标检测分值小于预设检测阈值时,可以确定待识别人脸为假体。
可选地,在一些场景下,为了判断待识别人脸是否为假体人脸,可以基于目标深度信息,计算将待识别人脸的目标假体分值,然后将目标假体分值与预设假体阈值进行比较以判断目标假体分值是否满足预设条件,若目标假体分值满足预设条件,则确定待识别人脸为假体。可以理解的是,目标假体分值可以通过计算 目标深度信息与假体对应的深度特征(即假体特征)之间的相似度而得到。进一步地,目标假体分值可以为活体检测模型将待识别人脸确定为假体的概率。
预设假体阈值可以是将待识别人脸检测为假体的检测阈值。在这里,预设假体阈值可以是预先设定的,或者,预设假体阈值也可以是在活体检测模型的训练过程中确定的。
作为一种示例,活体检测模型得到的目标假体分值越高,则表示待识别人脸越接近于假体人脸。因此,当目标假体分值大于预设假体阈值时,可以确定待识别人脸为假体。
综上所述,本申请提供的活体检测方法,可以获取第一传感器对待识别人脸采集得到的第一目标图像以及第二传感器对同一个待识别人脸采集得到的第二目标图像,利用预先训练的深度生成网络从第一目标图像和第二目标图像中提取目标深度信息,接着通过预先训练的活体检测模型对目标深度信息进行检测从而得到待识别人脸的活体检测结果。其中,活体检测模型由从样本数据提取的深度信息训练得到。基于此,本申请提供的活体检测方法通过深度生成网络从第一传感器和第二传感器采集的两个图像中提取目标深度信息,再利用活体检测模型对目标深度信息进行检测从而得到活体检测结果,可以大大减少计算资源的消耗,缩短计算时间,有效提高检测效率,显著提高活体检测的实时性,尤其适用于实际的活体检测场景。
此外,本申请提供的活体检测方法中采用的样本数据包括在至少两种光照环境下第一传感器采集的第一样本图像和第二传感器采集的第二样本图像,并且第一样本图像和第二样本图像均包括不同材质的假体人脸,因此本申请提供的活体检测方法能够识别出不同光照环境下不同材质的假体人脸,从而使活体检测的准确率更高。
在一些实施方式中,可选地,本实施例在上述实施例的基础上,提供一种活体检测方法,可以将第一目标图像及第二目标图像进行融合,得到目标融合图像,然后将目标融合图像输入深度生成网络,在深度生成网络中对目标融合图像处理得到目标深度信息。这样,深度生成网络提取的目标深度信息可以更加准确地反映待识别人脸的真实特征。
请参阅图6,其示出了本申请又一实施例提供的活体检测方法的流程示意图。具体可以包括如下步骤:
步骤S610:将第一目标图像及第二目标图像等比例缩小后进行融合,得到目标融合图像。
若直接由第一目标图像及第二目标图像经过深度生成网络而生成的目标深度信息,目标深度信息的尺寸可能会变大,使得目标深度信息产生失真,无法准确地反映出待识别人脸的真实特征。因此,在本申请的实施例中,可以将第一目标图像及第二目标图像等比例缩小后再进行融合得到目标融合图像,然后将目标融合图像输入到深度生成网络中。
作为一种示例,请参考图7,可以采用下采样的方式同时对第一目标图像及第二目标图像进行等比例缩小,然后对等比例缩小后的两幅图像进行图像融合,得到目标融合图像。
示例性地,第一目标图像及第二目标图像的图像尺寸可以是相同的。例如,二者的分辨率均为112×112×3。经过对第一目标图像及第二目标图像进行下采样,可以生成两幅特征图,然后再对两幅特征图进行融合,可以得到例如分辨率为28×28×48的目标融合图像。
可选地,可以将第一目标图像XL及第二目标图像XR输入到包含FeatherNet与中心差分卷积模块的F-CDCN网络浅层特征提取器中进行处理,得到经过下采样后生成的两幅特征图g(XL)及g(XR),然后采用特征堆叠的方式对这两幅特征图g(XL)及g(XR)进行图像融合,得到的目标融合图像。在这里,目标融合图像可以表示为Z=f([g L(XL,θ L),g L(XR,θ R)];θ F),其中,f(.)表示特征堆叠的过程。
在F-CDCN网络中,网络的骨架采用轻量级网络FeatherNetV2的结构进行搭建,网络中所有的卷积均采用中心差分卷积的方式替代。中心差分卷积的处理方式可以如图8所示。
在本申请一些典型的实施例中,可以将中心差分卷积表示为:
Figure PCTCN2022110111-appb-000001
其中,y(.)为输出特征图,x(.)为输入的特征图,P0表示输入特征图和输出特征图的当前位置,Pn表示局部感受野R的位置,θ为θ∈[0,1]的超参数,可以用于衡量不同语义信息的权重。
在活体检测的过程中,通过使用中心差分卷积的方式对图像进行特征提取,能够从人脸图像中提取到更加丰富的语义信息,进而使活体检测的检测结果更加准确。
步骤S620:将目标融合图像输入深度生成网络,在深度生成网络中对目标融合图像处理得到目标深度信息。
具体地,可以将目标融合图像输入到深度生成网络中以生成目标深度信息。进一步地,深度生成网络可以是预先训练得到的轻量级生成器。可选地,该深度生成网络的算法复杂度可以小于立体匹配算法的算法复杂度。在该轻量级生成器G(.)中,可以对目标融合图像进行双线性上采样,接着经过Sigmoid激活函数的处理,最终生成目标深度信息。其中,目标深度信息可以表示为
Figure PCTCN2022110111-appb-000002
可选地,生成的目标深度信息可以是以伪深度图的形式呈现,伪深度图的分辨率可以为56×56×1。
请再次参考图7,将第一目标图像及第二目标图像等比例缩小、融合以及使用深度生成网络得到目标深度信息等过程可以统一在NNB网络中进行处理。也就是说,在本申请的实施例中,直接将第一目标图像及第二目标图像输入NNB网络中即可得到目标深度信息。这样模块化的处理方式可以让活体检测中的图像处理流程更加简洁。
可以理解的是,通过将第一目标图像及第二目标图像等比例缩小后进行融合,得到目标融合图像,然后将目标融合图像输入深度生成网络,在深度生成网络中对目标融合图像处理得到目标深度信息,这样使用深度生成网络对第一传感器、第二传感器采集得到的两个图像进行处理而得出的目标深度信息可以更加准确地反映待识别人脸的真实特征,进而可以使活体检测的检测结果更加真实可靠。
在本申请的一些实施方式中,可选地,在上述实施例的基础上,在使用预先训练的活体检测模型对目标深度信息进行检测得到待识别人脸的活体检测结果之前,可以通过至少两种光照环境下采集得到的样本数据对深度生成网络及活体检测模型进行训练。
请参阅图9,其示出了本申请一实施例提供的活体检测系统的训练方法的流程示意图。在这里,活体检测系统可以包括深度生成网络和活体检测模型。具体地,该训练方法可以包括如下步骤:
步骤S910:获取在至少两种光照环境下第一传感器对样本人脸进行采集而得到的第一样本图像以及第二传感器对该样本人脸进行采集而得到的第二样本图像。
其中,样本人脸包括不同材质的假体人脸。
在执行深度生成网络及活体检测模型的训练过程之前,可以预先采集训练用的样本数据。在本申请的实施例中,可以使用第一传感器及第二传感器在不同的光照环境下对同一样本人脸进行采集,从而得到第一传感器对样本人脸采集而得到的第一样本图像以及第二传感器对样本人脸采集而得到的第二样本图像作为训练用的样本数据。可选地,不同的光照环境可以包括强光、弱光、阴阳光等两种或两种以上的光照环境,也可以包括色温不同的多种光照环境,本申请实施例对此不作限制。
可选地,样本数据可以包括第一传感器及第二传感器在不同的光照环境下对样本人脸进行图像采集而得到的多个第一样本图像和多个第二样本图像。示例性地,对于样本人脸x1,可以在强光、弱光、阴阳光等至少两种光照环境下,分别对样本人脸x1进行图像采集得到多组样本数据,样本人脸x1的每组样本数据可以对应于一种光照环境。例如,对于样本人脸x1可以采集得到对应于强光环境的第一样本数据、对应于弱光环境的第二样本数据、对应于阴阳光环境的第三样本数据等等。其中,第一样本数据可以包括强光下对样本人脸x1采集得到的第一样本图像
Figure PCTCN2022110111-appb-000003
和样本人脸x1的第二样本图像
Figure PCTCN2022110111-appb-000004
第二样本数据可以包括弱光下对样本人脸x1采集得到的第一样本图像
Figure PCTCN2022110111-appb-000005
和样本人脸x1的第二样本图像
Figure PCTCN2022110111-appb-000006
以此类推,同样可以得到第三样本数据等其他至少一组样本数据。
可选地,样本数据中可以包括多个样本人脸的图像。此外,样本人脸可以包括不同材质的假体人脸,还可以包括多个真实用户的人脸。
由此,可以使样本数据更加多样化,使得训练得到的活体检测模型可以在不同光照环境下对多种人脸进行检测。
步骤S920:将第一样本图像及第二样本图像输入初始生成网络中以对初始生成网络进行训练,得到深度生成网络。
在一些实施方式中,可以使用第一样本图像及第二样本图像对预先构建的初始生成网络进行训练,以得到深度生成网络。
可选地,在一些实施例中,可以将第一样本图像及第二样本图像等比例缩小后进行融合,得到样本融合图像,然后,将样本融合图像输入初始生成网络进行训练,得到深度生成网络。与前述实施例中得到目标融合图像的过程类似地,可以采用下采样的方式同时对第一样本图像及第二样本图像进行等比例缩小,然后对等比例缩小后的两幅图像进行图像融合,得到样本融合图像。可选地,可以将第一样本图像及第二样本图像输入到包括FeatherNet与中心差分卷积模块的图像处理单元中进行处理,得到经过下采样后生成的两幅特征图。等比例缩小及融合的具体过程可以参考前述实施例中的对应内容,本申请实施例再此不在赘述。
在一些实施方式中,如图10所示,将第一样本图像及第二样本图像输入初始生成网络中以对初始生成网络进行训练得到深度生成网络的具体训练过程,可以包括如下步骤:
步骤S1010:基于第一样本图像及第二样本图像,使用立体匹配算法计算得到初始深度信息。
在本申请实施例的模型训练过程中,可以将立体匹配算法计算的初始深度信息作为监督信息来训练深度生成网络。
在一些实施方式中,第一传感器与样本人脸之间的距离与第二传感器与样本人脸之间的距离一致。可选地,第一传感器与样本人脸之间的拍摄角度也可以与第二传感器与样本人脸之间的拍摄角度一致。因此,在立体匹配算法中,可以根据第一传感器与第二传感器的固有参数以及第一传感器与第二传感器之间的视差计算得到样本人脸的初始深度信息,其中,初始深度信息可以表示第一传感器及第二传感器与样本人脸之间的直线距离。例如,初始深度信息可以包括第一传感器与第二传感器的基线中点到样本人脸上各空间点的距离信息。
如图11所示,Ol为第一传感器,Or为第一传感器,B为基线,f为焦距,P为待测的样本人脸所处的位置(例如可以是样本人脸上的一个空间点),可以将P称为目标点,D为目标点P到第一传感器及第二传感器(例如可以是两个传感器之间的基线的中点)的直线距离,xl、xr为目标点P在两个成像平面上呈现的位置,xol、xor分别为第一传感器和第二传感器的光轴与两个成像平面的交点,xol、xor可以称为像主点。若第一传感器及第二传感器的基线统一,则由相似三角原理可得:
Figure PCTCN2022110111-appb-000007
由上式可推出:
Figure PCTCN2022110111-appb-000008
基线B=B1+B2,并且,两个成像平面上的两个像素点在X轴上的差值即为视差,令视差d=xl-xr,则有:
Figure PCTCN2022110111-appb-000009
可以得到初始深度信息:
Figure PCTCN2022110111-appb-000010
在一些典型的实施例中,可以将每组样本数据(包括第一样本图像及第二样本图像)以及对应的初始深度信息组成一份训练数据。进一步地,可以将所有训练数据共同组成训练集。通过将该训练集输入到初始生成网络及神经网络模型中进行训练,可以得到深度生成网络及活体检测模型。该训练集例如可以表示为:
Figure PCTCN2022110111-appb-000011
其中,x l为第一传感器采集得到第一样本图像,x r为第二传感器采集得到第二样本图像,二者的分辨率均可以为112×112×3,二者作为网络的输入,b为经过立体匹配算法得到的初始深度信息,初始深度信息的分辨率可以设置为56×56×1。在这里,b可以作为深度标签。y为真、假两种类别的分类标签,用于表示相应的训练数据所包括的样本人脸为活体还是假体(示例性地,“真”的分类标签可以为“1”,表示活体;“假”的分类标签可以为“0”,表示假体)。其中,n为训练集中训练数据的数量。
可以理解的是,在一些典型的实施方式中,第一传感器与第二传感器可以共同属于一个双目立体传感器。其中,第一传感器可以是该双目立体传感器的左目传感器,第二传感器可以是该双目立体传感器的右目传感器。
步骤S1020:将第一样本图像及第二样本图像输入初始生成网络,并利用初始深度信息作为监督信息,对初始生成网络进行训练,得到深度生成网络,以使通过深度生成网络从第一样本图像及第二样本图像中提取的深度信息与初始深度信息之间的差异满足预设差异条件。
在一些实施方式中,可以使用训练集中每份训练数据中的第一样本图像和第二样本图像来训练初始生成网络。
在本申请的实施例中,可以将初始深度信息作为监督信息来训练初始生成网络。在一些实施方式中,可以在初始生成网络中构建损失函数来表示深度生成网络从第一样本图像及第二样本图像中提取的深度信息与初始深度信息之间的差异。通过不断调整初始生成网络的网络参数,并在每经过预设次数的调整后重新计算该损失函数,当损失函数满足预设条件(例如,损失函数的值大于预设差异阈值)时,可以将此时的初始生成网络确定为深度生成网络。
具体地,可以针对初始生成网络构建两组损失函数,一组损失函数为表示深度信息
Figure PCTCN2022110111-appb-000012
与初始深度信息B i(即训练集中的深度标签b)之间差异的交叉熵,如L1 NNB所示。另一组损失函数为相对深度损失,定义如L2 NNB所示。其中,i用于表示训练集中每份训练数据的序号。
Figure PCTCN2022110111-appb-000013
Figure PCTCN2022110111-appb-000014
其中,K contrast为一组卷积核,例如可以定义为:
Figure PCTCN2022110111-appb-000015
其中,
Figure PCTCN2022110111-appb-000016
为深度可分离卷积,j为矩阵中数字1围绕-1的位置信息。在进行相对深度损失计算时,首先需要将深度标签和深度信息通过张量的广播机制扩张到8个通道。该相对深度损失的目的是了解每个像素形成的规律,从而对当前像素到相邻像素之间的对比度进行约束。
由此,通过初始深度信息作为监督信息来训练深度生成网络,可以使深度生成网络得到的深度信息更加准确地反映出样本人脸的真实特征,从而在进行人脸活体检测时,能够得到准确而可靠的检测结果。此外,实际的活体检测场景下,采用通过深度生成网络提取目标深度信息的方式替代通过立体匹配算法计算初始深度信息的方式,可以大大减少计算资源的消耗,缩短计算时间,有效提高检测效率,从而提高活体检测的实时性。
步骤S930:利用深度生成网络,从第一样本图像及第二样本图像中提取样本人脸的深度信息。
在一些实施方式中,深度生成网络的训练过程与活体检测模型的训练过程可以同步进行,也就是说,可以利用训练集中的训练数据,同步进行深度生成网络的训练和活体检测网络的训练。
因此,在一些实施例中,可以在每次调整初始生成网络的网络参数后,使用此时的初始生成网络从第一样本图像和第二样本图像中提取深度信息,并将该深度信息输入到神经网络模型中以对神经网络模型进行训练。也就是说,初始生成网络的训练迭代过程与神经网络模型的训练迭代过程可以嵌套在一起,共同趋于收敛。应当理解,此时的初始生成网络可能还未达到训练目标,因此该深度信息可能不是最佳的深度信息,与初始深度信息间还存在较大的差异。此时,可以将每次调整过程中生成的深度信息输入到预先构建的神经网络模型中进行训练。在两个训练迭代过程均完成后,确定当前的初始生成网络和神经网络模型为深度生成网络和活体检测模型。
步骤S940:将样本人脸的深度信息输入神经网络模型中以对神经网络模型进行训练,得到活体检测模型。
具体地,可以在神经网络模型中对深度信息进行检测,得到样本人脸的活体检测结果。可选地,可以对比检测结果与预先对样本人脸进行标注的目标标签,通过对比结果来训练神经网络模型,以使通过活体检测模型得到检测结果与目标标签之间的检测差异可以满足预设检测条件。
在本申请的一些实施方式中,如图12所示,将深度信息输入神经网络模型中进行训练得到活体检测模型的具体训练过程可以包括如下步骤:
步骤S1210:将样本人脸的深度信息输入神经网络模型,得到样本人脸的活体检测分值。
其中,活体检测分值为神经网络模型将该样本人脸的分类标签确定为预先标注的目标标签的概率。
在本实施例所述的神经网络模型中,可以基于深度信息对样本人脸进行打分。其中,经过打分可以得到样本人脸的活体检测分值。由于在训练集中可以为每个样本人脸打上分类标签,因此,在训练过程中,活体检测分值可以是根据深度信息将样本人脸的分类标签确定为预先标注的分类标签(即目标标签)的概率。示例性地,可以通过计算深度信息与预先标注的分类标签所表征的深度特征之间的相似度,得到活体检测分值。
例如,对于训练集中的某份样本数据,若对该样本数据包含的样本人脸预先标注的目标标签为表征活体的分类标签(例如样本数据中的分类标签y为“1”),那么该样本人脸的活体检测分值则是神经网络模型中得出的,将该样本人脸确定为活体的概率。此时,活体检测分值例如可以通过softmax模型将深度信息与活体特征之间的相似度归一化后得到。
步骤S1220:基于活体检测分值确定检测误差。
在本申请的实施例中,在得到样本人脸的活体检测分值后,可以基于活体检测分值输出得到神经网络模型对样本人脸进行二分类而得到的分类标签,进一步地,可以计算该分类标签与预先标注的目标标签的检测误差。
步骤S1230:基于检测误差调整神经网络模型,得到活体检测模型,以使活体检测模型的检测误差满足预设误差条件。
可选地,还可以将满足预设误差条件的检测误差对应的活体检测分值确定为预设检测阈值。
进一步地,可以根据神经网络模型输出的分类标签与预先标注的目标标签构建活体检测的损失函数。具体地,可以采用FocalLoss的方式定义活体检测的损失函数L NNC,L NNC例如可以表示为
Figure PCTCN2022110111-appb-000017
其中,a t为自定义参数,
Figure PCTCN2022110111-appb-000018
为神经网络模型对第i个样本数据所包含的样本人脸输出的分类标签,Y i为第i个样本数据中的目标标签。通过活体检测的损失函数L NNC可以计算神经网络模型对训练集进行活体检测而得到检测误差。
示例性地,可以通过不断调整神经网络模型的模型参数,并在每经过预设次数的调整后重新计算活体检测的损失函数,最终得到使活体检测的损失函数满足预设条件(例如损失函数所表示的检测误差小于预设误差阈值)的神经网络模型,可以将此时的神经网络模型作为活体检测模型。此外,还可以将满足预设条件的检测误差对应的活体检测分值确定为预设检测阈值。
在一些典型的实施例中,通过神经网络模型输出得到的分类标签例如可以表示为
Figure PCTCN2022110111-appb-000019
其中,NNC(.)可以表示神经网络模型的输出。预先构建的神经网络模型可以是由瓶颈层、下采样层、流模块和全连接层组成的NNC网络。其中,深度信息可以伪深度图的形式呈现,则下采样层可以将伪深度图的分辨率采样至7×7大小。此外,全连接层可以由2个神经元和softmax激活函数组成。接着,再通过伪深度图对NNC网络进行训练。将伪深度图输入NNC网络之后,NNC网络会根据伪深度图对样本人脸进行二分类,对活体标签和假体标签分别计算得分,然后使用活体检测的损失函数L NNC计算预测的分类标签与真实的分类标签之间误差,通过调整NNC网络中各层的模型参数来优化NNC网络,减小分类误差,从而提升活体检测的准确率。
可选地,神经网络模型中的所有卷积同样可以采用中心差分卷积的方式替代。
在一些实施方式中,深度生成网络的训练过程与活体检测模型的训练过程可以同步进行,也就是说,可以统一使用目标函数来表示初始生成网络输出的深度信息与初始深度信息间的差异以及神经网络模型的检测误差。示例性地,目标函数可以表示为LOSS=L1 NNB+L2 NNB+L NNC
因此,在训练过程中,通过不断调整初始生成网络的网络参数以及神经网络模型的模型参数,并在每经过预设次数的调整后重新计算目标函数,最终可以得到使目标函数满足预设条件(例如小于目标阈值)的深度生成网络和活体检测网络。
由此,通过利用深度信息来计算样本人脸的活体检测分值,接着利用活体检测分值确定神经网络模型的检测误差,可以通过不断调整神经网络模型的模型参数,得到使检测误差满足预设误差条件的活体检测模型。此外,在调整模型参数的过程当中,还可以得到活体检测模型的预设检测阈值,因此,在使用活体检测模型对待识别人脸进行活体检测时,一旦确定目标检测分值满足预设条件(例如大于检测阈值),就可以确定该待识别人脸为活体,同时还可以将检测误差约束在合理的范围之内,达到提高活体检测的准确率的目的。
可选地,请参阅图13,其示出了本申请一实施例提供的活体检测方法中模型训练过程的流程示意图。
在本申请的一些典型的实施方式中,可以将第一传感器及第二传感器对样本人脸采集得到的第一样本图像(XL)和第二样本图像(XR)分别输入到结构相同的F-CDCN网络浅层特征提取器中进行提取特征。其中,特征提取器可以结合FeatherNet与中心差分卷积模块的功能,经过下采样等处理后,可以得到两幅特征图g(XL)及g(XR)。接着,在特征融合步骤中,可以采用特征堆叠的方式对g(XL)及g(XR)进行图像融合,得到样本融合图像Z。
进一步地,可以将样本融合图像Z输入到轻量级生成器。在轻量级生成器中,可以在初始深度信息的监督下,通过双线性上采样和Sigmoid激活函数,生成深度信息。需要说明的是,初始深度信息可以是通过立体匹配算法对第一样本图像(XL)和第二样本图像(XR)计算得到的第一传感器及第二传感器到样本人脸的距离。
可选地,深度信息可以是以伪深度图的形式呈现,初始深度信息可以是以深度图的形式呈现。其中,伪深度图的分辨率与深度图的分辨率一致,例如可以为56×56×1。
更进一步地,将深度信息输入到NNC网络中,可以在NNC网络中利用深度信息对样本人脸进行分类。其中,为了控制轻量级生成器及NNC网络输出的误差,可以在轻量级生成器中构建两组损失函数L1 NNB及L2 NNB,在NNC网络中构建一组损失函数L NNC
在一些实施方式中,可以使用目标函数统一表示活体检测系统整体的训练过程中产生的误差,其中,目标函数例如可以表示为LOSS=L1 NNB+L2 NNB+L NNC。通过不断调整轻量级生成器的网络参数以及NNC网络的模型参数,并且每次调整时计算目标函数的值,最终可以得到误差较小的轻量级生成器及NNC网络。
在本申请实施例中,通过利用第一传感器及第二传感器在不同光照环境下采集的人脸图像对特征提取器、特征融合、轻量级生成器及NNC网络这四个算法模块进行训练,同时使用立体匹配算法得到的深度信息作为监督,可以对假体人脸替换真实人脸的过程中的遮挡区域、色彩风格、以及替换过程中眉部重影等问题进行解决。
可选地,在一些典型的实施方式中,第一传感器与第二传感器可以共同组成双目立体传感器。
可以理解的是,利用第一样本图像及第二样本图像对特征提取器、特征融合、轻量级生成器及NNC网络这四个算法模块进行训练的具体过程可以参考前述实施例中的对应过程,在此不再赘述。
请参阅图14,示出了本申请一实施例提供的活体检测装置的模块框图。具体地,该装置可以包括:图像获取模块1410、深度生成模块1420以及活体检测模块1430。
其中,图像获取模块1410,用于获取第一传感器对待识别人脸采集而得到的第一目标图像,以及第二传感器对该待识别人脸采集而得到的第二目标图像;深度生成模块1420,用于利用预先训练的深度生成网络,从所述第一目标图像及所述第二目标图像中提取目标深度信息;活体检测模块1430,用于通过预先训练的活体检测模型对目标深度信息进行检测,得到所述待识别人脸的活体检测结果。
可选地,活体检测模型可以由从样本数据提取的深度信息训练而得到。样本数据可以包括在至少两种光照环境下由第一传感器采集的第一样本图像和由第二传感器采集的第二样本图像。其中,第一样本图像和第二样本图像均包括不同材质的假体人脸。
在一些实施方式中,上述活体检测模块1430可以包括:分值计算模块,用于将目标深度信息输入活体检测模型,得到待识别人脸被检测为活体的目标检测分值;分值判断模块,用于当目标检测分值满足预设检测阈值时,将该待识别人脸确定为活体。
可选地,在一些实施例中,上述深度生成模块1420可以包括:图像融合模块,用于将第一目标图像及第二目标图像进行融合,得到目标融合图像;深度生成子模块,用于将目标融合图像输入深度生成网络,在深度生成网络中对目标融合图像处理得到目标深度信息。
可选地,在一些实施例中,上述图像融合模块还可以用于将第一目标图像和第二目标图像等比例缩小,并将等比例缩小后的两张图像进行融合。
请参阅图15,示出了本申请一实施例提供的活体检测系统的训练装置的模块框图。在这里,活体检测系统可以包括深度生成网络和活体检测模型。具体地,该装置可以包括:样本获取模块1510、网络训练模块1520、深度提取模块1530以及模型训练模块1540。
其中,样本获取模块1510,用于获取在至少两种光照环境下第一传感器对样本人脸进行采集而得到的第一样本图像以及第二传感器对该样本人脸进行采集而得到的第二样本图像,其中,样本人脸包括不同材质的假体人脸;网络训练模块1520,用于将第一样本图像及第二样本图像输入初始生成网络中以对初始生成网络进行训练,得到深度生成网络;深度提取模块1530,用于利用深度生成网络,从第一样本图像及第二样本图像中提取样本人脸的深度信息;模型训练模块1540,用于将深度信息输入预先构建的神经网络模型中以对神经网络模型进行训练,得到活体检测模型。
进一步地,在前述实施例的基础上,上述网络训练模块1520可以包括:立体匹配模块,用于根据第一样本图像及第二样本图像使用立体匹配算法计算得到初始深度信息;监督模块,用于将第一样本图像及第二样本图像输入初始生成网络,并利用初始深度信息作为监督信息,对初始生成网络进行训练,得到深度生成网络,以使通过深度生成网络从第一样本图像及第二样本图像中提取的深度信息与初始深度信息间的差异满足预设差异条件。
可选地,在前述实施例的基础上,上述网络训练模块1520还可以包括:样本融合模块,用于将第一样本图像及第二样本图像进行融合,得到样本融合图像;网络训练子模块,用于将样本融合图像输入初始生成网络进行训练,得到深度生成网络。
可选地,在一些实施例中,上述样本融合模块还可以用于将第一样本图像和第二样本图像等比例缩小,并将等比例缩小后的两张图像进行融合。
在一些实施方式中,上述模型训练模块1540可以包括:样本分值计算模块,用于将深度信息输入神经网络模型,得到样本人脸的活体检测分值,其中,活体检测分值为将样本人脸的分类标签确定为预先标注的目标标签的概率;误差确定模块,用于基于活体检测分值确定检测误差,并基于检测误差调整神经网络模型,得到所述活体检测模型,以使所述活体检测模型的检测误差满足预设误差条件。
可选地,上述模型训练模块1540还可以包括:模型训练子模块,用于将满足预设误差条件的检测误差对应的活体检测分值确定为预设检测阈值。
在一些典型的实施方式中,第一传感器与第二传感器共同组成双目立体传感器。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述装置中模块/单元/子单元/组件的具体工作过程和技术效果,可以参考前述方法实施例中的对应过程和描述,在此不再赘述。
在本申请所提供的几个实施例中,所显示或讨论的模块相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
请参阅图16,其示出了本申请一实施例提供的电子设备的结构框图。本实施例中的所述电子设备可以包括一个或多个如下部件:处理器1610、存储器1620以及一个或多个应用程序,其中一个或多个应用程序可以被存储在存储器1620中并被配置为由一个或多个处理器1610执行,一个或多个应用程序配置用于执行如前述方法实施例所描述的方法。
其中,电子设备可以为移动、便携式并执行无线通信的各种类型的计算机系统设备中的任何一种。具体的,电子设备可以为移动电话或智能电话(例如,基于iPhone TM,基于Android TM的电话)、便携式游戏设备(例如Nintendo DS TM,PlayStation Portable TM,Gameboy Advance TM,iPhone TM)、膝上型电 脑、PDA、便携式互联网设备、音乐播放器以及数据存储设备,其他手持设备以及诸如智能手表、智能手环、耳机、吊坠等,电子设备还可以为其他的可穿戴设备(例如,诸如电子眼镜、电子衣服、电子手镯、电子项链、电子纹身、电子设备或头戴式设备(HMD))。
电子设备还可以是多个电子设备中的任何一个,多个电子设备包括但不限于蜂窝电话、智能电话、智能手表、智能手环、其他无线通信设备、个人数字助理、音频播放器、其他媒体播放器、音乐记录器、录像机、照相机、其他媒体记录器、收音机、医疗设备、车辆运输仪器、计算器、可编程遥控器、寻呼机、膝上型计算机、台式计算机、打印机、上网本电脑、个人数字助理(PDA)、便携式多媒体播放器(PMP)、运动图像专家组(MPEG-1或MPEG-2)音频层3(MP3)播放器,便携式医疗设备以及数码相机及其组合。
在一些情况下,电子设备可以执行多种功能(例如,播放音乐,显示视频,存储图片以及接收和发送电话呼叫)。如果需要,电子设备可以是诸如蜂窝电话、媒体播放器、其他手持设备、腕表设备、吊坠设备、听筒设备或其他紧凑型便携式设备。
可选地,电子设备也可以是服务器,例如可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN(Content Delivery Network,内容分发网络)、以及大数据和人工智能平台等基础云计算服务的云服务器,还可以是提供人脸识别、自动驾驶、工业互联网服务、数据通信(如4G、5G等)等专门或平台服务器。
处理器1610可以包括一个或者多个处理核。处理器1610利用各种接口和线路连接整个电子设备内的各个部分,通过运行或执行存储在存储器1620内的指令、应用程序、代码集或指令集,以及调用存储在存储器1620内的数据,执行电子设备的各种功能和处理数据。可选地,处理器1610可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器1610可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器1610中,单独通过一块通信芯片进行实现。
存储器1620可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。存储器1620可用于存储指令、应用程序、代码、代码集或指令集。存储器1620可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等。存储数据区还可以电子设备在使用中所创建的数据(比如电话本、音视频数据、聊天记录数据)等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的电子设备的处理器1610、存储器1620的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
请参考图17,其示出了本申请一实施例提供的计算机可读存储介质的结构框图。该计算机可读存储介质1700中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。
计算机可读存储介质1700可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质1700包括非易失性计算机可读存储介质(non-transitory computer-readable storage medium)。计算机可读存储介质1700具有执行上述方法中的任何方法步骤的程序代码1710的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码1710可以例如以适当形式进行压缩。其中,计算机可读存储介质1700可以是如只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、SSD、带电可擦可编程只读存储器(Electrically Erasable Programmable read only memory,简称EEPROM)或快闪存储器(Flash Memory,简称Flash)等。
在一些实施例中,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述各方法实施例中的步骤。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、SSD、Flash)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例的方法。
本申请提供一种活体检测方法、装置、电子设备及存储介质。具体地,本申请可以获取第一传感器对待识别人脸采集得到的第一目标图像以及第二传感器对同一个待识别人脸采集得到的第二目标图像,利用预先训练的深度生成网络从第一目标图像和第二目标图像中提取目标深度信息,接着通过预先训练的活体检测模型对目标深度信息进行检测,从而得到待识别人脸的活体检测结果。其中,活体检测模型由从样本数据提取的深度信息训练而得到,样本数据包括,在至少两种光照环境下,第一传感器采集的第一样本图像和第二传感器采集的第二样本图像,并且第一样本图像和第二样本图像均包括不同材质的假体人脸。基于此,本申请通过深度生成网络从第一传感器和第二传感器采集的两个图像提取目标深度信息,再利用活体检测模型对目标深度信息进行检测从而得到活体检测结果,可以大大减少计算资源的消耗,缩短计算时间,有效提高检测效率,显著提高活体检测的实时性,尤其适用于实际的活体检测场景。同时,本申请提供的活体检测方法可以识别出不同光照环境下不同材质的假体人脸,从而使活体检测的准确率更高。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (20)

  1. 一种活体检测方法,其特征在于,包括:
    获取第一传感器对待识别人脸采集而得到的第一目标图像,获取第二传感器对所述待识别人脸采集而得到的第二目标图像;
    利用预先训练的深度生成网络,从所述第一目标图像及所述第二目标图像中提取目标深度信息;
    通过预先训练的活体检测模型对所述目标深度信息进行检测,得到所述待识别人脸的活体检测结果,其中,所述活体检测模型由从样本数据提取的深度信息训练而得到,所述样本数据包括在至少两种光照环境下,所述第一传感器采集的第一样本图像和所述第二传感器采集的第二样本图像,其中,所述第一样本图像和所述第二样本图像均包括不同材质的假体人脸。
  2. 根据权利要求1所述的活体检测方法,其特征在于,所述利用预先训练的深度生成网络,从所述第一目标图像及所述第二目标图像中提取目标深度信息,包括:
    将所述第一目标图像及所述第二目标图像进行融合,得到目标融合图像;
    将所述目标融合图像输入所述深度生成网络,在所述深度生成网络中对所述目标融合图像处理得到所述目标深度信息。
  3. 根据权利要求2所述的活体检测方法,其特征在于,所述将所述第一目标图像及所述第二目标图像进行融合,得到目标融合图像,包括:
    将所述第一目标图像及所述第二目标图像等比例缩小后进行融合,得到所述目标融合图像。
  4. 根据权利要求1所述的活体检测方法,其特征在于,
    所述第一传感器和所述第二传感器为双目立体传感器上的左目传感器和右目传感器。
  5. 根据权利要求1-4任一项所述的活体检测方法,其特征在于,
    所述第一目标图像及所述第二目标图像均为可见光图像。
  6. 根据权利要求1所述的活体检测方法,其特征在于,还包括:
    将所述第一样本图像及所述第二样本图像输入初始生成网络中以对所述初始生成网络进行训练,得到所述深度生成网络;
    利用所述深度生成网络,从所述第一样本图像及所述第二样本图像中提取所述深度信息;
    将所述深度信息输入神经网络模型中以对所述神经网络模型进行训练,得到所述活体检测模型。
  7. 根据权利要求6所述的活体检测方法,其特征在于,所述将所述第一样本图像及所述第二样本图像输入初始生成网络中以对所述初始生成网络进行训练,得到所述深度生成网络,包括:
    基于所述第一样本图像及所述第二样本图像,使用立体匹配算法计算得到初始深度信息;
    将所述第一样本图像及所述第二样本图像输入所述初始生成网络,并利用所述初始深度信息作为监督信息,对所述初始生成网络进行训练,得到所述深度生成网络,以使通过所述深度生成网络从所述第一样本图像及所述第二样本图像中提取的所述深度信息与所述初始深度信息之间的差异满足预设差异条件。
  8. 根据权利要求6所述的活体检测方法,其特征在于,所述将所述第一样本图像及所述第二样本图像输入初始生成网络中以对所述初始生成网络进行训练,得到所述深度生成网络,包括:
    将所述第一样本图像及所述第二样本图像等比例缩小后进行融合,得到样本融合图像;
    将所述样本融合图像输入所述初始生成网络以对所述初始生成网络进行训练,得到所述深度生成网络。
  9. 根据权利要求6所述的活体检测方法,其特征在于,所述将所述深度信息输入神经网络模型中以对所述神经网络模型进行训练,得到所述活体检测模型,包括:
    将所述深度信息输入所述神经网络模型,得到所述样本人脸的活体检测分值,其中,所述活体检测分值为所述神经网络模型将所述样本人脸的分类标签确定为预先标注的目标标签的概率;
    基于所述活体检测分值确定检测误差;
    基于所述检测误差调整所述神经网络模型,得到所述活体检测模型,以使所述活体检测模型的检测误差满足预设误差条件。
  10. 一种活体检测系统的训练方法,其特征在于,所述活体检测系统包括深度生成网络和活体检测模型,所述训练方法包括:
    获取在至少两种光照环境下第一传感器对样本人脸采集而得到的第一样本图像以及第二传感器对所述样本人脸采集而得到的第二样本图像,其中,所述样本人脸包括不同材质的假体人脸;
    将所述第一样本图像及所述第二样本图像输入初始生成网络中以对所述初始生成网络进行训练,得到所述深度生成网络;
    利用所述深度生成网络,从所述第一样本图像及所述第二样本图像中提取所述样本人脸的深度信息;
    将所述样本人脸的深度信息输入神经网络模型中以对所述神经网络模型进行训练,得到所述活体检测模型。
  11. 根据权利要求10所述的训练方法,其特征在于,
    所述第一样本图像和所述第二样本图像均为可见光图像。
  12. 根据权利要求10所述的训练方法,其特征在于,所述将所述第一样本图像及所述第二样本图像输入初始生成网络中以对所述初始生成网络进行训练,得到所述深度生成网络,包括:
    将所述第一样本图像及所述第二样本图像进行融合,得到样本融合图像;
    将所述样本融合图像输入所述初始生成网络以对所述初始生成网络进行训练,得到所述深度生成网络。
  13. 根据权利要求12所述的训练方法,其特征在于,所述将所述第一样本图像及所述第二样本图像进行融合,得到样本融合图像,包括:
    将所述第一样本图像及所述第二样本图像等比例缩小后进行融合,得到所述样本融合图像。
  14. 根据权利要求10所述的训练方法,其特征在于,所述将所述第一样本图像及所述第二样本图像输入初始生成网络中以对所述初始生成网络进行训练,得到所述深度生成网络,包括:
    基于所述第一样本图像及所述第二样本图像,使用立体匹配算法计算得到初始深度信息;
    将所述第一样本图像及所述第二样本图像输入所述初始生成网络,并以所述初始深度信息作为监督信息,对所述初始生成网络进行训练,得到所述深度生成网络,以使通过所述深度生成网络从所述第一样本图像及所述第二样本图像中提取的深度信息与所述初始深度信息之间的差异满足预设差异条件。
  15. 根据权利要求10所述的训练方法,其特征在于,所述将所述样本人脸的深度信息输入神经网络模型中以对所述神经网络模型进行训练,得到活体检测模型,包括:
    将所述样本人脸的深度信息输入所述神经网络模型,得到所述样本人脸的活体检测分值,其中,所述活体检测分值为所述神经网络模型将所述样本人脸的分类标签确定为预先标注的目标标签的概率;
    基于所述活体检测分值确定检测误差;
    基于所述检测误差调整所述神经网络模型,当所述检测误差满足预设误差条件时,确定当前的神经网络模型为所述活体检测模型。
  16. 一种活体检测装置,其特征在于,包括:
    图像获取模块,用于获取第一传感器对待识别人脸采集而得到的第一目标图像,获取第二传感器对所述待识别人脸采集而得到的第二目标图像;
    深度生成模块,用于利用预先训练的深度生成网络,从所述第一目标图像及所述第二目标图像中提取目标深度信息;
    活体检测模块,用于通过预先训练的活体检测模型对所述目标深度信息进行检测,得到所述待识别人脸的活体检测结果,其中,所述活体检测模型由从样本数据提取的深度信息训练而得到,所述样本数据包括在至少两种光照环境下,所述第一传感器采集的第一样本图像和所述第二传感器采集的第二样本图像,其中,所述第一样本图像和所述第二样本图像均包括不同材质的假体人脸。
  17. 一种活体检测系统的训练装置,其特征在于,所述活体检测系统包括深度生成网络和活体检测模型,所述训练装置包括:
    样本获取模块,用于获取在至少两种光照环境下第一传感器对样本人脸采集而得到的第一样本图像以及第二传感器对所述样本人脸采集而得到的第二样本图像,其中,所述样本人脸包括不同材质的假体人脸;
    网络训练模块,用于将所述第一样本图像及所述第二样本图像输入初始生成网络中以对所述初始生成网络进行训练,得到深度生成网络;
    深度提取模块,用于利用所述深度生成网络,从所述第一样本图像及所述第二样本图像中提取所述样本人脸的深度信息;
    模型训练模块,用于将所述样本人脸的深度信息输入神经网络模型中以对所述神经网络模型进行训练,得到活体检测模型。
  18. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    存储器;
    一个或多个程序,其中所述一个或多个程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于执行如权利要求1至9任一项所述的活体检测方法或者如权利要求10至15任一项所述的训练方法。
  19. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有程序代码,所述程序代码可被处理器调用执行如权利要求1至9任一项所述的活体检测方法或者如权利要求10至15任一项所述的训练方法。
  20. 一种包含指令的计算机程序产品,其特征在于,所述计算机程序产品中存储有指令,当其在计算机上运行时,使得计算机实现如权利要求1至9任一项所述的活体检测方法或者如权利要求10至15任一项所述的训练方法。
PCT/CN2022/110111 2021-12-01 2022-08-03 活体检测方法及装置、活体检测系统的训练方法及装置 WO2023098128A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/568,910 US20240282149A1 (en) 2021-12-01 2022-08-03 Liveness detection method and apparatus, and training method and apparatus for liveness detection system
EP22899948.8A EP4345777A1 (en) 2021-12-01 2022-08-03 Living body detection method and apparatus, and training method and apparatus for living body detection system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111454390.6 2021-12-01
CN202111454390.6A CN114333078B (zh) 2021-12-01 2021-12-01 活体检测方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023098128A1 true WO2023098128A1 (zh) 2023-06-08

Family

ID=81049548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/110111 WO2023098128A1 (zh) 2021-12-01 2022-08-03 活体检测方法及装置、活体检测系统的训练方法及装置

Country Status (4)

Country Link
US (1) US20240282149A1 (zh)
EP (1) EP4345777A1 (zh)
CN (1) CN114333078B (zh)
WO (1) WO2023098128A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576791A (zh) * 2024-01-17 2024-02-20 杭州魔点科技有限公司 基于生机线索和垂直领域大模型范式的活体检测方法
CN117688538A (zh) * 2023-12-13 2024-03-12 上海深感数字科技有限公司 一种基于数字身份安全防范的互动教育管理方法及系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114333078B (zh) * 2021-12-01 2024-07-23 马上消费金融股份有限公司 活体检测方法、装置、电子设备及存储介质
CN114841340B (zh) * 2022-04-22 2023-07-28 马上消费金融股份有限公司 深度伪造算法的识别方法、装置、电子设备及存储介质
CN114842399B (zh) * 2022-05-23 2023-07-25 马上消费金融股份有限公司 视频检测方法、视频检测模型的训练方法及装置
CN115116147B (zh) * 2022-06-06 2023-08-08 马上消费金融股份有限公司 图像识别、模型训练、活体检测方法及相关装置
JP7450668B2 (ja) 2022-06-30 2024-03-15 維沃移動通信有限公司 顔認識方法、装置、システム、電子機器および読み取り可能記憶媒体
CN115131572A (zh) * 2022-08-25 2022-09-30 深圳比特微电子科技有限公司 一种图像特征提取方法、装置和可读存储介质
CN116132084A (zh) * 2022-09-20 2023-05-16 马上消费金融股份有限公司 视频流处理方法、装置及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034102A (zh) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 人脸活体检测方法、装置、设备及存储介质
CN110765923A (zh) * 2019-10-18 2020-02-07 腾讯科技(深圳)有限公司 一种人脸活体检测方法、装置、设备及存储介质
CN111310724A (zh) * 2020-03-12 2020-06-19 苏州科达科技股份有限公司 基于深度学习的活体检测方法、装置、存储介质及设备
CN112200057A (zh) * 2020-09-30 2021-01-08 汉王科技股份有限公司 人脸活体检测方法、装置、电子设备及存储介质
CN113505682A (zh) * 2021-07-02 2021-10-15 杭州萤石软件有限公司 活体检测方法及装置
CN114333078A (zh) * 2021-12-01 2022-04-12 马上消费金融股份有限公司 活体检测方法、装置、电子设备及存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764069B (zh) * 2018-05-10 2022-01-14 北京市商汤科技开发有限公司 活体检测方法及装置
US10956714B2 (en) * 2018-05-18 2021-03-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting living body, electronic device, and storage medium
CN112464690A (zh) * 2019-09-06 2021-03-09 广州虎牙科技有限公司 活体识别方法、装置、电子设备及可读存储介质
CN111091063B (zh) * 2019-11-20 2023-12-29 北京迈格威科技有限公司 活体检测方法、装置及系统
CN110909693B (zh) * 2019-11-27 2023-06-20 深圳华付技术股份有限公司 3d人脸活体检测方法、装置、计算机设备及存储介质
CN111597938B (zh) * 2020-05-07 2022-02-22 马上消费金融股份有限公司 活体检测、模型训练方法及装置
CN111597944B (zh) * 2020-05-11 2022-11-15 腾讯科技(深圳)有限公司 活体检测方法、装置、计算机设备及存储介质
CN111814589A (zh) * 2020-06-18 2020-10-23 浙江大华技术股份有限公司 部位识别方法以及相关设备、装置
CN111767879A (zh) * 2020-07-03 2020-10-13 北京视甄智能科技有限公司 一种活体检测方法
CN112200056B (zh) * 2020-09-30 2023-04-18 汉王科技股份有限公司 人脸活体检测方法、装置、电子设备及存储介质
CN113128481A (zh) * 2021-05-19 2021-07-16 济南博观智能科技有限公司 一种人脸活体检测方法、装置、设备及存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034102A (zh) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 人脸活体检测方法、装置、设备及存储介质
CN110765923A (zh) * 2019-10-18 2020-02-07 腾讯科技(深圳)有限公司 一种人脸活体检测方法、装置、设备及存储介质
CN111310724A (zh) * 2020-03-12 2020-06-19 苏州科达科技股份有限公司 基于深度学习的活体检测方法、装置、存储介质及设备
CN112200057A (zh) * 2020-09-30 2021-01-08 汉王科技股份有限公司 人脸活体检测方法、装置、电子设备及存储介质
CN113505682A (zh) * 2021-07-02 2021-10-15 杭州萤石软件有限公司 活体检测方法及装置
CN114333078A (zh) * 2021-12-01 2022-04-12 马上消费金融股份有限公司 活体检测方法、装置、电子设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZITONG YU; JUN WAN; YUNXIAO QIN; XIAOBAI LI; STAN Z. LI; GUOYING ZHAO: "NAS-FAS: Static-Dynamic Central Difference Network Search for Face Anti-Spoofing", ARXIV.ORG, 3 November 2020 (2020-11-03), XP081807406, DOI: 10.1109/TPAMI.2020.3036338 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117688538A (zh) * 2023-12-13 2024-03-12 上海深感数字科技有限公司 一种基于数字身份安全防范的互动教育管理方法及系统
CN117688538B (zh) * 2023-12-13 2024-06-07 上海深感数字科技有限公司 一种基于数字身份安全防范的互动教育管理方法及系统
CN117576791A (zh) * 2024-01-17 2024-02-20 杭州魔点科技有限公司 基于生机线索和垂直领域大模型范式的活体检测方法
CN117576791B (zh) * 2024-01-17 2024-04-30 杭州魔点科技有限公司 基于生机线索和垂直领域大模型范式的活体检测方法

Also Published As

Publication number Publication date
CN114333078A (zh) 2022-04-12
CN114333078B (zh) 2024-07-23
US20240282149A1 (en) 2024-08-22
EP4345777A1 (en) 2024-04-03

Similar Documents

Publication Publication Date Title
WO2023098128A1 (zh) 活体检测方法及装置、活体检测系统的训练方法及装置
US10997787B2 (en) 3D hand shape and pose estimation
JP7130057B2 (ja) 手部キーポイント認識モデルの訓練方法及びその装置、手部キーポイントの認識方法及びその装置、並びにコンピュータプログラム
CN111652121B (zh) 一种表情迁移模型的训练方法、表情迁移的方法及装置
WO2020103700A1 (zh) 一种基于微表情的图像识别方法、装置以及相关设备
Houshmand et al. Facial expression recognition under partial occlusion from virtual reality headsets based on transfer learning
CN113395542B (zh) 基于人工智能的视频生成方法、装置、计算机设备及介质
CN109753875A (zh) 基于人脸属性感知损失的人脸识别方法、装置与电子设备
US20220309836A1 (en) Ai-based face recognition method and apparatus, device, and medium
CN111444826B (zh) 视频检测方法、装置、存储介质及计算机设备
CN111598168B (zh) 图像分类方法、装置、计算机设备及介质
CN111046734A (zh) 基于膨胀卷积的多模态融合视线估计方法
US12080098B2 (en) Method and device for training multi-task recognition model and computer-readable storage medium
WO2023178906A1 (zh) 活体检测方法及装置、电子设备、存储介质、计算机程序、计算机程序产品
CN113298018A (zh) 基于光流场和脸部肌肉运动的假脸视频检测方法及装置
US20230281833A1 (en) Facial image processing method and apparatus, device, and storage medium
CN112257513A (zh) 一种手语视频翻译模型的训练方法、翻译方法及系统
CN113516665A (zh) 图像分割模型的训练方法、图像分割方法、装置、设备
Shehada et al. A lightweight facial emotion recognition system using partial transfer learning for visually impaired people
CN117237547B (zh) 图像重建方法、重建模型的处理方法和装置
CN112528760B (zh) 图像处理方法、装置、计算机设备及介质
CN117540007A (zh) 基于相似模态补全的多模态情感分析方法、系统和设备
CN110866508B (zh) 识别目标对象的形态的方法、装置、终端及存储介质
WO2024059374A1 (en) User authentication based on three-dimensional face modeling using partial face images
CN116959123A (zh) 一种人脸活体检测方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22899948

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18568910

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2022899948

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022899948

Country of ref document: EP

Effective date: 20231226

NENP Non-entry into the national phase

Ref country code: DE