WO2021114916A1 - Risk detection method, apparatus and device - Google Patents


Info

Publication number
WO2021114916A1
WO2021114916A1, PCT/CN2020/124141, CN2020124141W
Authority
WO
WIPO (PCT)
Prior art keywords
detection
images
trained
camera
user
Prior art date
Application number
PCT/CN2020/124141
Other languages
French (fr)
Chinese (zh)
Inventor
Cao Jiajiong (曹佳炯)
Original Assignee
Alipay (Hangzhou) Information Technology Co., Ltd. (支付宝(杭州)信息技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay (Hangzhou) Information Technology Co., Ltd.
Publication of WO2021114916A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • This document relates to the field of data processing technology, in particular to a risk detection method, device and equipment.
  • Binocular cameras are widely used in security scenarios such as access control and surveillance: liveness detection is performed on the two images obtained by a single shot of the binocular camera in order to defend against liveness attacks.
  • The effective imaging area of a binocular camera is the intersection of the imaging areas of its two cameras, and the area outside the effective imaging area is called the blind zone. An object in the blind zone can be imaged by only one camera; the other camera cannot capture it. Because the blind zone of a binocular camera has this characteristic, a risk of liveness attack remains.
  • The purpose of one or more embodiments of this specification is to provide a risk detection method, apparatus, and device that combine consistency detection and liveness detection to determine whether the liveness detection of the user to be detected is at risk of being attacked. This not only solves the problem of liveness attacks in the blind zone of a multi-lens camera, but also greatly improves security.
  • One or more embodiments of this specification provide a risk detection method, including: acquiring a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
  • One or more embodiments of this specification provide a risk detection device, including: an acquisition module, which acquires a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; a first training module, which performs consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result, and performs liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and a determining module, which determines whether there is a risk of attack based on the first detection result and the second detection result.
  • One or more embodiments of this specification provide a risk detection device, including a processor and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: acquire a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; perform consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; perform liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determine, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
  • One or more embodiments of this specification provide a storage medium for storing computer-executable instructions that, when executed, implement the following process: acquiring a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
  • In an embodiment of this specification, the biometric image combination of the user to be detected is detected based on the pre-trained first detection model and second detection model; consistency detection and liveness detection are thus combined to determine whether the liveness detection of the user to be detected is at risk of being attacked. This ensures detection of liveness attacks both in the effective imaging area and in the blind zone of the multi-lens camera, which not only solves the problem of liveness attacks in the blind zone but also greatly improves security.
  • FIG. 1 is a schematic diagram of a scenario of a risk detection method provided by one or more embodiments of this specification.
  • FIG. 2 is a schematic diagram of the first flow of a risk detection method provided by one or more embodiments of this specification.
  • FIG. 3 is a schematic diagram of the second flow of a risk detection method provided by one or more embodiments of this specification.
  • FIG. 4 is a detailed diagram of step S100-4 provided by one or more embodiments of this specification.
  • FIG. 5 is a detailed diagram of step S100-6 provided by one or more embodiments of this specification.
  • FIG. 6 is a detailed diagram of step A2 provided in one or more embodiments of this specification.
  • FIG. 7 is a schematic diagram of the module composition of a risk detection device provided by one or more embodiments of this specification.
  • FIG. 8 is a schematic structural diagram of a risk detection device provided by one or more embodiments of this specification.
  • Face recognition is now familiar to most people.
  • The main risk in face recognition is the liveness attack, in which an attacker presents photos, videos, or other imitations of a real person in an attempt to pass face recognition.
  • Collecting face images with a binocular camera and performing liveness detection on the color (RGB) image and near-infrared (IR) image obtained from a single capture is the current mainstream anti-attack measure.
  • However, a binocular camera has a shooting blind zone, and the blind zone has the characteristic that one camera can image it while the other cannot. This has become a breakthrough point for attackers to carry out liveness attacks: within the blind zone, the liveness detection of the binocular camera degrades to that of a monocular camera, and the success rate of attacks increases greatly.
  • In view of this, one or more embodiments of this specification provide a risk detection method, apparatus, and device, which can detect whether the liveness detection of the user to be detected is at risk of being attacked.
  • The first detection model and the second detection model are trained in advance. The first detection model performs consistency detection on the biometric image combination of the user to be detected to obtain a first detection result, and the second detection model performs liveness detection on the same biometric image combination to obtain a second detection result.
  • The biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; the designated body part is, for example, the iris or the face.
  • Figure 1 is a schematic diagram of an application scenario of the risk detection method provided by one or more embodiments of this specification.
  • The scenario includes a multi-lens camera and a risk detection device, where the multi-lens camera includes multiple cameras. The risk detection device may be a device independent of the multi-lens camera, or a device deployed within the multi-lens camera.
  • The multi-lens camera takes a single shot of the designated body part of the user to be detected to obtain a biometric image combination including multiple images. The risk detection device acquires the biometric image combination, performs consistency detection on the multiple images through the pre-trained first detection model to obtain a first detection result, performs liveness detection on the multiple images through the pre-trained second detection model to obtain a second detection result, and determines, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
  • In this way, consistency detection and liveness detection are combined to solve the problem of liveness attacks in the blind zone of the multi-lens camera and to greatly improve security.
  • FIG. 2 is a schematic flowchart of a risk detection method provided by one or more embodiments of this specification; the method in FIG. 2 can be executed by the risk detection device in FIG. 1. As shown in FIG. 2, the method includes steps S102 to S106.
  • Step S102 Acquire a biometric image combination of the user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; the designated body part may be the iris, the face, etc.
  • Step S104 Perform consistency detection on the acquired multiple images through the pre-trained first detection model to obtain a first detection result; and perform liveness detection on the acquired multiple images through the pre-trained second detection model to obtain a second detection result.
  • The first detection model detects the consistency of the multiple images included in the biometric image combination. In a blind-zone attack, the differences between the multiple images taken by the cameras of the multi-lens camera are relatively large, so performing consistency detection on the multiple images through the first detection model can effectively intercept blind-zone attacks. The training process of the first detection model is described in detail later.
  • The second detection model performs liveness detection on the multiple images included in the biometric image combination to intercept liveness attacks in the effective imaging area. The second detection model may be the same as or different from an existing liveness detection model; since the training of a liveness detection model is a technique well known to those skilled in the art, it is not described in detail in the embodiments of this specification.
  • Step S106 According to the first detection result and the second detection result, it is determined whether the live detection of the user to be detected is at risk of being attacked.
  • The biometric image combination of the user to be detected is detected based on the pre-trained first detection model and second detection model; consistency detection and liveness detection are thus combined to determine whether the liveness detection of the user to be detected is at risk of being attacked. This ensures detection of liveness attacks both in the effective imaging area and in the blind zone of the multi-lens camera, solving the problem of blind-zone liveness attacks and greatly improving security.
  • In order to realize consistency detection of the multiple images included in the biometric image combination, in one or more embodiments of this specification, as shown in FIG. 3, steps S100-2 to S100-6 are further included before step S102.
  • Step S100-2 Obtain the biometric image combination to be trained collected by the multi-lens camera.
  • Step S100-4 Perform second preprocessing on the biometric image combination to be trained to obtain a sample set to be trained.
  • Step S100-6 Train the first detection model based on the sample set.
  • In one or more embodiments of this specification, step S100-3 is further included before step S100-4.
  • Step S100-3 Perform calibration processing on the multiple cameras in the multi-lens camera to obtain conversion matrices.
  • Specifically, a camera is randomly selected from the multiple cameras of the multi-lens camera as the reference camera, and the remaining cameras are taken as cameras to be calibrated. A black-and-white checkerboard calibration board is used to solve the intrinsic parameter matrices and the extrinsic parameter matrix between the reference camera and each camera to be calibrated. According to the extrinsic parameter matrix, the two images collected by the reference camera and the corresponding camera to be calibrated are converted from the world coordinate system to the camera coordinate system; according to the corresponding intrinsic parameter matrices, the two images are converted from the camera coordinate system to the pixel coordinate system. The first coordinate of the same pixel in the two images in the world coordinate system and its second coordinate in the pixel coordinate system are then determined, and the matrix that converts the first coordinate into the second coordinate is solved. This matrix is taken as the conversion matrix between the reference camera and the corresponding camera to be calibrated, and the conversion matrices so obtained are taken as the conversion matrices of the multi-lens camera.
  • The process of solving the intrinsic and extrinsic parameter matrices between the reference camera and a camera to be calibrated, and then determining the conversion matrix between the two cameras from them, can refer to the calibration of an existing binocular camera and is not described in further detail in the embodiments of this specification.
  • After the reference camera is selected, the method further includes recording the camera information of the reference camera; and after the conversion matrices of the multi-lens camera are obtained, the method further includes saving the conversion matrices. In this way, when consistency detection is performed on the biometric image combination of the user to be detected, the reference image in the combination can be determined according to the recorded camera information of the reference camera, and spatial alignment processing can be performed on the images to be aligned according to the saved conversion matrices.
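The specification does not fix the algebraic form of the saved conversion matrix; a common choice for mapping pixel coordinates between two cameras is a 3x3 matrix applied in homogeneous coordinates, which is what this illustrative sketch assumes:

```python
import numpy as np

def warp_pixel(H, xy):
    """Map a pixel coordinate through a 3x3 conversion matrix
    in homogeneous coordinates, as used for spatial alignment."""
    x, y = xy
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# The identity matrix leaves coordinates unchanged; a translation
# matrix shifts them, mimicking alignment to the reference camera.
H_translate = np.array([[1.0, 0.0, 5.0],
                        [0.0, 1.0, -3.0],
                        [0.0, 0.0, 1.0]])
print(warp_pixel(H_translate, (10, 10)))  # -> (15.0, 7.0)
```

In practice the matrix would come from the checkerboard calibration described above; here it is a hand-written example.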
  • step S100-4 includes step S100-4-2 to step S100-4-6.
  • Step S100-4-2 according to the conversion matrix, perform spatial alignment processing on the images in the biometric image combination to be trained.
  • According to the conversion matrix between each camera to be calibrated and the reference camera, the corresponding image to be aligned is spatially aligned with the reference image.
  • Specifically, the image name of each image includes the camera identifier of the corresponding camera; correspondingly, determining the one-to-one correspondence between the multiple images and the multiple cameras includes determining the correspondence according to the camera identifier included in each image name. Alternatively, the multiple cameras of the multi-lens camera each establish a transmission channel with the risk detection device in advance, and each camera sends its captured images to the risk detection device through its channel; in that case, determining the correspondence includes obtaining the camera identifier from the stored channel-to-camera mapping according to the channel identifier of the channel through which each image was received, and determining the camera with that identifier as the camera corresponding to the image.
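A minimal sketch of the two correspondence strategies described above. The filename pattern ("cam&lt;id&gt;_...") and the channel table are illustrative assumptions; the specification only requires that an identifier be recoverable from the name or the channel:

```python
import re

def camera_from_name(image_name):
    """Extract the camera identifier embedded in the image name
    (assumed pattern: 'cam<id>_...')."""
    m = re.match(r"cam(\d+)_", image_name)
    return int(m.group(1)) if m else None

def camera_from_channel(channel_id, channel_to_camera):
    """Look up the camera identifier from the stored
    channel-to-camera mapping."""
    return channel_to_camera[channel_id]

channel_to_camera = {"ch-a": 1, "ch-b": 2}
print(camera_from_name("cam2_face_0001.png"))          # -> 2
print(camera_from_channel("ch-b", channel_to_camera))  # -> 2
```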
  • Step S100-4-4 The biometric image combinations after spatial alignment processing are used as samples, and the samples are divided into positive samples and negative samples, where the images in a positive sample are consistent and the images in a negative sample are inconsistent.
  • the images in the biometric image combination after spatial alignment are the same.
  • In one or more embodiments, dividing the samples into positive and negative samples in step S100-4-4 includes: determining whether the multiple images included in each sample are consistent; if they are consistent, determining the sample as a positive sample; if they are inconsistent, determining the sample as a negative sample.
  • Dividing the samples into positive and negative samples in step S100-4-4 may also include cross-combining different samples to obtain negative samples, where a negative sample obtained by cross-combination includes multiple images that correspond one-to-one with the multiple cameras but were taken at different times.
  • For ease of understanding, take a quadruple (four-lens) camera as an example. The four cameras are denoted camera 1, camera 2, camera 3, and camera 4, and the images they capture are correspondingly denoted image 1, image 2, image 3, and image 4. Image 2 in sample 1 can be exchanged with image 2 in sample 2 to obtain a new sample 1 and a new sample 2, and both new samples are determined to be negative samples. Alternatively, image 2 in sample 1 can be replaced with image 2 in sample 2, and image 3 in sample 1 can be replaced with image 3 in sample 4, to obtain a new sample 1 that is determined to be a negative sample. It should be pointed out that the cross-combination method can be set according to actual needs.
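The exchange described above can be sketched as follows; each sample is represented as a list of per-camera images (here just string placeholders), and swapping one camera's image between two samples taken at different times yields two negative samples:

```python
def cross_combine(sample_a, sample_b, camera_index):
    """Swap the image at camera_index between two samples, producing
    two negative samples whose images were taken at different times."""
    new_a, new_b = list(sample_a), list(sample_b)
    new_a[camera_index], new_b[camera_index] = (
        sample_b[camera_index], sample_a[camera_index])
    return new_a, new_b

sample1 = ["s1_img1", "s1_img2", "s1_img3", "s1_img4"]
sample2 = ["s2_img1", "s2_img2", "s2_img3", "s2_img4"]
neg1, neg2 = cross_combine(sample1, sample2, camera_index=1)
print(neg1)  # -> ['s1_img1', 's2_img2', 's1_img3', 's1_img4']
```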
  • step S100-4-6 the positive sample and the negative sample are determined as the sample set to be trained.
  • Specifically, step S100-6 includes steps S100-6-2 to S100-6-6.
  • S100-6-2 Divide the sample set into a training set and a test set; wherein the training set and the test set include the same proportion of positive samples and negative samples.
  • Specifically, according to a preset ratio of positive to negative samples, a first number of positive samples and a second number of negative samples are randomly selected from the sample set and determined as the training set; likewise, a third number of positive samples and a fourth number of negative samples are randomly selected from the sample set and determined as the test set. The ratio of the first number to the second number equals the ratio of the third number to the fourth number, so that the training set and the test set contain the same proportion of positive and negative samples.
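A sketch of such a stratified split: sampling positives and negatives separately keeps the class proportion identical in both sets. The 80/20 split fraction is an illustrative assumption; the specification only fixes the equal class proportion:

```python
import random

def stratified_split(positives, negatives, train_fraction=0.8, seed=0):
    """Randomly split positives and negatives so the training and test
    sets contain the same positive/negative proportion."""
    rng = random.Random(seed)
    pos, neg = list(positives), list(negatives)
    rng.shuffle(pos)
    rng.shuffle(neg)
    n_pos = int(len(pos) * train_fraction)
    n_neg = int(len(neg) * train_fraction)
    train = pos[:n_pos] + neg[:n_neg]
    test = pos[n_pos:] + neg[n_neg:]
    return train, test

train, test = stratified_split([f"pos{i}" for i in range(100)],
                               [f"neg{i}" for i in range(50)])
print(len(train), len(test))  # -> 120 30
```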
  • S100-6-4 Perform the third preprocessing on the training set and the test set to obtain the target training set and the target test set.
  • Specifically, the one-to-one correspondence between each image in the training set and the cameras is determined; according to this correspondence, the images corresponding to each camera are collected from the training set into a corresponding training subset; the conversion parameters of each camera are determined from the images in its training subset; and, according to the conversion parameters, preset conversion processing is performed on the images corresponding to each camera in the training set and the test set to obtain the target training set and the target test set.
  • The conversion parameters of each camera are determined from the images in its training subset, and the preset conversion processing performed according to those parameters maps the pixels of each image into a fixed interval to facilitate training.
  • The conversion parameters and the preset conversion processing can be set according to actual needs. For example, the conversion parameters include the pixel mean and the pixel variance; the preset conversion processing subtracts the pixel mean from the value of each pixel of each image and divides the result by the pixel variance, and the image formed by the results is determined as the target image. The target training set and the target test set are thus obtained.
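A sketch of this per-camera preset conversion. Note that the specification says to divide by the pixel variance (standard deviation is the more common normalization choice, but the text is followed here):

```python
import numpy as np

def fit_conversion(training_subset):
    """Compute the conversion parameters (pixel mean, pixel variance)
    from one camera's training subset."""
    pixels = np.concatenate([img.ravel() for img in training_subset])
    return pixels.mean(), pixels.var()

def apply_conversion(image, mean, var):
    """Preset conversion: subtract the mean, divide by the variance."""
    return (image - mean) / var

subset = [np.array([[0.0, 2.0], [4.0, 6.0]])]
mean, var = fit_conversion(subset)
print(mean, var)  # -> 3.0 5.0
print(apply_conversion(subset[0], mean, var))
```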
  • S100-6-6 A training operation is performed based on the target training set and the target test set to obtain the first detection model.
  • Specifically, the images included in the target training set are input to a twin (Siamese) network for binary classification training to obtain an initial detection model. The target test set is then input to the initial detection model to obtain a detection result. If the detection result meets a preset condition, the initial detection model is determined as the first detection model; if it does not, the model is retrained on the target training set to obtain a new initial detection model, which is again evaluated on the test set, and so on until the first detection model is obtained.
  • The detection result represents the probability that the images in a biometric image combination are consistent, and the preset condition is, for example, that the probability is greater than a preset probability.
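The trained twin network itself is not specified beyond the description above. As an illustrative untrained stand-in (not the patent's model), a consistency pseudo-probability between two spatially aligned images can be sketched with normalized cross-correlation:

```python
import numpy as np

def consistency_score(img_a, img_b):
    """Score consistency of two aligned images via normalized
    cross-correlation in [-1, 1], mapped to a pseudo-probability
    in [0, 1]. A stand-in for the trained twin-network output."""
    a = img_a.ravel() - img_a.mean()
    b = img_b.ravel() - img_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    ncc = float(a @ b / denom) if denom else 1.0
    return (ncc + 1.0) / 2.0

rng = np.random.default_rng(0)
img = rng.random((8, 8))
print(consistency_score(img, img))        # close to 1.0 (identical)
print(consistency_score(img, 1.0 - img))  # close to 0.0 (inverted)
```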
  • In this way, consistency detection is performed on the images included in the biometric image combination, and the result is combined with the liveness detection result to determine whether the liveness detection is at risk of being attacked.
  • Considering that the shooting angles of the multiple cameras of the multi-lens camera are different, first preprocessing is performed on the multiple images. Specifically, in step S104, performing consistency detection on the multiple images through the pre-trained first detection model includes steps A2 and A4.
  • Step A2 Perform first preprocessing on multiple images to obtain processed images; specifically, as shown in FIG. 6, in one or more embodiments of this specification, step A2 includes step A2-2 to step A2-10.
  • Step A2-2 Determine the one-to-one correspondence between the multiple images and the multiple cameras in the multi-camera.
  • The implementation of step A2-2 can refer to the foregoing description; repeated parts are not described again here.
  • Step A2-4 Obtain the conversion matrices of the multi-lens camera and the conversion parameters corresponding to each of the multiple cameras, where the conversion matrices were obtained by calibrating the multiple cameras before the first detection model was trained, and the conversion parameters were obtained by performing the third preprocessing on the sample set when the first detection model was trained.
  • For ease of description, the multi-lens camera is denoted an N-lens camera, where N is greater than or equal to 2; the saved N-1 conversion matrices of the N-lens camera and the conversion parameters corresponding to each of the N cameras are acquired.
  • Step A2-6 Perform spatial alignment processing on multiple images according to the conversion matrix.
  • Specifically, the reference camera is determined according to the saved camera information of the reference camera; the image corresponding to the reference camera among the multiple images is taken as the reference image, and the other images are taken as images to be aligned. According to the conversion matrix between the camera corresponding to each image to be aligned and the reference camera, a spatial alignment operation is performed on that image so that it is spatially aligned with the reference image.
  • Step A2-8 Perform preset conversion processing on the corresponding spatially aligned image according to the conversion parameters.
  • The implementation of step A2-8 can refer to the foregoing description; repeated parts are not described again here.
  • Step A2-10 Determine the image processed by the preset conversion as a processed image.
  • Step A4 Input the processed image to the first detection model to perform consistency detection on the processed image based on the first detection model.
  • Since the consistency detection based on the first detection model has better detection capability for liveness attacks in the blind zone of the multi-lens camera, and the liveness detection based on the second detection model has better detection capability for liveness attacks in the effective imaging area, the embodiments of this specification determine whether the liveness detection of the user to be detected is at risk of being attacked according to both the first detection result obtained from the first detection model and the second detection result obtained from the second detection model.
  • Specifically, step S106 includes: performing a weighted calculation on the first detection result and the second detection result according to preset weighting coefficients to obtain a calculation result; if the calculation result is greater than a preset first threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked; if the calculation result is not greater than the preset first threshold, determining that the liveness detection of the user to be detected is at risk of being attacked.
  • The weighting coefficients and the first threshold can be set as needed in practical applications. The first detection result and the second detection result are thus fused by weighting to determine whether the liveness detection of the user to be detected is at risk of being attacked, improving security.
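The weighted-fusion decision can be sketched as follows; the equal weights and the 0.6 threshold are illustrative assumptions, since the specification leaves both to be set in practical applications:

```python
def fuse_and_decide(first_result, second_result,
                    w_consistency=0.5, w_liveness=0.5,
                    first_threshold=0.6):
    """Weighted fusion of the consistency score (first detection
    result) and liveness score (second detection result). A score
    not greater than the first threshold means the liveness
    detection is at risk of being attacked."""
    score = w_consistency * first_result + w_liveness * second_result
    at_risk = score <= first_threshold
    return score, at_risk

score, at_risk = fuse_and_decide(0.9, 0.8)
print(score, at_risk)  # score close to 0.85, at_risk False
```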
  • In another embodiment, step S106 includes: if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked.
  • Specifically, it may first be determined whether the first detection result is greater than the preset second threshold. If it is not, the multiple images included in the biometric image combination of the user to be detected are determined to be inconsistent, and the liveness detection of the user is at risk of being attacked. If it is, the images are determined to be consistent, and it is then determined whether the second detection result is greater than the preset third threshold: if it is, the liveness detection is not at risk of being attacked; if it is not, the liveness detection is at risk of being attacked.
  • Alternatively, it may first be determined whether the second detection result is greater than the preset third threshold. If it is not, the liveness detection of the user to be detected is at risk of being attacked. If it is, it is then determined whether the first detection result is greater than the preset second threshold: if it is not, the images are determined to be inconsistent and the liveness detection is at risk of being attacked; if it is, the liveness detection is not at risk of being attacked.
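Both orderings reduce to the same conjunction: no risk only when both results exceed their thresholds. A sketch, with illustrative threshold values (the specification leaves them configurable):

```python
def threshold_decide(first_result, second_result,
                     second_threshold=0.5, third_threshold=0.5):
    """Return True if the liveness detection is at risk of being
    attacked: the images must be consistent (first result above the
    second threshold) AND live (second result above the third
    threshold) for the detection to be risk-free."""
    consistent = first_result > second_threshold
    live = second_result > third_threshold
    return not (consistent and live)

print(threshold_decide(0.9, 0.9))  # -> False (no risk)
print(threshold_decide(0.3, 0.9))  # -> True  (inconsistent images)
```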
  • In one or more embodiments, the multi-lens camera is, for example, a binocular camera.
  • the biometric image combination of the user to be detected is detected; the consistency detection and the living body detection are thus combined to determine whether the live detection of the user to be detected is at risk of being attacked. This not only ensures the detection of live attacks in the effective imaging area of the multi-lens camera, but also ensures the detection of live attacks in its blind area, solving the problem of live attacks in the blind area of the multi-lens camera and greatly improving security.
  • corresponding to the risk detection methods described in FIGS. 2 to 6 above, and based on the same technical concept, one or more embodiments of this specification also provide a risk detection device.
  • Figure 7 is a schematic diagram of the module composition of a risk detection device provided by one or more embodiments of this specification. The device is used to execute the risk detection method described in Figures 2 to 6.
  • the device includes: an acquisition module 201, which acquires a biometric image combination of a user to be detected, wherein the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; a first training module 202, which performs consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result, and performs living body detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and a determination module 203, which determines, based on the first detection result and the second detection result, whether the live detection of the user to be detected is at risk of being attacked.
  • the risk detection device detects the biometric image combination of the user to be detected based on the pre-trained first detection model and second detection model; the consistency detection is thus combined with the live detection to determine whether the live detection of the user to be detected is at risk of being attacked. This not only ensures the detection of live attacks in the effective imaging area of the multi-lens camera, but also ensures their detection in its blind area, solving the problem of live attacks in the blind area of the multi-lens camera and greatly improving security.
  • the first training module 202 performs first preprocessing on the plurality of images to obtain processed images, and inputs the processed images to the first detection model to perform consistency detection on the processed images based on the first detection model.
  • the first training module 202 determines the one-to-one correspondence between the multiple images and the multiple cameras in the multi-lens camera; obtains the conversion matrix of the multi-lens camera and the conversion parameter corresponding to each camera, where the conversion matrix is obtained by calibrating the multiple cameras before the first detection model is trained, and the conversion parameter is obtained by performing third preprocessing on the sample set to be trained when the first detection model is trained; performs spatial alignment processing on the multiple images according to the conversion matrix; performs preset conversion processing on the corresponding spatially aligned images according to the conversion parameters; and determines the images after the preset conversion processing as the processed images.
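As a rough illustration of this first preprocessing, the sketch below models the conversion matrix as a 3x3 homography applied by inverse mapping, and the per-camera conversion parameters as mean/standard-deviation normalization. Both concrete choices are assumptions: the specification fixes neither the form of the conversion matrix nor that of the conversion parameters.

```python
import numpy as np

def warp_with_homography(image, H):
    """Spatially align a grayscale image using inverse mapping with a 3x3 homography H."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ coords              # map output pixels back to source
    src = (src[:2] / src[2]).round().astype(int)
    valid = (0 <= src[0]) & (src[0] < w) & (0 <= src[1]) & (src[1] < h)
    out = np.zeros_like(image)
    out[ys.ravel()[valid], xs.ravel()[valid]] = image[src[1][valid], src[0][valid]]
    return out

def first_preprocessing(images_by_camera, homographies, conv_params):
    """Align each camera's image, then apply that camera's conversion parameters."""
    processed = {}
    for cam, img in images_by_camera.items():
        aligned = warp_with_homography(img, homographies[cam])
        mean, std = conv_params[cam]             # assumed fitted at training time
        processed[cam] = (aligned - mean) / std  # preset conversion processing
    return processed
```

In a production system the alignment would typically use an image library's calibrated remapping rather than this hand-rolled nearest-neighbour warp.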
  • the determining module 203 performs a weighted calculation on the first detection result and the second detection result according to a preset weighting coefficient to obtain a calculation result; if the calculation result is greater than the preset first threshold, it determines that the live detection of the user to be detected is not at risk of being attacked; if the calculation result is not greater than the preset first threshold, it determines that the live detection of the user to be detected is at risk of being attacked.
  • the determining module 203 determines, if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, that the live detection of the user to be detected is not at risk of being attacked.
  • the device further includes a second training module, which obtains a combination of biometric images to be trained collected by the multi-lens camera, performs second preprocessing on the biometric image combination to be trained to obtain a sample set to be trained, and trains the first detection model based on the sample set.
  • the device further includes a calibration module, which performs calibration processing on the multiple cameras in the multi-lens camera to obtain a conversion matrix; the second training module performs spatial alignment processing on the images in the biometric image combination to be trained according to the conversion matrix, takes the biometric image combinations after the spatial alignment processing as samples, and divides the samples into positive samples and negative samples, wherein the images in a positive sample are consistent and the images in a negative sample are not; the positive samples and the negative samples are determined as the sample set to be trained.
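The positive/negative sample construction could be sketched as below. Treating the aligned images from one single shot as a positive sample and a cross-shot pairing as a negative sample is one plausible reading of "consistent"; the helper name and binocular (two-image) setup are illustrative assumptions.

```python
import random

def build_sample_set(aligned_captures):
    """aligned_captures: list of (left_img, right_img) pairs, each from a single
    shot and already spatially aligned using the calibration conversion matrix."""
    # Positive samples: both images come from the same single shot.
    positives = [(left, right, 1) for left, right in aligned_captures]
    # Negative samples: pair a left image with a right image from a different shot.
    negatives = []
    n = len(aligned_captures)
    for i in range(n):
        j = (i + 1) % n
        negatives.append((aligned_captures[i][0], aligned_captures[j][1], 0))
    samples = positives + negatives
    random.shuffle(samples)
    return samples
```

Labels (1 = consistent, 0 = inconsistent) then supervise the first detection model.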
  • the second training module divides the sample set into a training set and a test set, wherein the training set and the test set include the same proportion of positive and negative samples; performs third preprocessing on the training set and the test set to obtain a target training set and a target test set; and performs a training operation based on the target training set and the target test set to obtain the first detection model.
  • the second training module determines a one-to-one correspondence between each image in the training set and each camera; acquires, according to the determined correspondence, the images corresponding to each camera from the training set to obtain a corresponding training subset; determines the conversion parameters of the corresponding camera according to the images included in each training subset; and performs, according to the conversion parameters, preset conversion processing on the images corresponding to the corresponding cameras in the training set and the test set to obtain the target training set and the target test set.
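A minimal sketch of this split and third preprocessing, under two stated assumptions: the equal positive/negative ratio is kept by splitting each class separately, and the per-camera conversion parameters are mean/std statistics fitted on the training subset only and then applied to both sets. All names are illustrative.

```python
import numpy as np

def stratified_split(samples, train_frac=0.8):
    """Split so the training and test sets keep the same positive/negative ratio.
    Each sample's last element is its label (1 = positive, 0 = negative)."""
    pos = [s for s in samples if s[-1] == 1]
    neg = [s for s in samples if s[-1] == 0]
    k_pos, k_neg = int(len(pos) * train_frac), int(len(neg) * train_frac)
    return pos[:k_pos] + neg[:k_neg], pos[k_pos:] + neg[k_neg:]

def third_preprocessing(train_imgs_by_cam, test_imgs_by_cam):
    """Fit per-camera conversion parameters on the training subset only,
    then apply the preset conversion to both the training and test images."""
    params = {cam: (np.mean(imgs), np.std(imgs))
              for cam, imgs in train_imgs_by_cam.items()}
    def normalize(d):
        return {cam: [(img - params[cam][0]) / params[cam][1] for img in imgs]
                for cam, imgs in d.items()}
    return normalize(train_imgs_by_cam), normalize(test_imgs_by_cam), params
```

Fitting the parameters on the training subset alone mirrors the specification's ordering: the conversion parameters come out of training-time preprocessing and are later reused at detection time.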
  • one or more embodiments of this specification also provide a risk detection device, which is used to execute the above risk detection method.
  • FIG. 8 is a schematic structural diagram of a risk detection device provided by one or more embodiments of this specification.
  • the risk detection equipment may differ considerably in configuration or performance, and may include one or more processors 301 and a memory 302.
  • the memory 302 may store one or more application programs or data; the memory 302 may be short-term storage or persistent storage.
  • the application program stored in the memory 302 may include one or more modules (not shown in the figure), and each module may include a series of computer-executable instructions in the risk detection device.
  • the processor 301 may be configured to communicate with the memory 302, and execute a series of computer-executable instructions in the memory 302 on the risk detection device.
  • the risk detection device may also include one or more power supplies 303, one or more wired or wireless network interfaces 304, one or more input and output interfaces 305, one or more keyboards 306, and so on.
  • the risk detection device includes a memory and one or more programs, wherein the one or more programs are stored in the memory and may include one or more modules, each of which may include a series of computer-executable instructions for the risk detection equipment; the one or more programs are configured to be executed by one or more processors and include computer-executable instructions for performing the following:
  • acquiring a biometric image combination of a user to be detected, wherein the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing live detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the live detection of the user to be detected is at risk of being attacked.
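The overall flow these instructions describe might look like the following sketch. The two pre-trained models are stand-in callables returning scores in [0, 1] (the specification does not prescribe their architecture), and the second/third threshold values are illustrative placeholders.

```python
def risk_detection(images, consistency_model, liveness_model,
                   second_threshold=0.5, third_threshold=0.5):
    """End-to-end flow: one multi-lens capture in, risk verdict out.
    `images` is the biometric image combination from a single shot."""
    first_result = consistency_model(images)   # pre-trained first detection model
    second_result = liveness_model(images)     # pre-trained second detection model
    # Live detection is at risk unless both checks clear their thresholds.
    at_risk = not (first_result > second_threshold
                   and second_result > third_threshold)
    return {"consistency": first_result,
            "liveness": second_result,
            "at_risk": at_risk}
```

A deployment would swap the lambdas used below for the actual trained networks and feed the preprocessed image combination.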
  • the performing consistency detection on the plurality of images through the pre-trained first detection model includes: performing first preprocessing on the plurality of images to obtain processed images; and inputting the processed images to the first detection model to perform consistency detection on the processed images based on the first detection model.
  • the performing the first preprocessing on the multiple images to obtain the processed images includes: determining the one-to-one correspondence between the multiple images and the multiple cameras in the multi-lens camera; acquiring the conversion matrix of the multi-lens camera and the conversion parameter corresponding to each of the multiple cameras, wherein the conversion matrix is obtained by calibrating the multiple cameras before the first detection model is trained, and the conversion parameter is obtained by performing third preprocessing on the sample set to be trained when the first detection model is trained; performing spatial alignment processing on the multiple images according to the conversion matrix; performing preset conversion processing on the corresponding spatially aligned images according to the conversion parameters; and determining the images after the preset conversion processing as the processed images.
  • the determining whether the live detection of the user to be detected is at risk of being attacked according to the first detection result and the second detection result includes: performing weighted calculation on the first detection result and the second detection result according to a preset weighting coefficient to obtain a calculation result; if the calculation result is greater than the preset first threshold, determining that the live detection of the user to be detected is not at risk of being attacked; if the calculation result is not greater than the preset first threshold, determining that the live detection of the user to be detected is at risk of being attacked.
  • the determining whether the live detection of the user to be detected is at risk of being attacked according to the first detection result and the second detection result includes: if the first detection result is greater than the preset second threshold and the second detection result is greater than the preset third threshold, determining that the live detection of the user to be detected is not at risk of being attacked.
  • the method further includes: obtaining the combination of biometric images to be trained collected by the multi-lens camera; performing second preprocessing on the combination of biometric images to be trained to obtain a sample set to be trained; and training the first detection model based on the sample set.
  • the method further includes: calibrating the multiple cameras in the multi-lens camera to obtain a conversion matrix; the performing the second preprocessing on the combination of biometric images to be trained to obtain a sample set to be trained includes: performing spatial alignment processing on the images in the combination of biometric images to be trained according to the conversion matrix; taking the biometric image combinations after the spatial alignment processing as samples and dividing the samples into positive samples and negative samples, wherein the images in a positive sample are consistent and the images in a negative sample are not; and determining the positive samples and the negative samples as the sample set to be trained.
  • the training of the first detection model based on the sample set includes: dividing the sample set into a training set and a test set, wherein the training set and the test set include the same proportion of positive and negative samples; performing third preprocessing on the training set and the test set to obtain a target training set and a target test set; and performing a training operation based on the target training set and the target test set to obtain the first detection model.
  • the performing third preprocessing on the training set and the test set to obtain the target training set and the target test set includes: determining the one-to-one correspondence between each image in the training set and each camera; acquiring, according to the determined correspondence, the images corresponding to each camera from the training set to obtain a corresponding training subset; determining the conversion parameters of the corresponding camera according to the images included in each training subset; and performing, according to the conversion parameters, preset conversion processing on the images corresponding to the corresponding cameras in the training set and the test set to obtain the target training set and the target test set.
  • one or more embodiments of this specification also provide a storage medium for storing computer-executable instructions; in a specific embodiment, the storage medium may be a USB flash drive, an optical disk, a hard disk, or the like.
  • the computer-executable instructions stored in the storage medium, when executed by a processor, can realize the following process: acquiring a biometric image combination of a user to be detected, wherein the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing live detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the live detection of the user to be detected is at risk of being attacked.
  • the biometric image combination of the user to be detected is detected; the consistency detection and the living body detection are thus combined to determine whether the live detection of the user to be detected is at risk of being attacked. This not only ensures the detection of live attacks in the effective imaging area of the multi-lens camera, but also ensures the detection of live attacks in its blind area, solving the problem of live attacks in the blind area of the multi-lens camera and greatly improving security.
  • the performing consistency detection on the multiple images through the pre-trained first detection model includes: performing first preprocessing on the multiple images to obtain processed images; and inputting the processed images to the first detection model to perform consistency detection on the processed images based on the first detection model.
  • the performing the first preprocessing on the plurality of images to obtain the processed images includes: determining the one-to-one correspondence between the plurality of images and the multiple cameras in the multi-lens camera; acquiring the conversion matrix of the multi-lens camera and the conversion parameter corresponding to each of the multiple cameras, wherein the conversion matrix is obtained by calibrating the multiple cameras before the first detection model is trained, and the conversion parameter is obtained by performing third preprocessing on the sample set to be trained when the first detection model is trained; performing spatial alignment processing on the plurality of images according to the conversion matrix; performing preset conversion processing on the corresponding spatially aligned images according to the conversion parameters; and determining the images after the preset conversion processing as the processed images.
  • the determining, according to the first detection result and the second detection result, whether the live detection of the user to be detected is at risk of being attacked includes: performing weighted calculation on the first detection result and the second detection result according to a preset weighting coefficient to obtain a calculation result; if the calculation result is greater than a preset first threshold, determining that the live detection of the user to be detected is not at risk of being attacked; if the calculation result is not greater than the preset first threshold, determining that the live detection of the user to be detected is at risk of being attacked.
  • the determining, according to the first detection result and the second detection result, whether the live detection of the user to be detected is at risk of being attacked includes: if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, determining that the live detection of the user to be detected is not at risk of being attacked.
  • the method further includes: obtaining the combination of biometric images to be trained collected by the multi-lens camera; performing second preprocessing on the combination of biometric images to be trained to obtain a sample set to be trained; and training the first detection model based on the sample set.
  • the method further includes: calibrating the multiple cameras in the multi-lens camera to obtain a conversion matrix; the performing the second preprocessing on the combination of biometric images to be trained to obtain a sample set to be trained includes: performing spatial alignment processing on the images in the combination of biometric images to be trained according to the conversion matrix; taking the biometric image combinations after the spatial alignment processing as samples and dividing the samples into positive samples and negative samples, wherein the images in a positive sample are consistent and the images in a negative sample are not; and determining the positive samples and the negative samples as the sample set to be trained.
  • the training of the first detection model based on the sample set includes: dividing the sample set into a training set and a test set, wherein the training set and the test set include the same proportion of positive and negative samples; performing third preprocessing on the training set and the test set to obtain a target training set and a target test set; and performing a training operation based on the target training set and the target test set to obtain the first detection model.
  • the performing third preprocessing on the training set and the test set to obtain the target training set and the target test set includes: determining the one-to-one correspondence between each image in the training set and each camera; acquiring, according to the determined correspondence, the images corresponding to each camera from the training set to obtain a corresponding training subset; determining the conversion parameters of the corresponding camera according to the images included in each training subset; and performing, according to the conversion parameters, preset conversion processing on the images corresponding to the corresponding cameras in the training set and the test set to obtain the target training set and the target test set.
  • when the computer-executable instructions stored in the storage medium provided by one or more embodiments of this specification are executed by the processor, the biometric image combination of the user to be detected is detected based on the pre-trained first detection model and second detection model; the consistency detection and the live detection are thus combined to determine whether the live detection of the user to be detected is at risk of being attacked. This not only ensures the detection of live attacks in the effective imaging area of the multi-lens camera, but also ensures their detection in its blind area, solving the problem of live attacks in the blind area of the multi-lens camera and greatly improving security.
  • improvements to a technology can be clearly distinguished as hardware improvements (for example, improvements to circuit structures such as diodes, transistors, and switches) or software improvements (improvements to a method flow).
  • the improvement of many methods and processes of today can be regarded as a direct improvement of the hardware circuit structure.
  • Designers almost always get the corresponding hardware circuit structure by programming the improved method flow into the hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized by the hardware entity module.
  • the improved method flow can also be programmed into a Programmable Logic Device (PLD) (for example, a Field Programmable Gate Array, FPGA), which is an integrated circuit whose logic function is determined by the user through programming of the device. Such programming is mostly implemented using a Hardware Description Language (HDL); there are many HDLs, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL, Lava, Lola, MyHDL, PALASM, RHDL, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language), and Verilog.
  • the controller can be implemented in any suitable manner.
  • the controller may take the form of, for example, a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the memory's control logic.
  • in addition to implementing the controller purely as computer-readable program code, it is entirely possible to program the method steps so that the controller realizes the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component; or even, the devices for realizing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a laptop computer, a cell phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or Any combination of these devices.
  • one or more embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this specification can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • One or more embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • One or more embodiments of this specification can also be practiced in distributed computing environments. In these distributed computing environments, tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media including storage devices.

Abstract

A risk detection method, apparatus and device. The method comprises: obtaining a biometric feature image combination of a user to be subjected to detection, wherein the biometric feature image combination comprises: a plurality of images obtained by using a multi-ocular camera to photograph, a single time, a specified body part of the user to be subjected to detection (S102); performing consistency detection on the plurality of acquired images by means of a pre-trained first detection model, so as to obtain a first detection result, and performing living body detection on the plurality of acquired images by means of a pre-trained second detection model, so as to obtain a second detection result (S104); and according to the first detection result and the second detection result, determining whether the living body detection of the user to be subjected to detection has a risk of attack (S106).

Description

Risk detection method, device and equipment

Technical field
This document relates to the field of data processing technology, and in particular to a risk detection method, device, and equipment.
Background
As people's requirements for security become higher and higher, binocular cameras are widely used in various security scenarios, such as access control and surveillance, to avoid live attacks by detecting the two images obtained from a single shot of the binocular camera. Generally, the effective imaging area of a binocular camera is the intersection of the imaging areas of its two cameras, and the area outside the effective imaging area is called the blind zone. Only one camera can capture an image of the blind zone; the other camera cannot. Because the blind zone of the binocular camera has this characteristic, a risk of live attack remains.
Summary of the invention
The purpose of one or more embodiments of this specification is to provide a risk detection method, device, and equipment that combine consistency detection with liveness detection to determine whether the liveness detection of a user to be detected is at risk of being attacked, which not only solves the problem of live attacks in the blind zone of a multi-lens camera but also greatly improves security.
To solve the above technical problems, one or more embodiments of this specification are implemented as follows.
One or more embodiments of this specification provide a risk detection method, including: acquiring a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-camera; performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
One or more embodiments of this specification provide a risk detection device, including: an acquisition module, which acquires a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-camera; a detection module, which performs consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result, and performs liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and a determining module, which determines, according to the first detection result and the second detection result, whether there is a risk of attack.
One or more embodiments of this specification provide a risk detection device, including: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: acquire a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-camera; perform consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; perform liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determine, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
One or more embodiments of this specification provide a storage medium for storing computer-executable instructions that, when executed, implement the following process: acquiring a biometric image combination of a user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-camera; performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
Based on the pre-trained first detection model and second detection model, an embodiment of this specification detects the biometric image combination of the user to be detected, thereby combining consistency detection with liveness detection to determine whether the liveness detection of the user to be detected is at risk of being attacked. This ensures that liveness attacks are detected both in the effective imaging area of the multi-camera and in its blind zone, which not only solves the problem of liveness attacks in the blind zone of the multi-camera but also greatly improves security.
Description of the drawings
In order to describe the technical solutions in one or more embodiments of this specification more clearly, the following briefly introduces the drawings needed in the description of the embodiments. Obviously, the drawings in the following description show only some of the embodiments described in this specification; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
Fig. 1 is a schematic diagram of a scenario of a risk detection method provided by one or more embodiments of this specification.

Fig. 2 is a first schematic flowchart of a risk detection method provided by one or more embodiments of this specification.

Fig. 3 is a second schematic flowchart of a risk detection method provided by one or more embodiments of this specification.

Fig. 4 is a detailed diagram of step S100-4 provided by one or more embodiments of this specification.

Fig. 5 is a detailed diagram of step S100-6 provided by one or more embodiments of this specification.

Fig. 6 is a detailed diagram of step A2 provided by one or more embodiments of this specification.

Fig. 7 is a schematic diagram of the module composition of a risk detection device provided by one or more embodiments of this specification.

Fig. 8 is a schematic structural diagram of a risk detection device provided by one or more embodiments of this specification.
Detailed description
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of this specification, the technical solutions will be described clearly and completely below with reference to the drawings in one or more embodiments of this specification. Obviously, the described embodiments are only some, rather than all, of the embodiments of this specification. Based on one or more embodiments of this specification, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the protection scope of this document.
Face recognition is familiar to most people. The main risk in face recognition is the liveness attack, in which an attacker uses photos, videos or other imitations of a real person in an attempt to pass face recognition. The current mainstream anti-attack measure is to capture face images with a binocular camera and to perform liveness detection separately on the color image (RGB image) and the near-infrared image (IR image) obtained in a single capture. However, the binocular camera has a shooting blind zone, in which one camera can capture an image while the other cannot; this blind zone therefore becomes a breakthrough point for liveness attacks, degrading the liveness detection of the binocular camera to that of a monocular camera and greatly increasing the success rate of attacks. In this regard, one or more embodiments of this specification provide a risk detection method, device and equipment that can detect whether the liveness detection of a user to be detected is at risk of being attacked. Specifically, a first detection model and a second detection model are trained in advance; the first detection model performs consistency detection on the biometric image combination of the user to be detected to obtain a first detection result, and the second detection model performs liveness detection on the biometric image combination to obtain a second detection result; whether the liveness detection of the user to be detected is at risk of being attacked is then determined according to the first detection result and the second detection result. The biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-camera; designated body parts include, for example, the iris and the human face. By combining consistency detection with liveness detection, the problem of liveness attacks in the blind zone of the multi-camera is solved and security is greatly improved.
Fig. 1 is a schematic diagram of an application scenario of the risk detection method provided by one or more embodiments of this specification. As shown in Fig. 1, the scenario includes a multi-camera and a risk detection device, where the multi-camera includes multiple cameras; the risk detection device may be a device independent of the multi-camera, or a device deployed within the multi-camera.
Specifically, the multi-camera takes a single shot of the designated body part of the user to be detected to obtain a biometric image combination including multiple images. The risk detection device acquires this biometric image combination, performs consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result, and performs liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result. According to the first detection result and the second detection result, it determines whether the liveness detection of the user to be detected is at risk of being attacked. By combining consistency detection with liveness detection, the problem of liveness attacks in the blind zone of the multi-camera is solved and security is greatly improved.
Based on the foregoing application scenario, one or more embodiments of this specification provide a risk detection method. Fig. 2 is a schematic flowchart of a risk detection method provided by one or more embodiments of this specification; the method in Fig. 2 can be executed by the risk detection device in Fig. 1. As shown in Fig. 2, the method includes steps S102-S106.
Step S102: Acquire a biometric image combination of the user to be detected, where the biometric image combination includes multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-camera; the designated body part may be, for example, the iris or the human face.
Step S104: Perform consistency detection on the acquired multiple images through the pre-trained first detection model to obtain a first detection result; and perform liveness detection on the acquired multiple images through the pre-trained second detection model to obtain a second detection result.
The first detection model is used to perform consistency detection on the multiple images included in the biometric image combination. During a blind-zone attack, the images captured by the different cameras of the multi-camera differ substantially, so consistency detection by the first detection model can effectively intercept blind-zone attacks; the training process of the first detection model is described in detail later. The second detection model is used to perform liveness detection on the multiple images, in order to intercept liveness attacks in the effective imaging area; it may be the same as or different from existing liveness detection models. Since the training of liveness detection models is a technique well known to those skilled in the art, the training process of the second detection model is not detailed in the embodiments of this specification.
Step S106: According to the first detection result and the second detection result, determine whether the liveness detection of the user to be detected is at risk of being attacked.
In one or more embodiments of this specification, the biometric image combination of the user to be detected is detected based on the pre-trained first detection model and second detection model, combining consistency detection with liveness detection to determine whether the liveness detection of the user to be detected is at risk of being attacked. This ensures that liveness attacks are detected both in the effective imaging area of the multi-camera and in its blind zone, which not only solves the problem of liveness attacks in the blind zone of the multi-camera but also greatly improves security.
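As a concrete illustration of step S106, the two detection results can be fused with a simple rule: treat the liveness check as at risk unless the images are judged mutually consistent and the subject is judged live. The fusion rule, threshold values and function names below are illustrative assumptions, since the specification does not fix a particular combination scheme.

```python
def assess_attack_risk(consistency_prob, liveness_prob,
                       consistency_thresh=0.5, liveness_thresh=0.5):
    """Fuse the first (consistency) and second (liveness) detection
    results; the rule and thresholds here are illustrative assumptions."""
    consistent = consistency_prob >= consistency_thresh   # first detection result
    live = liveness_prob >= liveness_thresh               # second detection result
    # At risk of attack unless the images agree AND the subject is live.
    return not (consistent and live)

# Inconsistent images suggest a blind-zone attack even if liveness passes.
assert assess_attack_risk(0.2, 0.9) is True
# Both checks pass -> no attack risk detected.
assert assess_attack_risk(0.9, 0.9) is False
```

The conjunctive rule reflects the text's goal of covering both attack surfaces: the consistency branch catches blind-zone attacks, the liveness branch catches attacks in the effective imaging area.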
In order to perform consistency detection on the multiple images included in the biometric image combination, in one or more embodiments of this specification, as shown in Fig. 3, steps S100-2 to S100-6 are further included before step S102.
Step S100-2: Acquire biometric image combinations to be trained, collected by the multi-camera.

Step S100-4: Perform second preprocessing on the biometric image combinations to be trained to obtain a sample set to be trained.

Step S100-6: Train the first detection model based on the sample set.
Since the cameras of the multi-camera have different shooting angles, the multiple images it acquires in a single shot also differ from one another. To prevent this difference from degrading the detection performance of the trained first detection model, in one or more embodiments of this specification, step S100-3 is further included before step S100-4.
Step S100-3: Perform calibration processing on the multiple cameras in the multi-camera to obtain conversion matrices.
Specifically, one camera is randomly selected from the cameras of the multi-camera as the reference camera, and the cameras other than the reference camera are treated as cameras to be calibrated. Using a black-and-white checkerboard calibration board, the intrinsic matrix and extrinsic matrix between the reference camera and each camera to be calibrated are solved. According to the extrinsic matrix, the two images captured in a single shot by the reference camera and the corresponding camera to be calibrated are transformed from the world coordinate system to the camera coordinate system; according to the corresponding intrinsic matrix, the two images are transformed from the camera coordinate system to the pixel coordinate system. The first coordinate, in the world coordinate system, and the second coordinate, in the pixel coordinate system, of the same pixel point in the two images are determined, as is the transformation matrix that converts the first coordinate into the second coordinate. This matrix is taken as the conversion matrix between the reference camera and the corresponding camera to be calibrated, and the set of conversion matrices so obtained is taken as the conversion matrices of the multi-camera. The process of solving the intrinsic and extrinsic matrices between the reference camera and a camera to be calibrated, and then determining the conversion matrix between the two cameras from them, can follow the calibration procedure of existing binocular cameras and is not described in further detail in the embodiments of this specification.
Further, after the reference camera is selected, the method also includes recording the camera information of the reference camera; and after the conversion matrices of the multi-camera are obtained, it also includes saving the conversion matrices, so that when consistency detection is performed on the biometric image combination of the user to be detected, the reference image in the combination is determined according to the recorded camera information of the reference camera, and the images to be aligned in the combination are spatially aligned according to the saved conversion matrices.
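The world-to-pixel mapping in step S100-3 is the composition of the extrinsic and intrinsic transforms. The sketch below, using made-up (uncalibrated) matrices as stand-ins, shows the two coordinate transforms described above and checks that they agree with the combined projection matrix P = K[R | t], which maps the first (world) coordinate to the second (pixel) coordinate:

```python
import numpy as np

# Illustrative stand-in calibration values, not real calibrated parameters.
K = np.array([[800.0, 0.0, 320.0],   # intrinsic matrix: focal lengths, principal point
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # extrinsic rotation
t = np.array([[0.1], [0.0], [0.0]])  # extrinsic translation

def world_to_pixel(p_world):
    """World coords -> camera coords (extrinsics) -> pixel coords (intrinsics)."""
    p_cam = R @ p_world.reshape(3, 1) + t    # world -> camera coordinate system
    p_img = K @ p_cam                        # camera -> pixel coordinate system
    return (p_img[:2] / p_img[2]).ravel()    # perspective division

# The matrix converting the first (world) coordinate into the second (pixel)
# coordinate is the 3x4 projection matrix P = K [R | t].
P = K @ np.hstack([R, t])

p = np.array([0.0, 0.0, 2.0])                # a point 2 units in front of the camera
uv = world_to_pixel(p)
uv_P = P @ np.append(p, 1.0)
assert np.allclose(uv, uv_P[:2] / uv_P[2])   # both routes give the same pixel
```

In practice the intrinsic and extrinsic matrices would come from checkerboard calibration, as the text notes; the example only illustrates how the two stages compose.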
After the conversion matrices of the multi-camera are obtained, the second preprocessing can be performed on the images in the biometric image combinations to be trained according to these matrices, so as to prevent differences between the images from degrading the detection performance of the trained first detection model. Specifically, as shown in Fig. 4, step S100-4 includes steps S100-4-2 to S100-4-6.
Step S100-4-2: According to the conversion matrices, perform spatial alignment processing on the images in the biometric image combinations to be trained.
Specifically, for each biometric image combination to be trained, the one-to-one correspondence between its multiple images and the multiple cameras is determined; the image corresponding to the reference camera is taken as the reference image, and the remaining images are taken as images to be aligned. According to the conversion matrix between the camera corresponding to each image to be aligned and the reference camera, a spatial alignment operation is performed on that image so that it is spatially aligned with the reference image.
Optionally, the image name of each image includes the camera identifier of the corresponding camera; accordingly, determining the one-to-one correspondence between the multiple images and the multiple cameras includes determining the correspondence according to the camera identifier included in each image name. Alternatively, each of the cameras of the multi-camera establishes a transmission channel with the risk detection device in advance and sends its captured images to the device through the corresponding channel; accordingly, determining the correspondence includes obtaining, according to the channel identifier of the transmission channel on which each image is received, the corresponding camera identifier from the mapping between channel identifiers and camera identifiers, and determining the camera with that identifier as the camera corresponding to the image.
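The spatial alignment of step S100-4-2 can be sketched as an inverse warp: each pixel of the output (reference-aligned) image is sampled from the image to be aligned at the location given by the conversion matrix. Treating the conversion matrix as a 3x3 pixel-to-pixel homography and using nearest-neighbour sampling are simplifying assumptions for illustration:

```python
import numpy as np

def align_to_reference(img, H):
    """Warp `img` into the reference camera's pixel grid. H is assumed to be
    a 3x3 matrix mapping reference-grid pixels to `img` pixels; sampling is
    nearest-neighbour, with zeros where the source falls outside the frame."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]                       # reference pixel grid
    ref = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = H @ ref                                     # where to sample in img
    src = src[:2] / src[2]
    sx = np.rint(src[0]).astype(int).reshape(h, w)
    sy = np.rint(src[1]).astype(int).reshape(h, w)
    valid = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

# Toy conversion matrix: pure translation by 2 pixels in x.
H = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
img = np.arange(16.0).reshape(4, 4)
aligned = align_to_reference(img, H)
assert aligned[0, 0] == img[0, 2]   # output pixel (0,0) sampled from source (2,0)
```

A production implementation would typically use a library warp routine with interpolation; the sketch only shows the role the saved conversion matrix plays in the alignment.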
Step S100-4-4: Take the spatially aligned biometric image combinations as samples and divide the samples into positive samples and negative samples, where the images in a positive sample are consistent and the images in a negative sample are inconsistent.
Normally, the images in a spatially aligned biometric image combination are consistent. However, when the multi-camera collects a biometric image combination, shaking under external force, partial occlusion and similar phenomena may cause the aligned images to differ slightly. Based on this, dividing the samples into positive and negative samples in step S100-4-4 includes: determining whether the multiple images of each sample are consistent; if they are, determining the sample as a positive sample, and otherwise determining it as a negative sample. Further, since such shaking and partial occlusion are incidental, the number of negative samples may be insufficient for model training. For this reason, dividing the samples into positive and negative samples in step S100-4-4 may also include cross-combining different samples to obtain negative samples, where a cross-combined negative sample includes images that correspond one-to-one to the multiple cameras but were taken in different shots.
For example, suppose the multi-camera is a four-camera. For ease of description, its four cameras are denoted camera 1, camera 2, camera 3 and camera 4, and the images each captures are correspondingly denoted image 1, image 2, image 3 and image 4. Image 2 of sample 1 can be exchanged with image 2 of sample 2 to obtain a new sample 1 and a new sample 2, both of which are determined as negative samples. Alternatively, image 2 of sample 1 can be replaced with image 2 of sample 2, and image 3 of sample 1 with image 3 of sample 4, and the resulting new sample 1 is determined as a negative sample. It should be noted that the cross-combination scheme can be set as needed in practical applications.
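The cross-combination above can be sketched by representing each single-shot sample as a mapping from camera to image. Swapping exactly one camera's image per pair of samples is one illustrative scheme; as the text notes, the pattern can be chosen freely.

```python
import itertools

def cross_combine(samples):
    """Swap same-camera images across different single-shot samples to
    synthesise extra inconsistent (negative) samples. Each sample is a
    dict {camera_id: image}; the swap pattern here is one possible choice."""
    negatives = []
    for a, b in itertools.combinations(samples, 2):
        for cam in a:
            neg = dict(a)
            neg[cam] = b[cam]   # this image now comes from a different shot
            negatives.append(neg)
    return negatives

s1 = {"cam1": "s1_img1", "cam2": "s1_img2"}
s2 = {"cam1": "s2_img1", "cam2": "s2_img2"}
negatives = cross_combine([s1, s2])
# Each negative mixes images taken in different shots.
assert {"cam1": "s2_img1", "cam2": "s1_img2"} in negatives
```

Every generated negative keeps the one-image-per-camera structure while breaking the single-shot consistency, which is exactly the property the first detection model must learn to reject.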
Step S100-4-6: Determine the positive samples and the negative samples as the sample set to be trained.
By performing the second preprocessing on the biometric image combinations to be trained and dividing them into positive and negative samples, the first detection model can be trained based on the sample set that includes these positive and negative samples. Specifically, as shown in Fig. 5, step S100-6 includes steps S100-6-2 to S100-6-6.
Step S100-6-2: Divide the sample set into a training set and a test set, where the training set and the test set have the same ratio of positive samples to negative samples.
Specifically, according to a preset ratio of positive samples to negative samples, a first number of positive samples and a second number of negative samples are randomly selected from the sample set, and the selected samples are determined as the training set; likewise, a third number of positive samples and a fourth number of negative samples are randomly selected from the sample set, and the selected samples are determined as the test set. When the first number equals the third number, the second number equals the fourth number; when the first number differs from the third number, the second number differs from the fourth number.
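The split in step S100-6-2 amounts to stratified sampling: positives and negatives are partitioned separately so that both subsets keep the preset ratio. A minimal sketch, where the 80/20 fraction and the function name are assumed illustrative choices:

```python
import random

def stratified_split(positives, negatives, train_frac=0.8, seed=0):
    """Split so that training and test sets keep the same positive/negative
    ratio; train_frac is an assumed default, not fixed by the specification."""
    rng = random.Random(seed)

    def split(items):
        items = list(items)
        rng.shuffle(items)
        k = int(len(items) * train_frac)
        return items[:k], items[k:]

    pos_tr, pos_te = split(positives)
    neg_tr, neg_te = split(negatives)
    return pos_tr + neg_tr, pos_te + neg_te

# 100 positives and 50 negatives -> the 2:1 ratio is preserved in both subsets.
train, test = stratified_split(range(100), range(100, 150))
assert len(train) == 120 and len(test) == 30
```

Splitting each class independently is what guarantees the "same ratio" property regardless of whether the training and test sets are the same size.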
Step S100-6-4: Perform third preprocessing on the training set and the test set to obtain a target training set and a target test set.
Specifically, the one-to-one correspondence between the images in the training set and the cameras is determined; according to this correspondence, the images corresponding to each camera are gathered from the training set into a corresponding training subset; the conversion parameters of the corresponding camera are determined from the images of each training subset; and according to the conversion parameters, preset conversion processing is performed on the images corresponding to each camera in the training set and the test set to obtain the target training set and the target test set.
Since the pixel values of the images in the training set and the test set span a large, uncontrollable range that is unfavorable for training, in the embodiments of this specification the conversion parameters of each camera are determined from the images of its training subset, and the preset conversion processing is applied to the images of the corresponding camera in the training set and the test set according to those parameters, so that the pixel values of each image are mapped into a fixed interval that facilitates training. Both the conversion parameters and the preset conversion processing can be set as needed in practical applications. For example, the conversion parameters include the pixel mean and the pixel variance, and the preset conversion processing subtracts the pixel mean from the value of each pixel of an image and divides the result by the pixel variance; the image formed by the resulting values is determined as the target image, yielding the target training set and the target test set.
Further, after the conversion parameters of each camera are obtained, they are saved, so that when consistency detection is performed on the biometric image combination of the user to be detected, the first preprocessing can be applied to the images in that combination according to the saved conversion parameters.
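The third preprocessing can be sketched as fitting a per-camera pixel mean and pixel variance on that camera's training subset and then applying (pixel - mean) / variance, following the example in the text (which divides by the variance rather than the standard deviation); the array shapes and values below are illustrative:

```python
import numpy as np

def fit_conversion_params(images):
    """Conversion parameters for one camera, computed over its training
    subset: the pixel mean and pixel variance."""
    stack = np.stack(images).astype(float)
    return stack.mean(), stack.var()

def apply_conversion(img, mean, var):
    """Preset conversion processing: (pixel - mean) / variance, mapping
    pixel values into a small, controlled interval."""
    return (np.asarray(img, dtype=float) - mean) / var

# Toy training subset for one camera: two 2x2 images.
imgs = [np.full((2, 2), 10.0), np.full((2, 2), 20.0)]
mean, var = fit_conversion_params(imgs)
target = apply_conversion(imgs[0], mean, var)
```

The fitted parameters are what gets saved for use in the first preprocessing at detection time, so that the images of the user to be detected are mapped into the same interval the model saw during training.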
Step S100-6-6: Perform a training operation based on the target training set and the target test set to obtain the first detection model.
Specifically, the images of the target training set are input into a twin (Siamese) network for binary classification training to obtain an initial detection model. The target test set is then input into the initial detection model to obtain a detection result: if the detection result satisfies a preset condition, the initial detection model is determined as the first detection model; if it does not, model training is repeated based on the target training set to obtain a new initial detection model, which is again evaluated on the test set, until the first detection model is obtained. The detection result characterizes the probability that the images in a biometric image combination are consistent, and the preset condition is, for example, that this probability is greater than a preset probability.
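A minimal forward-pass sketch of such a twin network: both images pass through one shared embedding, and a binary head scores the absolute feature difference as a consistency probability. The weights here are random and untrained and the layer sizes are assumptions; a real model would learn the parameters with a cross-entropy loss on the positive and negative samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared-weight ("twin"/Siamese) embedding: both branches use the SAME
# parameters. Sizes and the logistic head are illustrative choices.
W_embed = rng.normal(size=(16, 64))   # flattened 8x8 image -> 16-dim feature
w_head = rng.normal(size=16)
b_head = 0.0

def embed(x):
    return np.tanh(W_embed @ x.ravel())

def consistency_prob(img_a, img_b):
    """Binary classification on the two branch features: probability that
    the pair of images is consistent."""
    d = np.abs(embed(img_a) - embed(img_b))   # branch-feature difference
    z = w_head @ d + b_head
    return 1.0 / (1.0 + np.exp(-z))           # sigmoid -> probability

a = rng.normal(size=(8, 8))
p_same = consistency_prob(a, a)               # identical images -> logit 0 -> 0.5
p_diff = consistency_prob(a, rng.normal(size=(8, 8)))
```

Because the two branches share weights, identical inputs always produce a zero feature difference; training pushes the head to map small differences toward "consistent" and large ones toward "inconsistent".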
By training the first detection model, consistency detection can be performed on the images in the biometric image combination of a user to be detected once that combination is acquired, and the result can be combined with the liveness detection result to determine whether the liveness detection is at risk of being attacked.

As mentioned above, the cameras of a multi-lens camera shoot from different angles. To avoid erroneous detection results caused by these differing angles, in one or more embodiments of this specification a first preprocessing is applied to the multiple images in the biometric image combination of the user to be detected before their consistency is checked. Specifically, in step S104, performing consistency detection on the multiple images through the pre-trained first detection model includes steps A2 and A4.
Step A2: perform the first preprocessing on the multiple images to obtain processed images. Specifically, as shown in FIG. 6, in one or more embodiments of this specification, step A2 includes steps A2-2 to A2-10.

Step A2-2: determine the one-to-one correspondence between the multiple images and the cameras of the multi-lens camera.

The implementation of step A2-2 can be found in the related description above; the repeated parts are not described again here.

Step A2-4: obtain the conversion matrices of the multi-lens camera and the conversion parameters corresponding to each of its cameras. The conversion matrices are obtained by calibrating the cameras before the first detection model is trained, and the conversion parameters are obtained by performing the third preprocessing on the sample set to be trained when the first detection model is trained.

Specifically, denote the multi-lens camera as an N-lens camera, where N is greater than or equal to 2, and obtain the saved N-1 conversion matrices of the N-lens camera together with the conversion parameters corresponding to each of the N cameras.

Step A2-6: perform spatial alignment on the multiple images according to the conversion matrices.

Specifically, the reference camera is determined from the saved camera information of the reference camera; among the multiple images, the image corresponding to the reference camera is taken as the reference image and the remaining images are taken as images to be aligned; then, according to the obtained conversion matrix between the camera of each image to be aligned and the reference camera, a spatial alignment operation is performed on that image so that it becomes spatially aligned with the reference image.
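A minimal sketch of step A2-6's spatial alignment: each image to be aligned is warped with its 3x3 conversion matrix so that it lines up with the reference image. The convention assumed here (the matrix maps reference-image coordinates to the source camera's coordinates, applied by inverse mapping with nearest-neighbour sampling) is an illustrative choice; the specification does not fix one.

```python
import numpy as np

def align_to_reference(image, conversion_matrix):
    """Warp a grayscale image into the reference camera's coordinate frame.
    For every pixel of the output (reference) grid, the conversion matrix
    gives the source location to sample; out-of-bounds samples stay 0."""
    h, w = image.shape
    aligned = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx, sy, sw = conversion_matrix @ np.array([x, y, 1.0])
            xi, yi = int(round(sx / sw)), int(round(sy / sw))
            if 0 <= xi < w and 0 <= yi < h:
                aligned[y, x] = image[yi, xi]
    return aligned

# Toy example: a pure 1-pixel horizontal shift between the two cameras.
shift = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
aligned = align_to_reference(img, shift)
```

In practice a library routine such as OpenCV's perspective warp would replace the explicit loop; the loop is kept here only to make the inverse-mapping idea visible.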
Step A2-8: perform the preset conversion process on the corresponding spatially aligned images according to the conversion parameters.

The implementation of step A2-8 can be found in the related description above; the repeated parts are not described again here.

Step A2-10: determine the images after the preset conversion process to be the processed images.

Step A4: input the processed images into the first detection model, and perform consistency detection on them based on the first detection model.
Consistency detection based on the first detection model is better at detecting liveness attacks in the blind zone of the multi-lens camera, while liveness detection based on the second detection model is better at detecting liveness attacks in the effective imaging area of the multi-lens camera. Therefore, in the embodiments of this specification, whether the liveness detection of the user to be detected is at risk of being attacked is determined from the first detection result obtained with the first detection model and the second detection result obtained with the second detection model. Optionally, in one or more embodiments of this specification, step S106 includes: performing a weighted calculation on the first detection result and the second detection result according to preset weighting coefficients to obtain a calculation result; if the calculation result is greater than a preset first threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked; and if the calculation result is not greater than the preset first threshold, determining that the liveness detection of the user to be detected is at risk of being attacked.

Both the weighting coefficients and the first threshold can be set as needed in practical applications. Determining whether the liveness detection of the user to be detected is at risk of being attacked by weighted fusion of the first and second detection results in this way improves security.
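The weighted-fusion variant of step S106 reduces to a few lines. The weight and threshold values below are illustrative placeholders; as the text notes, they are set as needed in practice.

```python
def no_attack_risk_weighted(first_result, second_result,
                            w_consistency=0.5, w_liveness=0.5,
                            first_threshold=0.8):
    """Weighted fusion of the consistency score (first detection result)
    and the liveness score (second detection result); returns True when
    the fused score exceeds the first threshold, i.e. no attack risk."""
    calc = w_consistency * first_result + w_liveness * second_result
    return calc > first_threshold
```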
Alternatively, in one or more embodiments of the present application, step S106 includes: if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked.

Specifically, it is first determined whether the first detection result is greater than the preset second threshold. If it is not, the images in the biometric image combination of the user to be detected are determined to be inconsistent, and the liveness detection of that user is at risk of being attacked. If it is, the images are determined to be consistent, and it is then determined whether the second detection result is greater than the preset third threshold: if it is, the liveness detection of the user to be detected is not at risk of being attacked; if it is not, it is at risk of being attacked. Alternatively, it is first determined whether the second detection result is greater than the preset third threshold. If it is not, the liveness detection of the user to be detected is at risk of being attacked. If it is, it is then determined whether the first detection result is greater than the preset second threshold: if it is not, the images in the biometric image combination are inconsistent and the liveness detection is at risk of being attacked; if it is, the liveness detection is not at risk of being attacked.

In this way, whether the liveness detection of the user to be detected is at risk of being attacked is determined from the first detection result and the second detection result in turn, which improves security.
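The cascaded variant can be sketched the same way: consistency is checked first, and liveness is only consulted once the images are judged consistent. The threshold values are again placeholders.

```python
def no_attack_risk_cascaded(first_result, second_result,
                            second_threshold=0.8, third_threshold=0.8):
    """Sequential check from step S106: the liveness detection is safe only
    if the images are consistent (first result above the second threshold)
    AND the liveness score exceeds the third threshold."""
    if first_result <= second_threshold:
        return False  # images inconsistent: risk of attack
    return second_result > third_threshold
```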
On the basis of any of the above embodiments, optionally, the multi-lens camera is a binocular (two-lens) camera.

In one or more embodiments of this specification, the biometric image combination of the user to be detected is examined based on the pre-trained first detection model and second detection model, so that consistency detection and liveness detection are combined to determine whether the liveness detection of that user is at risk of being attacked. This ensures that liveness attacks in the effective imaging area of the multi-lens camera and liveness attacks in its blind zone are both detected, which not only solves the problem of liveness attacks in the blind zone of a multi-lens camera but also greatly improves security.

Corresponding to the risk detection methods described with reference to FIGS. 2 to 6 above, and based on the same technical concept, one or more embodiments of this specification further provide a risk detection apparatus. FIG. 7 is a schematic diagram of the module composition of a risk detection apparatus provided by one or more embodiments of this specification; the apparatus is configured to execute the risk detection methods described with reference to FIGS. 2 to 6. As shown in FIG. 7, the apparatus includes: an acquisition module 201, which acquires a biometric image combination of a user to be detected, the biometric image combination including multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; a first training module 202, which performs consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result, and performs liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and a determination module 203, which determines, according to the first detection result and the second detection result, whether there is a risk of attack.

The risk detection apparatus provided by one or more embodiments of this specification examines the biometric image combination of the user to be detected based on the pre-trained first and second detection models, combining consistency detection with liveness detection to determine whether the liveness detection of that user is at risk of being attacked. This ensures that liveness attacks in the effective imaging area of the multi-lens camera and in its blind zone are both detected, which not only solves the problem of liveness attacks in the blind zone of a multi-lens camera but also greatly improves security.
Optionally, the first training module 202 performs the first preprocessing on the multiple images to obtain processed images, and inputs the processed images into the first detection model so as to perform consistency detection on them based on the first detection model.

Optionally, the first training module 202 determines the one-to-one correspondence between the multiple images and the cameras of the multi-lens camera; obtains the conversion matrices of the multi-lens camera and the conversion parameters corresponding to each of its cameras, where the conversion matrices are obtained by calibrating the cameras before the first detection model is trained, and the conversion parameters are obtained by performing the third preprocessing on the sample set to be trained when the first detection model is trained; performs spatial alignment on the multiple images according to the conversion matrices; performs the preset conversion process on the corresponding spatially aligned images according to the conversion parameters; and determines the images after the preset conversion process to be the processed images.

Optionally, the determination module 203 performs a weighted calculation on the first detection result and the second detection result according to preset weighting coefficients to obtain a calculation result; if the calculation result is greater than a preset first threshold, it determines that the liveness detection of the user to be detected is not at risk of being attacked; if the calculation result is not greater than the preset first threshold, it determines that the liveness detection of the user to be detected is at risk of being attacked.

Optionally, if the determination module 203 determines that the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, it determines that the liveness detection of the user to be detected is not at risk of being attacked.

Optionally, the apparatus further includes a second training module, which acquires the biometric image combinations to be trained that are collected by the multi-lens camera, performs a second preprocessing on them to obtain a sample set to be trained, and trains the first detection model based on the sample set.

Optionally, the apparatus further includes a calibration module, which calibrates the cameras of the multi-lens camera to obtain the conversion matrices. The second training module performs spatial alignment on the images in the biometric image combinations to be trained according to the conversion matrices; takes the spatially aligned biometric image combinations as samples and divides the samples into positive samples and negative samples, where the images in a positive sample are consistent and the images in a negative sample are inconsistent; and determines the positive samples and the negative samples to be the sample set to be trained.

Optionally, the second training module divides the sample set into a training set and a test set, where the training set and the test set contain the positive samples and the negative samples in the same ratio; performs a third preprocessing on the training set and the test set to obtain a target training set and a target test set; and performs a training operation based on the target training set and the target test set to obtain the first detection model.
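Splitting the sample set so that the training set and the test set keep the same positive:negative ratio is a stratified split: each class is divided separately and the halves are recombined. The names and the 80/20 fraction below are illustrative.

```python
import random

def stratified_split(positive_samples, negative_samples,
                     train_fraction=0.8, seed=0):
    """Divide the sample set into a training set and a test set so that
    both contain positives and negatives in the same ratio, by splitting
    each class separately at the same fraction."""
    rng = random.Random(seed)
    def split_one(samples):
        shuffled = samples[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_fraction)
        return shuffled[:cut], shuffled[cut:]
    pos_train, pos_test = split_one(positive_samples)
    neg_train, neg_test = split_one(negative_samples)
    return pos_train + neg_train, pos_test + neg_test

# 10 positives (ids < 100) and 15 negatives (ids >= 100).
train_set, test_set = stratified_split(list(range(10)), list(range(100, 115)))
```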
Optionally, the second training module determines the one-to-one correspondence between the images in the training set and the cameras; according to the determined correspondence, obtains from the training set the images corresponding to each camera to form the corresponding training subsets; determines the conversion parameters of each camera from the images in its training subset; and, according to the conversion parameters, performs the preset conversion process on the images in the training set and the test set that correspond to each camera, obtaining the target training set and the target test set.
The risk detection apparatus provided by one or more embodiments of this specification examines the biometric image combination of the user to be detected based on the pre-trained first and second detection models, combining consistency detection with liveness detection to determine whether the liveness detection of that user is at risk of being attacked. This ensures that liveness attacks in the effective imaging area of the multi-lens camera and in its blind zone are both detected, which not only solves the problem of liveness attacks in the blind zone of a multi-lens camera but also greatly improves security.

It should be noted that the embodiments of the risk detection apparatus in this specification and the embodiments of the risk detection method in this specification are based on the same inventive concept; the specific implementation of these embodiments can therefore be found in the implementation of the corresponding risk detection methods above and is not repeated here.

Further, corresponding to the risk detection methods described with reference to FIGS. 2 to 6 above, and based on the same technical concept, one or more embodiments of this specification further provide a risk detection device configured to execute the above risk detection methods. FIG. 8 is a schematic structural diagram of a risk detection device provided by one or more embodiments of this specification.

As shown in FIG. 8, the risk detection device may vary considerably depending on its configuration or performance, and may include one or more processors 301 and a memory 302, where the memory 302 may store one or more application programs or data. The memory 302 may be transient storage or persistent storage. An application program stored in the memory 302 may include one or more modules (not shown in the figure), and each module may include a series of computer-executable instructions for the risk detection device. Further, the processor 301 may be configured to communicate with the memory 302 and to execute, on the risk detection device, the series of computer-executable instructions in the memory 302. The risk detection device may also include one or more power supplies 303, one or more wired or wireless network interfaces 304, one or more input/output interfaces 305, one or more keyboards 306, and so on.

In a specific embodiment, the risk detection device includes a memory and one or more programs, where the one or more programs are stored in the memory and may include one or more modules, each module may include a series of computer-executable instructions for the risk detection device, and the one or more programs, configured to be executed by one or more processors, include computer-executable instructions for: acquiring a biometric image combination of a user to be detected, the biometric image combination including multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera; performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked.
Optionally, when the computer-executable instructions are executed, performing consistency detection on the multiple images through the pre-trained first detection model includes: performing the first preprocessing on the multiple images to obtain processed images; and inputting the processed images into the first detection model so as to perform consistency detection on them based on the first detection model.

Optionally, when the computer-executable instructions are executed, performing the first preprocessing on the multiple images to obtain processed images includes: determining the one-to-one correspondence between the multiple images and the cameras of the multi-lens camera; obtaining the conversion matrices of the multi-lens camera and the conversion parameters corresponding to each of its cameras, where the conversion matrices are obtained by calibrating the cameras before the first detection model is trained, and the conversion parameters are obtained by performing the third preprocessing on the sample set to be trained when the first detection model is trained; performing spatial alignment on the multiple images according to the conversion matrices; performing the preset conversion process on the corresponding spatially aligned images according to the conversion parameters; and determining the images after the preset conversion process to be the processed images.

Optionally, when the computer-executable instructions are executed, determining whether the liveness detection of the user to be detected is at risk of being attacked according to the first detection result and the second detection result includes: performing a weighted calculation on the first detection result and the second detection result according to preset weighting coefficients to obtain a calculation result; if the calculation result is greater than a preset first threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked; and if the calculation result is not greater than the preset first threshold, determining that the liveness detection of the user to be detected is at risk of being attacked.

Optionally, when the computer-executable instructions are executed, determining whether the liveness detection of the user to be detected is at risk of being attacked according to the first detection result and the second detection result includes: if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked.

Optionally, when the computer-executable instructions are executed, before the biometric image combination of the user to be detected is acquired, the instructions further include: acquiring the biometric image combinations to be trained that are collected by the multi-lens camera; performing a second preprocessing on the biometric image combinations to be trained to obtain a sample set to be trained; and training the first detection model based on the sample set.

Optionally, when the computer-executable instructions are executed, before the second preprocessing is performed on the biometric image combinations to be trained, the instructions further include: calibrating the cameras of the multi-lens camera to obtain the conversion matrices. Performing the second preprocessing on the biometric image combinations to be trained to obtain the sample set to be trained includes: performing spatial alignment on the images in the biometric image combinations to be trained according to the conversion matrices; taking the spatially aligned biometric image combinations as samples and dividing the samples into positive samples and negative samples, where the images in a positive sample are consistent and the images in a negative sample are inconsistent; and determining the positive samples and the negative samples to be the sample set to be trained.

Optionally, when the computer-executable instructions are executed, training the first detection model based on the sample set includes: dividing the sample set into a training set and a test set, where the training set and the test set contain the positive samples and the negative samples in the same ratio; performing a third preprocessing on the training set and the test set to obtain a target training set and a target test set; and performing a training operation based on the target training set and the target test set to obtain the first detection model.

Optionally, when the computer-executable instructions are executed, performing the third preprocessing on the training set and the test set to obtain the target training set and the target test set includes: determining the one-to-one correspondence between the images in the training set and the cameras; according to the determined correspondence, obtaining from the training set the images corresponding to each camera to form the corresponding training subsets; determining the conversion parameters of each camera from the images in its training subset; and, according to the conversion parameters, performing the preset conversion process on the images in the training set and the test set that correspond to each camera, obtaining the target training set and the target test set.
本说明书一个或多个实施例提供的风险检测设备,基于预先训练的第一检测模型和第二检测模型,对待检测用户的生物特征图像组合进行检测;由此,将一致性检测与活体检测相结合,来确定待检测用户的活体检测是否存在受攻击的风险;既确保了对多目摄像头的有效成像区域的活体攻击进行检测、又确保了对多目摄像头的盲区的活体攻击进行检测,不仅解决了多目摄像头盲区的活体攻击问题,而且极大的提升了安全性。The risk detection device provided by one or more embodiments of this specification detects a combination of biometric images of the user to be detected based on the pre-trained first detection model and second detection model; thus, the consistency detection is compared with the live detection Combine it to determine whether the live detection of the user to be detected is at risk of being attacked; it not only ensures the detection of live attacks in the effective imaging area of the multi-camera camera, but also ensures the detection of live attacks in the blind area of the multi-camera camera. Solve the problem of live attacks in the blind area of the multi-eye camera, and greatly improve the security.
需要说明的是,本说明书中关于风险检测设备的实施例与本说明书中关于风险检测方法的实施例基于同一发明构思,因此该实施例的具体实施可以参见前述对应的风险检测方法的实施,重复之处不再赘述。It should be noted that the embodiment of the risk detection device in this specification and the embodiment of the risk detection method in this specification are based on the same inventive concept. Therefore, the specific implementation of this embodiment can refer to the implementation of the corresponding risk detection method mentioned above. I won't repeat it here.
进一步地,对应上述图2至图6所示的风险检测方法,基于相同的技术构思,本说明书一个或多个实施例还提供了一种存储介质,用于存储计算机可执行指令,一个具体的实施例中,该存储介质可以为U盘、光盘、硬盘等,该存储介质存储的计算机可执行指令在被处理器执行时,能实现以下流程:获取待检测用户的生物特征图像组合,其中,所述生物特征图像组合包括:由多目摄像头对所述待检测用户的指定身体部位进行单次拍摄所得的多个图像;通过预先训练的第一检测模型,对所述多个图像进行一致性检测,得到第一检测结果;以及,通过预先训练的第二检测模型,对所述多个图像进行活体检测,得到第二检测结果;根据所述第一检测结果和所述第二检测结果,确定所述待检测用户的活体检测是否存在受攻击的风险。Further, corresponding to the risk detection methods shown in FIGS. 2 to 6 above, based on the same technical concept, one or more embodiments of this specification also provide a storage medium for storing computer-executable instructions, a specific In an embodiment, the storage medium may be a U disk, an optical disk, a hard disk, etc., and the computer-executable instructions stored in the storage medium can realize the following process when executed by the processor: obtaining a combination of biometric images of the user to be detected, where: The biometric image combination includes: multiple images obtained by a single shot of the designated body part of the user to be detected by a multi-lens camera; and the consistency of the multiple images is performed through a pre-trained first detection model Detection to obtain a first detection result; and, by using a pre-trained second detection model, perform a living detection on the plurality of images to obtain a second detection result; according to the first detection result and the second detection result, It is determined whether the live detection of the user to be detected is at risk of being attacked.
本说明书一个或多个实施例中，基于预先训练的第一检测模型和第二检测模型，对待检测用户的生物特征图像组合进行检测；由此，将一致性检测与活体检测相结合，来确定待检测用户的活体检测是否存在受攻击的风险；既确保了对多目摄像头的有效成像区域的活体攻击进行检测、又确保了对多目摄像头的盲区的活体攻击进行检测，不仅解决了多目摄像头盲区的活体攻击问题，而且极大的提升了安全性。In one or more embodiments of this specification, the biometric image combination of the user to be detected is detected based on the pre-trained first detection model and the pre-trained second detection model. Consistency detection is thereby combined with liveness detection to determine whether the liveness detection of the user to be detected is at risk of being attacked. This ensures that liveness attacks are detected both in the effective imaging area of the multi-lens camera and in its blind area, which not only solves the problem of liveness attacks in the blind area of the multi-lens camera but also greatly improves security.
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述通过预先训练的第一检测模型，对所述多个图像进行一致性检测，包括：对所述多个图像进行第一预处理，得到已处理图像；将所述已处理图像输入至所述第一检测模型，以基于所述第一检测模型对所述已处理图像进行一致性检测。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, the performing consistency detection on the multiple images through the pre-trained first detection model includes: performing first preprocessing on the multiple images to obtain processed images; and inputting the processed images into the first detection model to perform consistency detection on the processed images based on the first detection model.
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述对所述多个图像进行第一预处理，得到已处理图像，包括：确定所述多个图像与所述多目摄像头中的多个摄像头的一一对应关系；获取所述多目摄像头的转换矩阵和所述多个摄像头中每个摄像头对应的转换参数；其中，所述转换矩阵为训练所述第一检测模型之前，对所述多个摄像头进行标定处理而得；所述转换参数为训练所述第一检测模型时，对待训练的样本集进行第三预处理而得；根据所述转换矩阵，对所述多个图像进行空间对齐处理；根据所述转换参数，对对应的所述空间对齐处理后的图像进行预设转换处理；将所述预设转换处理后的图像确定为已处理图像。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, the performing first preprocessing on the multiple images to obtain the processed images includes: determining a one-to-one correspondence between the multiple images and the multiple cameras of the multi-lens camera; acquiring a conversion matrix of the multi-lens camera and a conversion parameter corresponding to each of the multiple cameras, where the conversion matrix is obtained by calibrating the multiple cameras before the first detection model is trained, and the conversion parameters are obtained by performing third preprocessing on a sample set to be trained when the first detection model is trained; performing spatial alignment on the multiple images according to the conversion matrix; performing preset conversion on the corresponding spatially aligned images according to the conversion parameters; and determining the images after the preset conversion as the processed images.
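A minimal sketch of this first preprocessing follows, under two hypothetical simplifications: "spatial alignment" is represented by a per-camera offset standing in for the calibration conversion matrix, and the "preset conversion" by a per-camera (mean, std) normalization. A real system would apply calibration homographies to pixel grids; this only illustrates the data flow.

```python
# Hypothetical first preprocessing: each image is matched to its camera,
# aligned with a calibration-derived offset (standing in for the conversion
# matrix), then normalized with per-camera conversion parameters obtained
# during training.

def align(pixels, offset):
    # Stand-in for spatial alignment based on the calibration result.
    return pixels[offset:] + pixels[:offset]

def first_preprocess(images_by_camera, offsets, conversion_params):
    """images_by_camera: {camera_id: [pixel values]} (one-to-one mapping);
    offsets: per-camera alignment offsets from calibration;
    conversion_params: {camera_id: (mean, std)} from the training stage."""
    processed = {}
    for cam_id, pixels in images_by_camera.items():
        aligned = align(pixels, offsets[cam_id])
        mean, std = conversion_params[cam_id]
        processed[cam_id] = [(p - mean) / std for p in aligned]
    return processed

processed = first_preprocess(
    {"rgb": [1, 2, 3, 4], "ir": [2, 4, 6, 8]},
    offsets={"rgb": 0, "ir": 1},
    conversion_params={"rgb": (2.5, 1.0), "ir": (5.0, 2.0)},
)
```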
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述根据所述第一检测结果和所述第二检测结果，确定所述待检测用户的活体检测是否存在受攻击的风险，包括：根据预设的加权系数，对所述第一检测结果和所述第二检测结果进行加权计算，得到计算结果；若所述计算结果大于预设的第一阈值，则确定所述待检测用户的活体检测不存在受攻击的风险；若所述计算结果不大于预设的第一阈值，则确定所述待检测用户的活体检测存在受攻击的风险。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, the determining, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked includes: performing weighted calculation on the first detection result and the second detection result according to preset weighting coefficients to obtain a calculation result; if the calculation result is greater than a preset first threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked; and if the calculation result is not greater than the preset first threshold, determining that the liveness detection of the user to be detected is at risk of being attacked.
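Assuming both detection results are scores in [0, 1], the weighted rule just described can be sketched as follows; the weighting coefficients and first threshold are illustrative values, not values fixed by the specification:

```python
# Weighted combination of the two detection results: above the first
# threshold means no risk; otherwise the liveness detection is at risk.

def at_risk_weighted(first_result, second_result, w1=0.5, w2=0.5, first_threshold=0.6):
    calculation_result = w1 * first_result + w2 * second_result
    return calculation_result <= first_threshold  # True => at risk

risky = at_risk_weighted(0.3, 0.4)   # low scores -> at risk
safe = at_risk_weighted(0.9, 0.8)    # high scores -> no risk
```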
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述根据所述第一检测结果和所述第二检测结果，确定所述待检测用户的活体检测是否存在受攻击的风险，包括：若所述第一检测结果大于预设的第二阈值、且所述第二检测结果大于预设的第三阈值，则确定所述待检测用户的活体检测不存在受攻击的风险。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, the determining, according to the first detection result and the second detection result, whether the liveness detection of the user to be detected is at risk of being attacked includes: if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, determining that the liveness detection of the user to be detected is not at risk of being attacked.
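The alternative dual-threshold rule can be sketched in the same hypothetical score setting; the threshold values are illustrative:

```python
# Dual-threshold rule: no risk only when the consistency result exceeds the
# second threshold AND the liveness result exceeds the third threshold.

def at_risk_dual(first_result, second_result, second_threshold=0.7, third_threshold=0.7):
    no_risk = first_result > second_threshold and second_result > third_threshold
    return not no_risk  # True => at risk

# A consistent but spoofed image group fails the liveness threshold.
flagged = at_risk_dual(0.9, 0.2)
passed = at_risk_dual(0.9, 0.95)
```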
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述获取待检测的用户的生物特征图像组合之前，还包括：获取所述多目摄像头采集的待训练的生物特征图像组合；对所述待训练的生物特征图像组合进行第二预处理，得到待训练的样本集；基于所述样本集训练所述第一检测模型。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, before the acquiring the biometric image combination of the user to be detected, the process further includes: acquiring biometric image combinations to be trained that are collected by the multi-lens camera; performing second preprocessing on the biometric image combinations to be trained to obtain a sample set to be trained; and training the first detection model based on the sample set.
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述对所述待训练的生物特征图像组合进行第二预处理之前，还包括：对所述多目摄像头中的多个摄像头进行标定处理，得到转换矩阵；所述对所述待训练的生物特征图像组合进行第二预处理，得到待训练的样本集，包括：根据所述转换矩阵，对所述待训练的生物特征图像组合中的图像进行空间对齐处理；将所述空间对齐处理后的生物特征图像组合作为样本，并将所述样本划分为正样本和负样本；其中，所述正样本中的各图像一致，所述负样本中的各图像不一致；将所述正样本和所述负样本确定为待训练的样本集。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, before the performing second preprocessing on the biometric image combinations to be trained, the process further includes: calibrating the multiple cameras of the multi-lens camera to obtain a conversion matrix. The performing second preprocessing on the biometric image combinations to be trained to obtain the sample set to be trained includes: performing spatial alignment on the images in the biometric image combinations to be trained according to the conversion matrix; taking the spatially aligned biometric image combinations as samples and dividing the samples into positive samples and negative samples, where the images in a positive sample are consistent with one another and the images in a negative sample are inconsistent; and determining the positive samples and the negative samples as the sample set to be trained.
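One plausible way to realize the positive/negative split described above is sketched below. This is an assumption, not the specification's exact procedure: positive samples keep the aligned images of one shot together, while negative samples mix images across shots so that the group is inconsistent.

```python
# Build a sample set from spatially aligned image groups. Each group holds
# the images of one multi-lens shot. Positives keep a group intact
# (mutually consistent images); negatives swap in an image from another
# shot, producing an inconsistent group.

def build_sample_set(aligned_groups):
    positives = [("positive", group) for group in aligned_groups]
    negatives = []
    for i, group in enumerate(aligned_groups):
        other = aligned_groups[(i + 1) % len(aligned_groups)]
        mixed = (group[0],) + other[1:]  # images from two different shots
        negatives.append(("negative", mixed))
    return positives + negatives

samples = build_sample_set([("a_rgb", "a_ir"), ("b_rgb", "b_ir")])
```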
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述基于所述样本集训练所述第一检测模型，包括：将所述样本集划分为训练集和测试集；其中，所述训练集和所述测试集包括的所述正样本与所述负样本的比例相同；对所述训练集和所述测试集进行第三预处理，得到目标训练集和目标测试集；基于所述目标训练集和所述目标测试集进行训练操作，得到第一检测模型。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, the training the first detection model based on the sample set includes: dividing the sample set into a training set and a test set, where the ratio of positive samples to negative samples is the same in the training set and the test set; performing third preprocessing on the training set and the test set to obtain a target training set and a target test set; and performing a training operation based on the target training set and the target test set to obtain the first detection model.
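A simple sketch of a split that keeps the positive-to-negative ratio identical in the training set and the test set; the 80/20 fraction is an assumption, as the specification does not fix one:

```python
# Split positives and negatives separately so that both the training set
# and the test set contain positives and negatives in the same ratio.

def split_sample_set(positives, negatives, train_fraction=0.8):
    def cut(samples):
        k = int(len(samples) * train_fraction)
        return samples[:k], samples[k:]
    pos_train, pos_test = cut(positives)
    neg_train, neg_test = cut(negatives)
    return pos_train + neg_train, pos_test + neg_test

# Toy data: 10 positives (values < 100) and 10 negatives (values >= 100).
train_set, test_set = split_sample_set(list(range(10)), list(range(100, 110)))
```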
可选地，该存储介质存储的计算机可执行指令在被处理器执行时，所述对所述训练集和所述测试集进行第三预处理，得到目标训练集和目标测试集，包括：确定所述训练集中的各图像与所述摄像头的一一对应关系；根据确定的所述对应关系，从所述训练集中获取各所述摄像头所对应的图像，得到对应的训练子集；根据每个所述训练子集包括的图像，确定对应摄像头的转换参数；根据所述转换参数，对所述训练集和所述测试集中相应摄像头所对应的图像进行预设转换处理，得到目标训练集和目标测试集。Optionally, when the computer-executable instructions stored on the storage medium are executed by the processor, the performing third preprocessing on the training set and the test set to obtain the target training set and the target test set includes: determining a one-to-one correspondence between the images in the training set and the cameras; acquiring, from the training set according to the determined correspondence, the images corresponding to each camera to obtain a corresponding training subset; determining a conversion parameter of the corresponding camera according to the images included in each training subset; and performing preset conversion, according to the conversion parameters, on the images corresponding to the corresponding cameras in the training set and the test set to obtain the target training set and the target test set.
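The conversion parameters themselves are left abstract in the text. One common concrete choice, assumed here purely for illustration, is per-camera normalization statistics computed from each camera's training subset:

```python
# Group training images by camera, then derive a hypothetical conversion
# parameter per camera: the (mean, std) of that camera's pixel values,
# which a preset conversion could later use for normalization.

def per_camera_conversion_params(training_set):
    """training_set: list of (camera_id, pixel_list) pairs."""
    subsets = {}
    for camera_id, pixels in training_set:
        subsets.setdefault(camera_id, []).extend(pixels)
    params = {}
    for camera_id, values in subsets.items():
        mean = sum(values) / len(values)
        variance = sum((v - mean) ** 2 for v in values) / len(values)
        params[camera_id] = (mean, variance ** 0.5)
    return params

params = per_camera_conversion_params([("rgb", [1, 3]), ("ir", [2, 2, 2])])
```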
本说明书一个或多个实施例提供的存储介质存储的计算机可执行指令在被处理器执行时，基于预先训练的第一检测模型和第二检测模型，对待检测用户的生物特征图像组合进行检测；由此，将一致性检测与活体检测相结合，来确定待检测用户的活体检测是否存在受攻击的风险；既确保了对多目摄像头的有效成像区域的活体攻击进行检测、又确保了对多目摄像头的盲区的活体攻击进行检测，不仅解决了多目摄像头盲区的活体攻击问题，而且极大的提升了安全性。When executed by a processor, the computer-executable instructions stored on the storage medium provided by one or more embodiments of this specification detect the biometric image combination of the user to be detected based on the pre-trained first detection model and the pre-trained second detection model. Consistency detection is thereby combined with liveness detection to determine whether the liveness detection of the user to be detected is at risk of being attacked. This ensures that liveness attacks are detected both in the effective imaging area of the multi-lens camera and in its blind area, which not only solves the problem of liveness attacks in the blind area of the multi-lens camera but also greatly improves security.
需要说明的是，本说明书中关于存储介质的实施例与本说明书中关于风险检测方法的实施例基于同一发明构思，因此该实施例的具体实施可以参见前述对应的风险检测方法的实施，重复之处不再赘述。It should be noted that the embodiments of the storage medium in this specification and the embodiments of the risk detection method in this specification are based on the same inventive concept; for the specific implementation of these embodiments, reference may be made to the implementation of the corresponding risk detection method described above, and repeated details are not described again here.
上述对本说明书特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。The foregoing describes specific embodiments of this specification. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps described in the claims can be performed in a different order than in the embodiments and still achieve desired results. In addition, the processes depicted in the drawings do not necessarily require the specific order or sequential order shown in order to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
在20世纪90年代，对于一个技术的改进可以很明显地区分是硬件上的改进（例如，对二极管、晶体管、开关等电路结构的改进）还是软件上的改进（对于方法流程的改进）。然而，随着技术的发展，当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此，不能说一个方法流程的改进就不能用硬件实体模块来实现。例如，可编程逻辑器件（Programmable Logic Device，PLD）（例如现场可编程门阵列（Field Programmable Gate Array，FPGA））就是这样一种集成电路，其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上，而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且，如今，取代手工地制作集成电路芯片，这种编程也多半改用“逻辑编译器（logic compiler）”软件来实现，它与程序开发撰写时所用的软件编译器相类似，而要编译之前的原始代码也得用特定的编程语言来撰写，此称之为硬件描述语言（Hardware Description Language，HDL），而HDL也并非仅有一种，而是有许多种，如ABEL（Advanced Boolean Expression Language）、AHDL（Altera Hardware Description Language）、Confluence、CUPL（Cornell University Programming Language）、HDCal、JHDL（Java Hardware Description Language）、Lava、Lola、MyHDL、PALASM、RHDL（Ruby Hardware Description Language）等，目前最普遍使用的是VHDL（Very-High-Speed Integrated Circuit Hardware Description Language）与Verilog。本领域技术人员也应该清楚，只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中，就可以很容易得到实现该逻辑方法流程的硬件电路。In the 1990s, it was clear whether an improvement to a technology was an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology has developed, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (PLD) (for example, a field programmable gate array (FPGA)) is such an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs a digital system to "integrate" it on a single PLD, without needing a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of making integrated circuit chips by hand, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must also be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
控制器可以按任何适当的方式实现，例如，控制器可以采取例如微处理器或处理器以及存储可由该（微）处理器执行的计算机可读程序代码（例如软件或固件）的计算机可读介质、逻辑门、开关、专用集成电路（Application Specific Integrated Circuit，ASIC）、可编程逻辑控制器和嵌入微控制器的形式，控制器的例子包括但不限于以下微控制器：ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320，存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道，除了以纯计算机可读程序代码方式实现控制器以外，完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件，而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至，可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component. Or, the means for implementing various functions can even be regarded both as software modules implementing the method and as structures within the hardware component.
上述实施例阐明的系统、装置、模块或单元，具体可以由计算机芯片或实体实现，或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的，计算机例如可以为个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。The systems, apparatuses, modules, or units illustrated in the above embodiments may be specifically implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本说明书实施例时可以把各单元的功能在同一个或多个软件和/或硬件中实现。For the convenience of description, when describing the above device, the functions are divided into various units and described separately. Of course, when implementing the embodiments of this specification, the functions of each unit can be implemented in the same one or more software and/or hardware.
本领域内的技术人员应明白,本说明书一个或多个实施例可提供为方法、系统或计算机程序产品。因此,本说明书一个或多个实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本说明书可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art should understand that one or more embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this specification can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
本说明书是参照根据本说明书实施例的方法、设备（系统）、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器，使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。This specification is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device. The device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上，使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理，从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。In a typical configuration, the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。The memory may include non-permanent memory in computer readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer readable media.
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体，可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括，但不限于相变内存（PRAM）、静态随机存取存储器（SRAM）、动态随机存取存储器（DRAM）、其他类型的随机存取存储器（RAM）、只读存储器（ROM）、电可擦除可编程只读存储器（EEPROM）、快闪记忆体或其他内存技术、只读光盘只读存储器（CD-ROM）、数字多功能光盘（DVD）或其他光学存储、磁盒式磁带、磁带磁盘存储或其他磁性存储设备或任何其他非传输介质，可用于存储可以被计算设备访问的信息。按照本文中的界定，计算机可读介质不包括暂存电脑可读媒体（transitory media），如调制的数据信号和载波。Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
还需要说明的是，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下，由语句“包括一个……”限定的要素，并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
本说明书一个或多个实施例可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本说明书的一个或多个实施例,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。One or more embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types. One or more embodiments of this specification can also be practiced in distributed computing environments. In these distributed computing environments, tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media including storage devices.
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。The various embodiments in this specification are described in a progressive manner, and the same or similar parts between the various embodiments can be referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, as for the system embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
以上所述仅为本文件的实施例而已,并不用于限制本文件。对于本领域技术人员来说,本文件可以有各种更改和变化。凡在本文件的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本文件的权利要求范围之内。The above descriptions are only examples of this document, and are not intended to limit this document. For those skilled in the art, this document can have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of this document shall be included in the scope of the claims of this document.

Claims (16)

  1. 一种风险检测方法,包括:A risk detection method, including:
    获取待检测用户的生物特征图像组合,其中,所述生物特征图像组合包括:由多目摄像头对所述待检测用户的指定身体部位进行单次拍摄所得的多个图像;Acquiring a biometric image combination of the user to be detected, where the biometric image combination includes: multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera;
    通过预先训练的第一检测模型,对所述多个图像进行一致性检测,得到第一检测结果;以及,通过预先训练的第二检测模型,对所述多个图像进行活体检测,得到第二检测结果;Perform consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; and, through a pre-trained second detection model, perform a living detection on the multiple images to obtain a second Test results;
    根据所述第一检测结果和所述第二检测结果,确定所述待检测用户的活体检测是否存在受攻击的风险。According to the first detection result and the second detection result, it is determined whether the live detection of the to-be-detected user is at risk of being attacked.
  2. 根据权利要求1所述的方法,所述通过预先训练的第一检测模型,对所述多个图像进行一致性检测,包括:The method according to claim 1, wherein said detecting the consistency of the plurality of images through the pre-trained first detection model comprises:
    对所述多个图像进行第一预处理,得到已处理图像;Performing first preprocessing on the plurality of images to obtain processed images;
    将所述已处理图像输入至所述第一检测模型,以基于所述第一检测模型对所述已处理图像进行一致性检测。The processed image is input to the first detection model to perform consistency detection on the processed image based on the first detection model.
  3. 根据权利要求2所述的方法,所述对所述多个图像进行第一预处理,得到已处理图像,包括:The method according to claim 2, wherein the performing the first preprocessing on the plurality of images to obtain the processed images includes:
    确定所述多个图像与所述多目摄像头中的多个摄像头的一一对应关系;Determining a one-to-one correspondence between the multiple images and the multiple cameras in the multi-camera;
    获取所述多目摄像头的转换矩阵和所述多个摄像头中每个摄像头对应的转换参数；其中，所述转换矩阵为训练所述第一检测模型之前，对所述多个摄像头进行标定处理而得；所述转换参数为训练所述第一检测模型时，对待训练的样本集进行第三预处理而得；acquiring a conversion matrix of the multi-lens camera and a conversion parameter corresponding to each camera of the multiple cameras, wherein the conversion matrix is obtained by calibrating the multiple cameras before the first detection model is trained, and the conversion parameter is obtained by performing third preprocessing on a sample set to be trained when the first detection model is trained;
    根据所述转换矩阵,对所述多个图像进行空间对齐处理;Performing spatial alignment processing on the plurality of images according to the conversion matrix;
    根据所述转换参数,对对应的所述空间对齐处理后的图像进行预设转换处理;Performing preset conversion processing on the corresponding spatially aligned image according to the conversion parameter;
    将所述预设转换处理后的图像确定为已处理图像。The image after the preset conversion processing is determined as a processed image.
  4. 根据权利要求1所述的方法,所述根据所述第一检测结果和所述第二检测结果,确定所述待检测用户的活体检测是否存在受攻击的风险,包括:The method according to claim 1, wherein the determining whether the live detection of the user to be detected is at risk of being attacked according to the first detection result and the second detection result comprises:
    根据预设的加权系数,对所述第一检测结果和所述第二检测结果进行加权计算,得到计算结果;Performing a weighted calculation on the first detection result and the second detection result according to a preset weighting coefficient to obtain a calculation result;
    若所述计算结果大于预设的第一阈值,则确定所述待检测用户的活体检测不存在受攻击的风险;If the calculation result is greater than the preset first threshold, it is determined that there is no risk of being attacked in the live detection of the user to be detected;
    若所述计算结果不大于预设的第一阈值,则确定所述待检测用户的活体检测存在受攻击的风险。If the calculation result is not greater than the preset first threshold, it is determined that the live detection of the to-be-detected user is at risk of being attacked.
  5. 根据权利要求1所述的方法,所述根据所述第一检测结果和所述第二检测结果,确定所述待检测用户的活体检测是否存在受攻击的风险,包括:The method according to claim 1, wherein the determining whether the live detection of the user to be detected is at risk of being attacked according to the first detection result and the second detection result comprises:
    若所述第一检测结果大于预设的第二阈值、且所述第二检测结果大于预设的第三阈值,则确定所述待检测用户的活体检测不存在受攻击的风险。If the first detection result is greater than the preset second threshold and the second detection result is greater than the preset third threshold, it is determined that there is no risk of attack in the live detection of the user to be detected.
  6. 根据权利要求1所述的方法，所述多目摄像头为双目摄像头。The method according to claim 1, wherein the multi-lens camera is a binocular camera.
  7. 根据权利要求1-6中任一项所述的方法,所述获取待检测的用户的生物特征图像组合之前,还包括:The method according to any one of claims 1-6, before said acquiring a combination of biometric images of the user to be detected, further comprising:
    获取所述多目摄像头采集的待训练的生物特征图像组合;Acquiring a combination of biometric images to be trained collected by the multi-lens camera;
    对所述待训练的生物特征图像组合进行第二预处理,得到待训练的样本集;Performing second preprocessing on the biometric image combination to be trained to obtain a sample set to be trained;
    基于所述样本集训练所述第一检测模型。Training the first detection model based on the sample set.
  8. 根据权利要求7所述的方法,所述对所述待训练的生物特征图像组合进行第二预处理之前,还包括:The method according to claim 7, before the second preprocessing is performed on the biometric image combination to be trained, further comprising:
    对所述多目摄像头中的多个摄像头进行标定处理,得到转换矩阵;Performing calibration processing on multiple cameras in the multi-camera to obtain a conversion matrix;
    所述对所述待训练的生物特征图像组合进行第二预处理,得到待训练的样本集,包括:The second preprocessing performed on the combination of biometric images to be trained to obtain a sample set to be trained includes:
    根据所述转换矩阵,对所述待训练的生物特征图像组合中的图像进行空间对齐处理;Performing spatial alignment processing on the images in the biometric image combination to be trained according to the conversion matrix;
    将所述空间对齐处理后的生物特征图像组合作为样本，并将所述样本划分为正样本和负样本；其中，所述正样本中的各图像一致，所述负样本中的各图像不一致；taking the spatially aligned biometric image combinations as samples, and dividing the samples into positive samples and negative samples, wherein the images in a positive sample are consistent with one another and the images in a negative sample are inconsistent;
    将所述正样本和所述负样本确定为待训练的样本集。The positive sample and the negative sample are determined as a sample set to be trained.
  9. 根据权利要求8所述的方法,所述基于所述样本集训练所述第一检测模型,包括:The method according to claim 8, wherein the training of the first detection model based on the sample set comprises:
    将所述样本集划分为训练集和测试集;其中,所述训练集和所述测试集包括的所述正样本与所述负样本的比例相同;Dividing the sample set into a training set and a test set; wherein the training set and the test set include the same proportion of the positive sample and the negative sample;
    对所述训练集和所述测试集进行第三预处理,得到目标训练集和目标测试集;Performing a third preprocessing on the training set and the test set to obtain a target training set and a target test set;
    基于所述目标训练集和所述目标测试集进行训练操作,得到第一检测模型。Perform a training operation based on the target training set and the target test set to obtain a first detection model.
  10. 根据权利要求9所述的方法,所述对所述训练集和所述测试集进行第三预处理,得到目标训练集和目标测试集,包括:The method according to claim 9, wherein said performing third preprocessing on said training set and said test set to obtain a target training set and a target test set comprises:
    确定所述训练集中的各图像与所述摄像头的一一对应关系;Determine a one-to-one correspondence between each image in the training set and the camera;
    根据确定的所述对应关系,从所述训练集中获取各所述摄像头所对应的图像,得到对应的训练子集;Acquiring images corresponding to each of the cameras from the training set according to the determined correspondence relationship to obtain a corresponding training subset;
    根据每个所述训练子集包括的图像,确定对应摄像头的转换参数;Determine the conversion parameters of the corresponding camera according to the images included in each of the training subsets;
    根据所述转换参数,对所述训练集和所述测试集中相应摄像头所对应的图像进行预设转换处理,得到目标训练集和目标测试集。According to the conversion parameters, preset conversion processing is performed on the images corresponding to the corresponding cameras in the training set and the test set to obtain a target training set and a target test set.
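The claim leaves the "conversion parameters" and "preset conversion" abstract. One plausible instantiation is per-camera intensity normalisation: the mean and standard deviation are estimated from each camera's training subset only, then applied to that camera's images in both the training and test sets. A sketch under that assumption, with images as nested lists of grey values:

```python
def fit_camera_params(train_subset):
    """Estimate mean/std over all pixels of one camera's training images."""
    pixels = [p for img in train_subset for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var ** 0.5

def apply_params(image, mean, std):
    """Normalise one image with the camera's fitted parameters."""
    eps = 1e-8  # guard against a zero std on flat images
    return [[(p - mean) / (std + eps) for p in row] for row in image]
```

Fitting on the training subset alone (and merely applying to the test set) avoids leaking test statistics into training, which is the usual reason for this two-step design.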
  11. 一种风险检测装置，包括：A risk detection apparatus, comprising:
    获取模块,其获取待检测的用户的生物特征图像组合,其中,所述生物特征图像组合包括:由多目摄像头对所述待检测用户的指定身体部位进行单次拍摄所得的多个图像;An acquiring module that acquires a combination of biometric images of the user to be detected, wherein the combination of biometric images includes: multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera;
    第一训练模块，其通过预先训练的第一检测模型，对所述多个图像进行一致性检测，得到第一检测结果；以及，通过预先训练的第二检测模型，对所述多个图像进行活体检测，得到第二检测结果；A first training module, which performs consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result, and performs liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result;
    确定模块,其根据所述第一检测结果和所述第二检测结果,确定是否存在攻击风险。A determining module, which determines whether there is an attack risk based on the first detection result and the second detection result.
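The determining module combines the consistency result and the liveness result into a single risk decision. The claim does not fix the fusion policy; one simple illustrative rule (an assumption, not the patent's stated method) is to flag an attack risk whenever either check fails its threshold:

```python
def is_attack_risk(consistency_score, liveness_score,
                   consistency_thr=0.5, liveness_thr=0.5):
    """Flag a risk unless the images are both mutually consistent and live.

    Scores in [0, 1]; thresholds are illustrative defaults.
    """
    consistent = consistency_score >= consistency_thr
    live = liveness_score >= liveness_thr
    return not (consistent and live)
```

Under this policy an injection attack that replaces only one camera's feed fails the consistency check, while a presented photo or mask fails the liveness check — either alone is enough to raise the risk flag.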
  12. 根据权利要求11所述的装置，The apparatus according to claim 11,
    所述第一训练模块,对所述多个图像进行第一预处理,得到已处理图像;以及,The first training module performs first preprocessing on the multiple images to obtain processed images; and,
    将所述已处理图像输入至所述第一检测模型,以基于所述第一检测模型对所述已处理图像进行一致性检测。The processed image is input to the first detection model to perform consistency detection on the processed image based on the first detection model.
  13. 根据权利要求11或12所述的装置，所述装置还包括：训练模块；The apparatus according to claim 11 or 12, further comprising: a training module;
    所述训练模块，获取所述多目摄像头采集的待训练的生物特征图像组合；以及，The training module obtains the biometric image combinations to be trained collected by the multi-lens camera; and,
    对所述待训练的生物特征图像组合进行第二预处理,得到待训练的样本集;Performing second preprocessing on the biometric image combination to be trained to obtain a sample set to be trained;
    基于所述样本集训练所述第一检测模型。Training the first detection model based on the sample set.
  14. 根据权利要求13所述的装置，所述装置还包括：标定模块；The apparatus according to claim 13, further comprising: a calibration module;
    所述标定模块,其对所述多目摄像头中的多个摄像头进行标定处理,得到转换矩阵;The calibration module, which performs calibration processing on a plurality of cameras in the multi-lens camera to obtain a conversion matrix;
    所述训练模块,根据所述转换矩阵,对所述待训练的生物特征图像组合中的图像进行空间对齐处理;以及,The training module performs spatial alignment processing on the images in the biometric image combination to be trained according to the conversion matrix; and,
    将所述空间对齐处理后的生物特征图像组合作为样本，并将所述样本划分为正样本和负样本；其中，所述正样本中的各图像一致，所述负样本中的各图像不一致；Taking the biometric image combinations after the spatial alignment processing as samples, and dividing the samples into positive samples and negative samples; wherein the images in a positive sample are consistent with one another, and the images in a negative sample are not;
    将所述正样本和所述负样本确定为待训练的样本集。The positive sample and the negative sample are determined as a sample set to be trained.
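The calibration module's conversion matrix can be estimated from corresponding points seen by two cameras. As an illustrative stand-in for full stereo calibration (a production system would more likely use a library routine such as OpenCV's `stereoCalibrate`), here is a pure-Python solve for a 2x3 affine conversion from three point correspondences, using Cramer's rule:

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def estimate_affine(src, dst):
    """Solve for the 2x3 affine map sending three src points to dst points."""
    A = [[x, y, 1.0] for (x, y) in src]
    d = _det3(A)  # non-zero iff the three points are not collinear

    def solve(rhs):
        # Cramer's rule: replace each column of A with rhs in turn.
        cols = []
        for j in range(3):
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = rhs[i]
            cols.append(_det3(M) / d)
        return cols

    row_x = solve([p[0] for p in dst])  # a, b, c in x' = a*x + b*y + c
    row_y = solve([p[1] for p in dst])  # d, e, f in y' = d*x + e*y + f
    return [row_x, row_y]
```

Mapping (0,0)→(1,2), (1,0)→(2,2), (0,1)→(1,3) recovers a pure translation by (1, 2). A real multi-lens rig would estimate a full homography or stereo extrinsics from many correspondences; the three-point affine case just keeps the linear algebra small enough to show in place.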
  15. 一种风险检测设备，包括：A risk detection device, comprising:
    处理器;以及,Processor; and,
    被安排成存储计算机可执行指令的存储器,所述计算机可执行指令在被执行时使所述处理器:A memory arranged to store computer-executable instructions which, when executed, cause the processor to:
    获取待检测用户的生物特征图像组合,其中,所述生物特征图像组合包括:由多目摄像头对所述待检测用户的指定身体部位进行单次拍摄所得的多个图像;Acquiring a biometric image combination of the user to be detected, where the biometric image combination includes: multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera;
    通过预先训练的第一检测模型，对所述多个图像进行一致性检测，得到第一检测结果；以及，通过预先训练的第二检测模型，对所述多个图像进行活体检测，得到第二检测结果；Performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; and performing liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result;
    根据所述第一检测结果和所述第二检测结果，确定所述待检测用户的活体检测是否存在受攻击的风险。According to the first detection result and the second detection result, determining whether the liveness detection of the user to be detected is at risk of being attacked.
  16. 一种存储介质，用于存储计算机可执行指令，所述计算机可执行指令在被执行时实现以下流程：A storage medium storing computer-executable instructions which, when executed, implement the following process:
    获取待检测用户的生物特征图像组合,其中,所述生物特征图像组合包括:由多目摄像头对所述待检测用户的指定身体部位进行单次拍摄所得的多个图像;Acquiring a biometric image combination of the user to be detected, where the biometric image combination includes: multiple images obtained by a single shot of a designated body part of the user to be detected by a multi-lens camera;
    通过预先训练的第一检测模型，对所述多个图像进行一致性检测，得到第一检测结果；以及，通过预先训练的第二检测模型，对所述多个图像进行活体检测，得到第二检测结果；Performing consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; and performing liveness detection on the multiple images through a pre-trained second detection model to obtain a second detection result;
    根据所述第一检测结果和所述第二检测结果，确定所述待检测用户的活体检测是否存在受攻击的风险。According to the first detection result and the second detection result, determining whether the liveness detection of the user to be detected is at risk of being attacked.
PCT/CN2020/124141 2019-12-13 2020-10-27 Risk detection method, apparatus and device WO2021114916A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911286075.X 2019-12-13
CN201911286075.XA CN111126216A (en) 2019-12-13 2019-12-13 Risk detection method, device and equipment

Publications (1)

Publication Number Publication Date
WO2021114916A1 true WO2021114916A1 (en) 2021-06-17

Family

ID=70498894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124141 WO2021114916A1 (en) 2019-12-13 2020-10-27 Risk detection method, apparatus and device

Country Status (2)

Country Link
CN (1) CN111126216A (en)
WO (1) WO2021114916A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569873A (en) * 2021-08-19 2021-10-29 支付宝(杭州)信息技术有限公司 Image processing method, device and equipment
CN114650186A (en) * 2022-04-22 2022-06-21 北京三快在线科技有限公司 Anomaly detection method and detection device thereof
CN115567371A (en) * 2022-11-16 2023-01-03 支付宝(杭州)信息技术有限公司 Abnormity detection method, device, equipment and readable storage medium

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN111126216A (en) * 2019-12-13 2020-05-08 支付宝(杭州)信息技术有限公司 Risk detection method, device and equipment
CN111539490B (en) * 2020-06-19 2020-10-16 支付宝(杭州)信息技术有限公司 Business model training method and device
CN112084915A (en) * 2020-08-31 2020-12-15 支付宝(杭州)信息技术有限公司 Model training method, living body detection method, device and electronic equipment
CN113569708A (en) * 2021-07-23 2021-10-29 北京百度网讯科技有限公司 Living body recognition method, living body recognition device, electronic apparatus, and storage medium
CN113850214A (en) * 2021-09-29 2021-12-28 支付宝(杭州)信息技术有限公司 Injection attack identification method and device for living body detection

Citations (7)

Publication number Priority date Publication date Assignee Title
US9202105B1 (en) * 2012-01-13 2015-12-01 Amazon Technologies, Inc. Image analysis for user authentication
CN106372601A (en) * 2016-08-31 2017-02-01 上海依图网络科技有限公司 In vivo detection method based on infrared visible binocular image and device
CN108446612A (en) * 2018-03-07 2018-08-24 腾讯科技(深圳)有限公司 vehicle identification method, device and storage medium
CN109600548A (en) * 2018-11-30 2019-04-09 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110298230A (en) * 2019-05-06 2019-10-01 深圳市华付信息技术有限公司 Silent biopsy method, device, computer equipment and storage medium
CN110363087A (en) * 2019-06-12 2019-10-22 苏宁云计算有限公司 A kind of Long baselines binocular human face in-vivo detection method and system
CN111126216A (en) * 2019-12-13 2020-05-08 支付宝(杭州)信息技术有限公司 Risk detection method, device and equipment

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US7539330B2 (en) * 2004-06-01 2009-05-26 Lumidigm, Inc. Multispectral liveness determination
CN101110102A (en) * 2006-07-20 2008-01-23 中国科学院自动化研究所 Game scene and role control method based on fists of player
CN101393599B (en) * 2007-09-19 2012-02-08 中国科学院自动化研究所 Game role control method based on human face expression
CN101866427A (en) * 2010-07-06 2010-10-20 西安电子科技大学 Method for detecting and classifying fabric defects
CN103077521B (en) * 2013-01-08 2015-08-05 天津大学 A kind of area-of-interest exacting method for video monitoring
CN106897675B (en) * 2017-01-24 2021-08-17 上海交通大学 Face living body detection method combining binocular vision depth characteristic and apparent characteristic
CN110059644A (en) * 2019-04-23 2019-07-26 杭州智趣智能信息技术有限公司 A kind of biopsy method based on facial image, system and associated component


Cited By (5)

Publication number Priority date Publication date Assignee Title
CN113569873A (en) * 2021-08-19 2021-10-29 支付宝(杭州)信息技术有限公司 Image processing method, device and equipment
CN113569873B (en) * 2021-08-19 2024-03-29 支付宝(杭州)信息技术有限公司 Image processing method, device and equipment
CN114650186A (en) * 2022-04-22 2022-06-21 北京三快在线科技有限公司 Anomaly detection method and detection device thereof
CN115567371A (en) * 2022-11-16 2023-01-03 支付宝(杭州)信息技术有限公司 Abnormity detection method, device, equipment and readable storage medium
CN115567371B (en) * 2022-11-16 2023-03-10 支付宝(杭州)信息技术有限公司 Abnormity detection method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN111126216A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
WO2021114916A1 (en) Risk detection method, apparatus and device
TWI714834B (en) Human face live detection method, device and electronic equipment
US10691923B2 (en) Face anti-spoofing using spatial and temporal convolutional neural network analysis
Fan et al. Identifying first-person camera wearers in third-person videos
WO2019137216A1 (en) Image filtering method and apparatus
US11132575B2 (en) Combinatorial shape regression for face alignment in images
WO2015149534A1 (en) Gabor binary pattern-based face recognition method and device
TW202011260A (en) Liveness detection method, apparatus and computer-readable storage medium
US9323989B2 (en) Tracking device
WO2021046715A1 (en) Exposure time calculation method, device, and storage medium
WO2018063608A1 (en) Place recognition algorithm
Loke et al. Indian sign language converter system using an android app
TWI719472B (en) Image acquisition method, device and system, electronic equipment and computer readable storage medium
TW201937400A (en) A living body detection method, device and apparatus
EP3349359A1 (en) Compressive sensing capturing device and method
WO2020252740A1 (en) Convolutional neural network, face anti-spoofing method, processor chip, and electronic device
WO2019015645A1 (en) Imaging processing method and device
WO2021135639A1 (en) Living body detection method and apparatus
US20190279022A1 (en) Object recognition method and device thereof
US11295416B2 (en) Method for picture processing, computer-readable storage medium, and electronic device
CN111160251B (en) Living body identification method and device
WO2022156214A1 (en) Liveness detection method and apparatus
EP3647997A1 (en) Person searching method and apparatus and image processing device
CN112102404B (en) Object detection tracking method and device and head-mounted display equipment
WO2018113206A1 (en) Image processing method and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20898539

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20898539

Country of ref document: EP

Kind code of ref document: A1