WO2022206319A1 - Image processing method, apparatus, device, storage medium and computer program product - Google Patents


Info

Publication number
WO2022206319A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
detection
detected
sample
detection result
Prior art date
Application number
PCT/CN2022/079872
Other languages
English (en)
French (fr)
Inventor
尹邦杰
姚太平
吴双
孟嘉
丁守鸿
李季檩
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2022206319A1
Priority to US17/989,254 (published as US20230086552A1)

Classifications

    • G06V40/1382 Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger
    • G06V40/1388 Detecting the live character of the finger using image processing
    • G06V40/45 Detection of the body part being alive
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/803 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of input or preprocessed data
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G06V40/1347 Preprocessing; Feature extraction
    • G06V40/1359 Extracting features related to ridge properties; Determining the fingerprint type, e.g. whorl or loop
    • G06V40/1365 Matching; Classification
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features

Definitions

  • The present application relates to the technical field of artificial intelligence, and in particular to an image processing method, apparatus, computer device, storage medium and computer program product.
  • Liveness detection technology can use biometric information to verify whether the user is a real living body, and can effectively resist common attacks such as photos, face swaps, masks, and occlusions. Liveness detection includes face liveness detection, palm print liveness detection, iris liveness detection, and many more.
  • However, the texture features of some attack images are subtle, and it is difficult to distinguish them from real images by texture even with the naked eye, so a liveness detection method that relies only on the texture information of a single image generalizes poorly across different types of attack images and has low detection accuracy.
  • An image processing method comprising:
  • acquiring an image to be detected, the image to be detected including biological features of an object to be detected;
  • fusing the global frequency domain map corresponding to the image to be detected with the image to be detected to obtain a global fusion image, and performing liveness detection on the global fusion image to obtain a first detection result corresponding to the image to be detected;
  • when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determining that the image to be detected has failed liveness detection;
  • when the first detection result indicates that the image to be detected is not a screen-recaptured image, obtaining a biometric image based on the biological features in the image to be detected, fusing the local frequency domain map corresponding to the biometric image with the biometric image to obtain a local fusion image, performing liveness detection on the local fusion image to obtain a second detection result corresponding to the image to be detected, and determining a liveness detection result corresponding to the image to be detected according to the first detection result and the second detection result.
  • A method for processing a liveness detection model, comprising:
  • obtaining a first sample image in a first sample set, fusing the global frequency domain map corresponding to the first sample image with the first sample image to obtain a global fusion image, performing liveness detection on the global fusion image through a neural-network-based first model to obtain a first detection result corresponding to the first sample image, determining a first loss based on the first detection result and the labeled category of the first sample image, and, after adjusting the model parameters of the first model according to the first loss, returning to the step of obtaining a first sample image in the first sample set to continue training until the global detection network is obtained when training ends;
  • obtaining a second sample image in a second sample set, obtaining a sample biometric image according to the biological features of the second sample image, fusing the local frequency domain map corresponding to the sample biometric image with the sample biometric image to obtain a local fusion image, performing liveness detection on the local fusion image through a neural-network-based second model to obtain a second detection result corresponding to the second sample image, determining a second loss based on the second detection result and the labeled category of the second sample image, and, after adjusting the model parameters of the second model according to the second loss, returning to the step of obtaining a second sample image in the second sample set to continue training until the local detection network is obtained when training ends;
  • obtaining, according to the global detection network and the local detection network, a liveness detection model for performing liveness detection on images.
  • An image processing device comprising:
  • an acquisition module configured to acquire an image to be detected, where the image to be detected includes biological features of the object to be detected;
  • a global detection module, configured to fuse the global frequency domain map corresponding to the image to be detected with the image to be detected to obtain a global fusion image, perform liveness detection on the global fusion image to obtain a first detection result corresponding to the image to be detected, and, when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determine that the image to be detected has failed liveness detection;
  • a local detection module, configured to, when the first detection result indicates that the image to be detected is not a screen-recaptured image, obtain a biometric image based on the biological features in the image to be detected, fuse the local frequency domain map corresponding to the biometric image with the biometric image to obtain a local fusion image, and perform liveness detection on the local fusion image to obtain a second detection result corresponding to the image to be detected;
  • a determination module, configured to determine a liveness detection result corresponding to the image to be detected according to the first detection result and the second detection result.
  • A processing apparatus for a liveness detection model, comprising:
  • a global detection network acquisition module, configured to acquire a first sample image in a first sample set, fuse the global frequency domain map corresponding to the first sample image with the first sample image to obtain a global fusion image, perform liveness detection on the global fusion image through a neural-network-based first model to obtain a first detection result corresponding to the first sample image, determine a first loss based on the first detection result and the labeled category of the first sample image, and, after adjusting the model parameters of the first model according to the first loss, return to the step of acquiring a first sample image in the first sample set to continue training until the global detection network is obtained when training ends;
  • a local detection network acquisition module, configured to acquire a second sample image in a second sample set, obtain a sample biometric image according to the biological features of the second sample image, fuse the local frequency domain map corresponding to the sample biometric image with the sample biometric image to obtain a local fusion image, perform liveness detection on the local fusion image through a neural-network-based second model to obtain a second detection result corresponding to the second sample image, determine a second loss based on the second detection result and the labeled category of the second sample image, and, after adjusting the model parameters of the second model according to the second loss, return to the step of acquiring a second sample image in the second sample set to continue training until the local detection network is obtained when training ends;
  • a detection model obtaining module, configured to obtain, according to the global detection network and the local detection network, a liveness detection model for performing liveness detection on images.
  • A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to implement the steps of the above image processing method or the above method for processing a liveness detection model.
  • One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to implement the steps of the above image processing method or the above method for processing a liveness detection model.
  • A computer program product, comprising computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the storage medium, and executing them causes the computer device to perform the steps of the above image processing method or the above method for processing a liveness detection model.
  • FIG. 1 is a diagram of an application environment of the image processing method in one embodiment;
  • FIG. 2 is a schematic flowchart of an image processing method in one embodiment;
  • FIG. 3 is a schematic flowchart of an image processing method in another embodiment;
  • FIG. 4 is a schematic flowchart of obtaining a first detection result for the palmprint in the image to be detected in one embodiment;
  • FIG. 5 is a schematic flowchart of obtaining a second detection result for the palmprint in the image to be detected in one embodiment;
  • FIG. 6 is a schematic framework diagram of the palmprint liveness detection process in one embodiment;
  • FIG. 7 is a schematic flowchart of the training step of the global detection network in one embodiment;
  • FIG. 8 is a schematic flowchart of the training step of the local detection network in one embodiment;
  • FIG. 9 is a schematic flowchart of a method for processing a liveness detection model in one embodiment;
  • FIG. 10 is a schematic framework diagram of the training process of a liveness detection model in a specific embodiment;
  • FIG. 11 is a structural block diagram of an image processing apparatus in one embodiment;
  • FIG. 12 is a structural block diagram of an apparatus for processing a liveness detection model in one embodiment;
  • FIG. 13 is a diagram of the internal structure of a computer device in one embodiment.
  • The image processing method and the liveness detection model processing method provided in this application realize liveness detection using computer vision and machine learning techniques from the field of artificial intelligence (AI).
  • The images to be detected mentioned in the embodiments of the present application are images to be subjected to liveness detection.
  • Liveness detection is the process of determining whether the object to be detected exhibits real biological characteristics.
  • the image to be detected includes the biological features of the object to be detected, which can uniquely identify the object to be detected, including physiological features or behavioral features.
  • Physiological features include palm print, fingerprint, face, iris, hand shape, retina, auricle, etc.
  • Behavioral features include gait, handwriting, etc.
  • the biological feature of the object to be detected may be any one or more of the above biological features.
  • The computer device acquires an image to be detected, the image to be detected including biological features of an object to be detected; performs liveness detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain a first detection result corresponding to the image to be detected; when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determines that the image to be detected has failed liveness detection; when the first detection result indicates that the image to be detected is not a screen-recaptured image, obtains a biometric image based on the biological features in the image to be detected, performs liveness detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the biometric image with the biometric image, to obtain a second detection result corresponding to the image to be detected, and determines the liveness detection result corresponding to the image to be detected according to the first detection result and the second detection result.
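To make this two-stage flow concrete, here is a minimal sketch in Python, assuming hypothetical `detect_global`, `detect_local`, and `crop_biometric` callables and using the convention (described later in the text) that each detector outputs the probability of the image being a recapture; the threshold values are illustrative, not mandated by the patent.

```python
# Minimal sketch of the two-stage cascade described above. The helper
# callables and thresholds are assumptions for illustration only.
def liveness_detect(image, detect_global, detect_local, crop_biometric,
                    screen_thresh=0.4, paper_thresh=0.4):
    """Return True if the image passes liveness detection."""
    # Stage 1: global check on the whole fusion image, aimed at
    # screen-recaptured images.
    p_screen = detect_global(image)      # P(screen-recaptured)
    if p_screen >= screen_thresh:
        return False                     # fail liveness detection directly
    # Stage 2: local check on the cropped biometric region, aimed at
    # paper-recaptured images.
    region = crop_biometric(image)
    p_paper = detect_local(region)       # P(paper-recaptured)
    return p_paper < paper_thresh
```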
  • the image processing method provided in this application can be applied to the application environment shown in FIG. 1 .
  • The terminal 102 communicates with the liveness detection server 104 through a network. Specifically, the terminal 102 may acquire the image to be detected, which includes the palmprint of the object to be detected; perform liveness detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain a first detection result for the palmprint in the image to be detected; when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determine that the image to be detected has failed liveness detection; and when the first detection result indicates that the image to be detected is not a screen-recaptured image, obtain the palmprint image based on the palmprint region in the image to be detected, perform liveness detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image to obtain a second detection result, and determine the liveness detection result according to the first detection result and the second detection result.
  • Alternatively, the terminal 102 can acquire the image to be detected and send it to the liveness detection server 104. The liveness detection server 104 performs liveness detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, and obtains a first detection result for the palmprint in the image to be detected; when the first detection result indicates that the image to be detected is a screen-recaptured image, it directly determines that the image to be detected has failed liveness detection; when the first detection result indicates that the image to be detected is not a screen-recaptured image, it obtains the palmprint image based on the palmprint region in the image to be detected, performs liveness detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, obtains a second detection result for the palmprint in the image to be detected, and determines the liveness detection result according to the first detection result and the second detection result.
  • the terminal 102 can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, portable wearable devices, vehicle-mounted terminals, etc.
  • The liveness detection server 104 can be implemented by an independent server or a server cluster composed of multiple servers.
  • In some embodiments, the liveness detection model based on an artificial neural network can be obtained by training on a computer device. The computer device acquires a first sample image in a first sample set, performs liveness detection through a neural-network-based first model on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, and obtains a first detection result for the palmprint in the first sample image. After adjusting the model parameters of the first model according to a first loss determined from the first detection result and the labeled category of the first sample image, it returns to the step of obtaining a first sample image in the first sample set and continues training until the global detection network is obtained at the end of training. Likewise, the computer device acquires a second sample image in a second sample set, obtains a sample palmprint image according to the palmprint region of the second sample image, performs liveness detection through a neural-network-based second model on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, obtains a second detection result for the palmprint in the second sample image, and adjusts the second model according to a second loss determined from the second detection result and the labeled category of the second sample image, continuing training until the local detection network is obtained.
  • The computer device used to train the liveness detection model can be a terminal or a server.
  • the palmprint verification device when the palmprint verification device needs to authenticate the user, the palmprint image of the user can be collected, and the global detection network corresponding to the image to be detected can be detected by the global detection network in the trained living body detection model.
  • the global fusion image obtained by fusing the frequency domain map and the image to be detected is used for in vivo detection, and the first detection result of the palmprint in the image to be detected is obtained.
  • the local detection network in the trained in vivo detection model is based on the palmprint in the image to be detected.
  • the palmprint image is obtained in the region; the local fused image obtained by fusing the local frequency domain map corresponding to the palmprint image and the palmprint image is subjected to in vivo detection, and the second detection result of the palmprint in the image to be detected is obtained.
  • the first detection result With the second detection result, determine the in vivo detection result of the palmprint of the object to be detected; when the in vivo detection result indicates that the palmprint in the to-be-detected image is a living palmprint, then the palmprint in the to-be-detected image is identified by palmprint identification to obtain Palmprint recognition result; perform identity authentication on the object to be detected according to the palmprint recognition result.
  • the palmprint verification device may be a mobile phone, a palmprint verification machine, or other devices with an image acquisition device.
  • In one embodiment, an image processing method is provided. Taking the method being applied to the computer device (terminal 102 or server 104) in FIG. 1 as an example, it includes the following steps:
  • Step 202: Acquire an image to be detected, where the image to be detected includes the palmprint of the object to be detected.
  • The image to be detected is an image to be subjected to palmprint liveness detection and includes the palmprint of the object to be detected. The palmprint in the image to be detected may be a part of the palmprint of the object to be detected, or it may be the entire palmprint. Palmprint liveness detection is a method of identifying, according to the palmprint in an image, whether the image was collected from a real, living object to be detected. It is usually used to determine whether the object to be detected is a real living body, is often used in identity verification scenarios, and can effectively resist attacks with recaptured images.
  • In some embodiments, the computer device can capture an image in real time through a local image acquisition device and use the captured image as the image to be detected, for example, the palm image of the user collected by the terminal 102 in FIG. 1. The image to be detected can also be an image exported locally from the computer device, such as a previously taken or stored photo that includes a palmprint; it is understandable, however, that locally exported images usually do not pass palmprint liveness detection.
  • The image to be detected collected by the computer device may focus on the external features of the palm, such as palm lines and wrinkles, or on the internal structure and features of the palm, such as veins, bones, and soft tissue.
  • the to-be-detected image of the living palm print may be obtained by non-contact acquisition.
  • the user stretches out the palm in the air, and the computer device scans and collects the palm image through the camera to obtain the to-be-detected image.
  • the image to be detected may be acquired by contact.
  • For example, the palmprint verification device is provided with a palmprint acquisition touchscreen; the user places the palm on the touchscreen, and the palmprint verification device collects an image of the palm as the image to be detected.
  • Step 204: Perform liveness detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain a first detection result for the palmprint in the image to be detected.
  • The image to be detected reflects the texture distribution characteristics of the image in the spatial domain (also called the time domain), i.e., it is data representing the grayscale distribution characteristics of the image to be detected in the spatial domain. The image to be detected includes three channels of grayscale information; each channel can be represented by the grayscale or intensity of the image to be detected at each pixel. For example, the image to be detected is an RGB image, which is a three-channel color pixel map whose channels respectively correspond to the red, green, and blue components of each pixel.
  • Frequency domain transformation is the process of converting the image to be detected from a grayscale distribution to a frequency domain distribution; the obtained global frequency domain map represents the characteristics of the entire image to be detected in the frequency domain.
  • the frequency domain transform can be fast Fourier transform, wavelet transform or Laplace transform, etc.
  • the global frequency domain map is a feature map that reflects the overall frequency domain distribution characteristics of the image to be detected. By converting the image to be detected from grayscale distribution to frequency domain distribution, the characteristics of the image to be detected can be observed from the overall frequency domain distribution.
  • Here, "global" means that the image information of the entire image to be detected is used for discrimination, whereas the palmprint region in the image to be detected, mentioned below, is local image information within the image to be detected.
  • the global frequency domain map is a frequency domain map obtained by performing frequency domain transformation on the entire image to be detected.
  • The high-frequency components in the global frequency domain map represent the parts of the image to be detected where the gray value changes abruptly, corresponding to the detail information in the image to be detected, while the low-frequency components represent the average grayscale of the image to be detected, corresponding to the contour information in the image to be detected.
  • the global fusion image is a multi-channel image obtained by fusing the image to be detected with the global frequency domain map.
  • the global fusion image carries the image characteristics of the image to be detected in the spatial and frequency domains.
  • The obtained global fusion image is then subjected to liveness detection. Combining these two different kinds of image information yields higher detection accuracy and adapts to a variety of scenarios, and acquiring the image to be detected requires no specific hardware: a single image to be detected is enough to achieve a good detection effect.
  • Recaptured images of different attack types have different characteristics. For a high-definition paper-recaptured palmprint image, the difference from a live palmprint image lies mainly in the palmprint region. Therefore, it is necessary to pay more attention to the local information of the cropped palmprint region of the image to be detected in order to obtain more accurate results; the texture differences from real palmprints will be introduced below.
  • the computer device performs frequency domain transformation processing on the image to be detected to obtain a global frequency domain map; and fuses the to-be-detected image with the global frequency domain map to obtain a global fused image.
  • the frequency domain transform can be fast Fourier transform, wavelet transform or Laplace transform, etc.
  • the computer equipment can perform fast Fourier transform on the image to be detected to obtain a global frequency domain image, and then fuse the three-channel image to be detected with the global frequency domain image by channel to obtain a four-channel image as a global fusion image.
  • In some embodiments, the computer device obtains the original captured image, adjusts the captured image to a first preset size to obtain the image to be detected, and then performs frequency domain transformation on the image to be detected to obtain the global frequency domain map. For example, the computer device scales the original captured image to a preset size, such as 224*224*3, to obtain the image to be detected, performs a fast Fourier transform on it to obtain the global frequency domain map, and then fuses the image to be detected with its corresponding global frequency domain map into a four-channel image, which is used as the global fusion image.
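As a sketch of how such a four-channel global fusion image might be assembled (the text specifies only a fast Fourier transform and channel-wise fusion; the log-magnitude spectrum, normalization, and OpenCV usage here are assumptions for illustration):

```python
# Illustrative sketch of building the 4-channel global fusion image with
# NumPy/OpenCV. Using the normalized log-magnitude FFT spectrum as the
# frequency channel is an assumption, not fixed by the source text.
import cv2
import numpy as np

def global_fusion_image(bgr_image: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize to size x size x 3 and append one FFT-based frequency channel."""
    rgb = cv2.resize(bgr_image, (size, size)).astype(np.float32) / 255.0
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    # 2-D FFT, shift the zero-frequency component to the center,
    # then compress the dynamic range with a log.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    freq = np.log1p(np.abs(spectrum))
    freq = freq / freq.max()                      # normalize to [0, 1]
    # Concatenate along the channel axis: size x size x 4.
    return np.concatenate([rgb, freq[..., None]], axis=-1)
```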
  • In some embodiments, the computer device may use a trained neural-network-based global detection network to perform liveness detection on the global fusion image to obtain the first detection result for the palmprint in the image to be detected. The first detection result indicates either that the image to be detected passes palmprint liveness detection or that it fails, that is, whether the palmprint of the object to be detected in the image is a living palmprint. Because the global fusion image carries all the texture information of the image to be detected together with its global frequency domain characteristics, the combination of these two different kinds of image information yields higher detection accuracy and adapts to a variety of scenarios; in particular, it achieves a good detection effect on high-definition screen-recaptured images.
  • Step 206: When the first detection result indicates that the image to be detected is a screen-recaptured image, directly determine that the image to be detected has failed liveness detection.
  • Step 208: When the first detection result indicates that the image to be detected is not a screen-recaptured image, obtain the palmprint image based on the palmprint region in the image to be detected, perform liveness detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, to obtain a second detection result for the palmprint in the image to be detected, and determine the liveness detection result for the palmprint of the object to be detected according to the first detection result and the second detection result.
  • Attack types in liveness detection can be roughly divided into two categories: one is the high-definition screen-recaptured image mentioned above, which can be judged from the global information of the image to be detected; the other is the high-definition paper-recaptured image. For this reason, the computer device is set up with two-level detection: for the second category, the palmprint region in the image to be detected is further processed so that detection focuses more on the local information in the image to be detected, that is, the image information of the region where the palmprint is located.
  • The palmprint region is the region where the palmprint is located; it can be the region occupied by the entire palm or only the region where the palmprint itself lies. The palmprint image obtained according to the palmprint region can be obtained by scaling the rectangular region surrounding the palmprint in the image to be detected.
  • In some embodiments, the computer device can extract the palmprint region in the image to be detected and crop the image to be detected according to the palmprint region to obtain the palmprint image. For example, the computer device may use a palmprint extraction tool to perform matting on the image to be detected to obtain the palmprint region, and then scale the palmprint region to a preset size to obtain the palmprint image. The computer device may also perform palmprint detection on the image to be detected to determine the palmprint region, and crop the image to be detected accordingly to obtain the palmprint image. In some embodiments, the computer device acquires the original captured image, performs palmprint detection on it to determine the palmprint region, crops the palmprint region from the captured image, and adjusts it to a second preset size to obtain the palmprint image.
  • The palmprint image reflects the texture distribution characteristics of the palmprint in the spatial domain (also called the time domain), i.e., it is data representing the grayscale distribution characteristics of the palmprint image in the spatial domain. The palmprint image includes three channels of grayscale information; each channel can be represented by the grayscale or intensity at each pixel of the palmprint image. For example, the palmprint image is an RGB image, which is a three-channel color pixel map whose channels respectively correspond to the red, green, and blue components of each pixel.
  • Frequency domain transformation is the process of converting the palmprint image from a grayscale distribution to a frequency domain distribution; the obtained frequency domain map represents the characteristics of the entire palmprint image in the frequency domain.
  • the frequency domain transform can be fast Fourier transform, wavelet transform or Laplace transform, etc.
  • The palmprint region is local image information within the image to be detected, so the local frequency domain map is a feature map reflecting the frequency domain distribution characteristics of the palmprint region in the image to be detected. By converting the palmprint image from a grayscale distribution to a frequency domain distribution, its characteristics can be observed from the frequency domain perspective. The local frequency domain map is obtained by performing frequency domain transformation on the palmprint image: its high-frequency components represent the parts of the palmprint image where the gray value changes abruptly, corresponding to the detail information in the palmprint image, while its low-frequency components represent the average grayscale of the palmprint image, corresponding to the contour information in the palmprint image.
  • the local fusion image is a multi-channel image obtained by fusing the palmprint image with the local frequency domain image. The local fusion image carries the image characteristics of the palmprint image in the spatial and frequency domains.
  • In some embodiments, the computer device performs frequency domain transformation on the palmprint image to obtain the local frequency domain map, and fuses the palmprint image with the local frequency domain map to obtain the local fusion image. The frequency domain transform may be a fast Fourier transform, wavelet transform, or Laplace transform, among others. For example, the computer device can perform a fast Fourier transform on the palmprint image to obtain the local frequency domain map, and then fuse the three-channel palmprint image with the local frequency domain map channel-wise to obtain a four-channel image as the local fusion image. In some embodiments, the computer device obtains the original captured image, crops the palmprint region from it, adjusts the palmprint region to a second preset size to obtain the palmprint image, and fuses the palmprint image with its corresponding local frequency domain map into a four-channel image of, for example, 122*122*4, which is used as the local fusion image.
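The local branch can be sketched the same way by cropping the palmprint region first and reusing the fusion routine from the global sketch above; `detect_palm_bbox` is a hypothetical stand-in for the palmprint detection or matting tool mentioned earlier:

```python
# Hypothetical sketch of the local branch: crop the palmprint region,
# then build the 4-channel local fusion image with the same routine
# used for the global branch (global_fusion_image, defined earlier).
def local_fusion_image(bgr_image, detect_palm_bbox, size=122):
    x, y, w, h = detect_palm_bbox(bgr_image)   # palmprint bounding box
    palm = bgr_image[y:y + h, x:x + w]         # crop the palmprint region
    return global_fusion_image(palm, size=size)
```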
  • In some embodiments, the computer device may use a trained neural-network-based local detection network to perform liveness detection on the local fusion image to obtain the second detection result for the palmprint in the image to be detected. The second detection result indicates either that the palmprint image passes palmprint liveness detection or that it fails, that is, whether the palmprint of the object to be detected in the image is a living palmprint. Because the local fusion image carries the texture information of the palmprint in the image to be detected together with the palmprint's frequency domain characteristics, the combination of these two different kinds of image information makes detection more accurate and adaptable to a variety of scenarios; in particular, it achieves a good detection effect on high-definition paper-recaptured images.
  • step 204 and step 208 are two independent steps, and their execution order may be reversed or may be executed in parallel.
  • The first detection result obtained by the computer device is the result of performing liveness detection using the global information of the image to be detected, while the second detection result is obtained by performing liveness detection using the local information in the image to be detected. Combining the two detection results to determine the liveness detection result for the palmprint in the image to be detected is more accurate.
  • In some embodiments, the first detection result and the second detection result both represent the probability that the palmprint in the image to be detected belongs to a recaptured image; alternatively, both the first detection result and the second detection result may represent the probability that the palmprint in the image to be detected is a living palmprint.
  • In some embodiments, determining the liveness detection result for the palmprint of the object to be detected according to the first detection result and the second detection result includes: when the first probability, indicated by the first detection result, that the palmprint of the object to be detected belongs to a screen-recaptured palmprint image is lower than a first threshold, obtaining the second detection result, which indicates a second probability that the palmprint of the object to be detected belongs to a paper-recaptured palmprint image; and when the second probability is less than a second threshold, determining that the palmprint of the object to be detected is a living palmprint.
  • Because the texture of a screen-recaptured palmprint image is not obvious enough, when the first detection result indicates that the first probability that the palmprint in the image to be detected belongs to a screen-recaptured palmprint image is smaller than the first threshold, the image to be detected is not a screen-recaptured image and may be a living palmprint image. Then, if the second detection result, obtained by liveness detection that pays more attention to local information, indicates that the second probability that the palmprint belongs to a paper-recaptured palmprint image is smaller than the second threshold, it can be determined with more certainty that the palmprint in the image to be detected is a living palmprint, and the image passes liveness detection. The first threshold and the second threshold may be the same value or different values; for example, both may be 0.4.
  • In some embodiments, the above method further includes: when the first probability is greater than the first threshold, determining that the palmprint of the object to be detected is a palmprint in a screen-recaptured palmprint image; and when the second probability is greater than the second threshold, determining that it is a palmprint in a paper-recaptured palmprint image. That is, when the first probability is greater than the first threshold, it can be directly determined that the palmprint in the image to be detected comes from a screen-recaptured palmprint image and the image fails liveness detection; when the first probability is less than the first threshold, the second detection result, which pays more attention to local information, is further obtained, and if the second probability is greater than the second threshold, it is determined that the palmprint comes from a paper-recaptured palmprint image and the image fails liveness detection.
  • In some embodiments, determining the liveness detection result according to the first detection result and the second detection result includes: when the first detection result indicates a first probability that the palmprint of the object to be detected is a living palmprint, and the second detection result indicates a second probability that it is a living palmprint, if the first probability is less than the first threshold, the image to be detected is directly determined to be a recaptured palmprint image; if the first probability is greater than the first threshold, it is further checked whether the second probability is greater than the second threshold. If the second probability is less than the second threshold, the image to be detected is directly determined to be a recaptured palmprint image; if the second probability is also greater than the second threshold, the image to be detected is finally determined to be a living palmprint image.
  • As shown in FIG. 3, which is a schematic flowchart of an image processing method in one embodiment, the computer device obtains the palmprint image from the image to be detected and sends the image to be detected and the palmprint image down two parallel liveness detection paths: each path computes the corresponding frequency domain map, fuses its input image with that frequency domain map, and then performs liveness detection to obtain the corresponding detection result.
  • In some embodiments, the above method further includes: when the liveness detection result indicates that the palmprint in the image to be detected is a living palmprint, performing palmprint recognition on the palmprint in the image to be detected to obtain a palmprint recognition result, and authenticating the object to be detected according to the palmprint recognition result.
  • In an application scenario, the image to be detected may be an image awaiting palmprint recognition, and the palmprint liveness detection step provided in the embodiments of the present application may be deployed directly before the palmprint recognition step. In the palmprint recognition process, when palmprint recognition passes, it is determined that the image to be detected has passed identity authentication; when the image to be detected does not pass palmprint liveness detection, an error can be reported and a retry prompted. Palmprint liveness detection can be applied to scenarios such as online palmprint payment, offline palmprint payment, palmprint-based access control unlocking, mobile phone palmprint recognition, and automatic palmprint recognition.
  • The above image processing method performs liveness detection on a fusion image that combines the image's frequency domain information with the image's own texture information. On the one hand, this adapts to a variety of scenarios: acquiring the image to be detected requires no specific hardware, and the method performs well under different lighting environments, so it has higher universality. On the other hand, because different attack types have different characteristics and a single model cannot adapt to all of them, two-level detection is adopted, namely global detection over the entire image and local detection over the palmprint region, so that different attack types can be better handled; moreover, either detection process can be optimized in a targeted way without affecting the detection effect of the other.
  • In some embodiments, performing liveness detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain the first detection result for the palmprint in the image to be detected, includes:
  • Step 402: Input the global fusion image into the trained liveness detection model.
  • The liveness detection model is a machine learning model that the computer device trains in advance on a plurality of sample images so that it has the ability to perform palmprint liveness detection on images.
  • the computer equipment used for training can be a terminal or a server.
  • the living body detection model can be implemented using a neural network model, such as a convolutional neural network model.
  • the live detection model includes a global detection network and a local detection network. By combining the global features of the image to be detected and the palmprint region features, a two-level network is used for judgment. It should be noted that the detection sequence of the two-level detection networks may be arbitrary, which is not limited in the embodiment of the present application.
  • In some embodiments, the computer device may set the model structures of the two-level networks in advance to obtain their initial neural network models, and then train the initial models on sample images and the corresponding labeled categories to obtain the trained model parameters. Alternatively, the computer device can obtain model parameters that have been trained in advance, import them into the preset model structures of the respective neural networks, and combine the two-level detection networks according to those model structures to obtain the liveness detection model.
  • Step 404: Extract the image features of the global fusion image through the global detection network in the liveness detection model, and output, based on the image features, the probability that the palmprint of the object to be detected in the image to be detected belongs to a screen-recaptured palmprint image, as the first detection result.
  • the living body detection model includes a global detection network and a local detection network.
  • Both the global detection network and the local detection network may be network structures implemented based on convolutional neural networks.
  • the global detection network and the local detection network are trained separately due to the different sizes of the processed images.
  • The global detection network is used to perform palmprint liveness detection on the image to be detected to obtain the first detection result corresponding to the image to be detected, and the local detection network is used to perform palmprint liveness detection on the palmprint image extracted from the image to be detected to obtain the second detection result corresponding to the image to be detected.
  • Image features are the attributes that distinguish an image from other images, or a collection of such attributes; they are descriptive quantities used to characterize an image. Some features are natural properties that can be perceived directly, such as brightness, edges, texture and color; others need to be obtained by transformation or processing, such as the spectrum, histogram, and principal components. The global fusion image is a four-channel image obtained by fusing the image to be detected with its corresponding global frequency domain map, and its image features are hidden in the four-channel matrix; the computer device extracts these image features from the global fusion image through the global detection network. The extracted image features not only need to describe the original image to be detected well, but also need to distinguish the image to be detected from other images well: in terms of the extracted features, live palmprint images differ little from one another, while a live palmprint image differs greatly from a recaptured palmprint image.
  • The embodiments of the present application do not limit the internal network structures of the global detection network and the local detection network; the designer can set them according to actual needs, as long as the global detection network and the local detection network can realize palmprint liveness detection on images. In some embodiments, both the global detection network and the local detection network can use Resnet18 as the network backbone: Resnet18 has good classification performance, and its relatively shallow depth also ensures timely forward inference.
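As an illustrative sketch of such a backbone (the text names Resnet18 but does not fix the rest of the architecture), the stem of a torchvision ResNet18 can be widened to accept the four-channel fusion input and given a two-class head; the live/recaptured classification head is an assumption:

```python
import torch.nn as nn
from torchvision.models import resnet18

def make_detection_network(num_classes: int = 2) -> nn.Module:
    """ResNet18 backbone adapted to a 4-channel fusion image (a sketch)."""
    net = resnet18(weights=None)
    # Widen the stem from 3 input channels (RGB) to 4 (RGB + frequency map).
    net.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Replace the 1000-way ImageNet head with a live/recaptured classifier.
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

global_net = make_detection_network()   # for 224x224x4 global fusion images
local_net = make_detection_network()    # for the smaller local fusion images
```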
  • In some embodiments, performing liveness detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, to obtain the second detection result for the palmprint in the image to be detected, includes:
  • Step 502: Input the local fusion image into the trained liveness detection model.
  • The local fusion image is obtained by fusing the palmprint image of the image to be detected with the local frequency domain map corresponding to that palmprint image, and it pays more attention to the local information of the palmprint region in the image to be detected. The computer device can input it into the local detection network of the trained liveness detection model for liveness detection, thereby realizing the second-level detection of the image to be detected.
  • Step 504 extracting the image features of the local fusion image through the local detection network in the living body detection model, and outputting the probability that the palmprint of the object to be detected in the image to be detected belongs to the palmprint in the palmprint image of the paper remake based on the image feature, as the second detection result.
  • the computer device can set up a first model and a second model based on neural networks; after model training is performed on the first model and the second model respectively through sample images, the trained global detection network and local detection network are obtained, and cascading them yields the trained palmprint living body detection model.
  • FIG. 6 is a schematic framework diagram of the image processing method in one embodiment. Referring to FIG. 6, the computer device acquires the image to be detected for palmprint living body detection, adjusts it to an RGB image of size 224*224*3, performs frequency domain transformation on it to obtain the corresponding global frequency domain map, and connects the RGB image of the image to be detected and the global frequency domain map into a 224*224*4 four-channel map, i.e., the global fusion image, which is input to the global detection network to obtain the first detection result of the image to be detected; at the same time, the palmprint region is cut out of the image to be detected and adjusted to 122*122*3 to obtain the palmprint image, its local frequency domain map is computed, and the two are connected into a 122*122*4 local fusion image, which is input to the local detection network to obtain the second detection result (this preprocessing is sketched below).
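  • a minimal sketch of this preprocessing, assuming NumPy and OpenCV for resizing; taking the FFT of a grayscale version of the image and normalizing the log-magnitude spectrum are assumptions, since the text only specifies a fast Fourier transform producing a single-channel frequency domain map.

```python
import cv2
import numpy as np

def frequency_map(rgb: np.ndarray) -> np.ndarray:
    """HxW frequency domain map of an HxWx3 uint8 image via a fast Fourier transform."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY).astype(np.float32)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))   # center the low frequencies
    magnitude = np.log1p(np.abs(spectrum))          # compress the dynamic range
    return (magnitude / magnitude.max()).astype(np.float32)

def fuse(rgb: np.ndarray, size: int) -> np.ndarray:
    """Resize to size x size x 3 and stack the frequency map as a 4th channel."""
    rgb = cv2.resize(rgb, (size, size))
    freq = frequency_map(rgb)
    rgb = rgb.astype(np.float32) / 255.0
    return np.concatenate([rgb, freq[..., None]], axis=-1)  # size x size x 4

image_rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # placeholder capture
palm_rgb = image_rgb[100:300, 200:400]             # placeholder cropped palm region
global_input = fuse(image_rgb, 224)                # global branch: 224x224x4
local_input = fuse(palm_rgb, 122)                  # local branch: 122x122x4
```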
  • the final judgment logic for the image to be detected passing palmprint living body detection is as follows: if the first detection result < 0.4, the image to be detected is directly determined to be a recaptured palmprint image; if the first detection result is greater than 0.4, whether the second detection result is less than 0.4 is checked, and if so, the image to be detected is directly determined to be a recaptured palmprint image; if the second detection result is also greater than 0.4, the image to be detected is finally determined to be a live palmprint image.
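  • the cascaded judgment above, written out as a small helper; the shared 0.4 threshold follows this embodiment, and treating both detection results as probabilities of being live matches the network outputs described later (live close to 1, recaptured close to 0).

```python
def judge(score1: float, score2: float, threshold: float = 0.4) -> str:
    """Two-stage verdict: score1 from the global network (screen-recapture
    defense), score2 from the local network (paper-recapture defense)."""
    if score1 < threshold:   # global network flags a screen-recaptured image
        return "recaptured palmprint image"
    if score2 < threshold:   # local network flags a paper-recaptured image
        return "recaptured palmprint image"
    return "live palmprint image"
```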
  • the above image processing method further includes a model training step, which specifically includes: acquiring a first sample set and a second sample set, the sample images in the first sample set and the second sample set including palmprints; using the first sample images in the first sample set to perform model training on a first model based on a neural network to obtain the global detection network; and obtaining sample palmprint images according to the palmprint regions of the second sample images in the second sample set, and using the sample palmprint images to perform model training on a second model based on a neural network to obtain the local detection network.
  • the first model and the second model have preset model structures, and their model parameters are initial model parameters.
  • the initial model parameters are updated through continuous training to obtain trained model parameters; importing the trained model parameters into models of the same framework yields a global detection network and a local detection network with palmprint living body detection capability, and thus the living body detection model. It should be noted that the global detection network and the local detection network can be deployed on the same computer device or deployed separately, in which case images to be detected can be detected in parallel, improving detection efficiency.
  • the labeling category of a first sample image in the first sample set is one of a live palmprint image and a screen-recaptured palmprint image, and the labeling category of a second sample image in the second sample set is one of a live palmprint image and a paper-recaptured palmprint image.
  • the computer device may first perform palmprint detection on the second sample image to determine a palmprint area in the second sample image; crop the second sample image according to the palmprint area to obtain a sample palmprint image.
  • the computer device may acquire the first sample set and the second sample set, and use the first sample image in the first sample set to adjust the model parameters of the first model.
  • Each training sample includes a first sample image and an annotation category corresponding to the first sample image.
  • each first sample image and its corresponding labeling category are taken as input in turn: the first sample image is input into the first model for processing, the model parameters are adjusted based on the loss constructed from the processing result output by the current model and the labeling category of the first sample image, and the next training sample is then processed based on the adjusted model parameters, repeating continuously until the trained global detection network is obtained.
  • the model parameters of the second model are adjusted using the second sample images in the second sample set, where each training sample in the second sample set includes a second sample image and the labeling category corresponding to it.
  • each second sample image and its corresponding labeling category are taken as input in turn: a sample palmprint image is obtained from the second sample image and input into the second model for processing, the model parameters are adjusted based on the loss constructed from the processing result output by the current model and the labeling category of the second sample image, and the next training sample is then processed based on the adjusted model parameters, repeating continuously until the trained local detection network is obtained.
  • the training steps of the global detection network include steps 702 to 706:
  • Step 702: perform living body detection, through the first model, on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, to obtain the first detection result of the palmprint in the first sample image.
  • the computer device may perform frequency domain transformation processing on the first sample image to obtain a global frequency domain map corresponding to the first sample image; and fuse the first sample image with the global frequency domain map to obtain a global fused image.
  • the computer device performs living body detection on the global fusion image through the first model, and obtains the first detection result of the palmprint in the first sample image.
  • the computer device can extract the image features of the global fusion image through the first model and, based on the image features, output the probability that the palmprint in the first sample image belongs to a palmprint in a screen-recaptured palmprint image, as the first detection result.
  • the computer device can adjust the first sample image to the first preset size, then perform frequency domain transformation on the adjusted image to obtain the global frequency domain map, and fuse the adjusted image with the global frequency domain map to obtain the global fusion image.
  • Step 704: determine the first loss according to the first detection result and the labeling category of the first sample image.
  • the labeling category of the first sample image is one of a live palmprint and a non-live palmprint, and can be represented by 0 or 1:
  • when the palmprint in the first sample image belongs to a palmprint in a screen-recaptured palmprint image, the corresponding labeling category can be represented by 1;
  • when the palmprint in the first sample image is a live palmprint, the corresponding labeling category can be represented by 0.
  • the first detection result may be the probability that the palmprint in the first sample image belongs to a palmprint in a screen-recaptured palmprint image; the terminal may obtain the labeling category of the first sample image and the first detection result obtained by living body detection through the first model, and determine the first loss based on the difference between the two.
  • the first loss may be a cross-entropy loss.
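  • in standard form, for a predicted probability $p$ and a label $y \in \{0, 1\}$, this binary cross-entropy loss is:

$$\mathcal{L}_{CE} = -\bigl[\, y \log p + (1 - y) \log (1 - p) \,\bigr]$$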
  • Step 706: continue training after adjusting the model parameters of the first model according to the first loss, until the global detection network is obtained when the training ends.
  • the first loss is used to adjust the first model in the direction of reducing the difference between the labeling category of the first sample image and the first detection result, so as to ensure that the trained global detection network can accurately perform palmprint living body detection on images to be detected.
  • after obtaining the first loss, the terminal may use a stochastic gradient descent algorithm to adjust the model parameters in the direction that reduces the difference between the labeling category corresponding to the first sample image and the first detection result, so that after multiple adjustments a global detection network capable of accurate living body detection is obtained.
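  • a minimal sketch of one such adjustment step, assuming PyTorch; the learning rate and momentum are illustrative, and `build_detection_network` is the helper from the earlier Resnet18 sketch. The label convention (1 = screen-recaptured palmprint, 0 = live palmprint) follows this embodiment.

```python
import torch
import torch.nn as nn

model = build_detection_network()        # the first model (global branch)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
bce = nn.BCEWithLogitsLoss()             # two-class cross-entropy loss

def train_step(fused: torch.Tensor, label: torch.Tensor) -> float:
    """One SGD update. fused: N x 4 x 224 x 224 global fusion images;
    label: N labels, 1 = screen-recaptured palmprint, 0 = live palmprint."""
    logits = model(fused).squeeze(1)
    loss = bce(logits, label.float())    # the first loss
    optimizer.zero_grad()
    loss.backward()                      # gradient of the label/output difference
    optimizer.step()                     # adjust the model parameters
    return loss.item()
```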
  • the training steps of the local detection network include steps 802 to 806:
  • Step 802: perform living body detection, through the second model, on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, to obtain the second detection result of the palmprint in the second sample image.
  • the computer device may perform frequency domain transformation processing on the sample palmprint image to obtain a corresponding local frequency domain map; and fuse the sample palmprint image with the local frequency domain map to obtain a local fusion image.
  • the computer device performs living body detection on the local fusion image through the second model, and obtains the second detection result of the palmprint in the second sample image.
  • the computer device can extract the image features of the local fusion image through the second model and, based on the image features, output the probability that the palmprint in the second sample image belongs to a palmprint in a paper-recaptured palmprint image, as the second detection result.
  • the computer device can adjust the sample palmprint image to the second preset size, then perform frequency domain transformation on the adjusted image to obtain the local frequency domain map, and fuse the adjusted image with the local frequency domain map to obtain the local fusion image.
  • Step 804: determine the second loss according to the second detection result and the labeling category of the second sample image.
  • the labeling category of the second sample image is one of a live palmprint and a non-live palmprint, and can be represented by 0 or 1:
  • when the palmprint in the second sample image belongs to a palmprint in a paper-recaptured palmprint image, the corresponding labeling category can be represented by 1;
  • when the palmprint in the second sample image is a live palmprint, the corresponding labeling category can be represented by 0.
  • the second detection result may be the probability that the palmprint in the second sample image belongs to a palmprint in a paper-recaptured palmprint image; the terminal may obtain the labeling category of the second sample image and the second detection result obtained by living body detection through the second model, and determine the second loss based on the difference between the two.
  • the second loss may be a cross-entropy loss.
  • Step 806: continue training after adjusting the model parameters of the second model according to the second loss, until the local detection network is obtained when the training ends.
  • the second loss is used to adjust the second model in the direction of reducing the difference between the labeling category of the second sample image and the second detection result, so as to ensure that the trained local detection network can accurately perform palmprint living body detection on images to be detected.
  • after obtaining the second loss, the terminal may use a stochastic gradient descent algorithm to adjust the model parameters in the direction that reduces the difference between the labeling category corresponding to the second sample image and the second detection result, so that after multiple adjustments a local detection network capable of accurate living body detection is obtained.
  • a processing method for a living body detection model is provided; the method is described by taking its application to a computer device as an example, and includes the following steps:
  • Step 902: obtain a first sample image in the first sample set, perform living body detection, through a first model based on a neural network, on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, obtain the first detection result of the palmprint in the first sample image, adjust the model parameters of the first model according to the first loss determined based on the first detection result and the labeling category of the first sample image, and then return to the step of obtaining a first sample image in the first sample set to continue training, until the global detection network is obtained when the training ends.
  • Step 904: obtain a second sample image in the second sample set, obtain a sample palmprint image according to the palmprint region of the second sample image, perform living body detection, through a second model based on a neural network, on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, obtain the second detection result of the palmprint in the second sample image, adjust the model parameters of the second model according to the second loss determined based on the second detection result and the labeling category of the second sample image, and then return to the step of obtaining a second sample image in the second sample set to continue training, until the local detection network is obtained when the training ends.
  • Step 906: obtain, according to the global detection network and the local detection network, a living body detection model for performing palmprint living body detection on images.
  • the global detection network and the local detection network in the neural-network-based living body detection model are obtained through independent training. The global detection network can perform palmprint living body detection on the fusion image obtained by fusing the frequency domain information of the image with the texture information of the image itself; combining these two different kinds of global image information makes detection more accurate and adaptable to many different scenarios, such as the detection of screen-recaptured palmprint images. The local detection network can perform palmprint living body detection on the fusion image obtained by fusing the frequency domain information of the palmprint region with the texture information of the palmprint region itself; since more attention is paid to the local information of the image, detection accuracy can be further improved.
  • FIG. 10 is a schematic diagram of the framework of the training process of the living body detection model in a specific embodiment.
  • the computer device sets up CNN1 and CNN2.
  • in the model training stage, for CNN1 the input size is 224*224*4. The specific implementation is: first adjust the sample images (including live palmprint images of real persons and high-definition screen-recaptured images) to 224*224*3, then use the fast Fourier transform to compute the corresponding frequency domain maps, of size 224*224*1, and connect each RGB map and its frequency domain map into a 224*224*4 four-channel map. Since the main task of CNN1 is to classify live palmprint images of real persons versus high-definition screen-recaptured palmprint images, a two-class cross-entropy loss function is used to constrain CNN1's output so that it can discriminate the two kinds of images well: if the input sample is a live palmprint image of a real person, the output of CNN1 should be a probability value close to 1; if the input sample is a high-definition screen-recaptured image, the output of CNN1 should be a probability value close to 0.
  • during training, live palmprint images and high-definition screen-recaptured images are continuously fed to CNN1, and the cross-entropy loss is continuously reduced through the optimizer; when the cross-entropy loss has decreased to a certain level and no longer fluctuates greatly, the training of CNN1 is considered to have converged.
  • for CNN2, the difference from CNN1 in the training process lies in the input data.
  • in the model training stage, for CNN2 the input size is 122*122*4. The specific implementation is: first use a palmprint detection tool to perform palmprint detection on the sample images (including live palmprint images of real persons and paper-recaptured palmprint images), then cut the palmprint region out of the RGB original of each sample image according to the detection result, adjust the cut-out palmprint region to 122*122*3, use the fast Fourier transform to compute the corresponding frequency domain maps, of size 122*122*1, and finally connect each RGB map and its frequency domain map into a 122*122*4 four-channel map.
  • since CNN2 mainly performs two-class classification of live palmprint images of real persons versus paper-recaptured palmprint images, after training CNN2 will output a probability value close to 0 for an input paper-recaptured palmprint image.
  • during training, live palmprint images and paper-recaptured palmprint images are continuously fed to CNN2, and the cross-entropy loss is continuously reduced through the optimizer; when the cross-entropy loss has decreased to a certain level and no longer fluctuates greatly, the training of CNN2 is considered to have converged.
  • in the model testing stage, the test image needs to be processed into two sizes (224*224*4 and 122*122*4), and the results are input into CNN1 and CNN2 respectively for score computation.
  • suppose an image to be detected arrives and its type is unknown: the same method as in training is used to obtain the four-channel large-image data (i.e., of size 224*224*4); at the same time, palmprint detection and cropping are performed on the RGB three-channel image to be detected, the cropped palmprint image is adjusted to 122*122*3, and the corresponding frequency domain map is computed to obtain the four-channel small-image data (i.e., of size 122*122*4).
  • the large-image data is fed into CNN1 and the small-image data into CNN2, and the corresponding probability values are computed, recorded as score 1 and score 2, both decimals between 0 and 1; if score 1 < 0.4, the type is directly judged to be a recaptured palmprint image and the process exits; otherwise score 2 is checked: if score 2 < 0.4, the type is judged to be a recaptured palmprint image, and if score 2 >= 0.4, the type is judged to be a live palmprint image.
  • in the above embodiments, palmprint liveness is judged using images that fuse frequency domain and RGB information; combining two different information sources to solve the palmprint liveness problem is more robust than the common use of single-source image information for detection.
  • in addition, a two-level detection network is set up, so that targeted optimization can be performed at one level without affecting the detection effect of the other level, avoiding the difficulty of optimizing with a single model; it is also less time-consuming, improving the user experience.
  • the image processing method includes the following steps:
  • the sample images in the first sample set and the second sample set include palmprints; the labeling category of a first sample image in the first sample set is one of a live palmprint image and a screen-recaptured palmprint image, and the labeling category of a second sample image in the second sample set is one of a live palmprint image and a paper-recaptured palmprint image;
  • when the first detection result represents the first probability that the palmprint of the object to be detected belongs to a palmprint in a screen-recaptured palmprint image, and the first probability is less than the first threshold, the second detection result is obtained, which represents the second probability that the palmprint of the object to be detected belongs to a palmprint in a paper-recaptured palmprint image; when the second probability is less than the second threshold, the palmprint of the object to be detected is determined to be a live palmprint;
  • when the living body detection result indicates that the palmprint in the image to be detected is a live palmprint, palmprint recognition is performed on the palmprint in the image to be detected to obtain a palmprint recognition result, and identity authentication is performed on the object to be detected according to the palmprint recognition result.
  • although the steps in the above flowcharts are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated herein, there is no strict order restriction on their execution, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages; these are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential: they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • an image processing apparatus 1100 is provided.
  • the apparatus may be implemented as software modules or hardware modules, or a combination of the two, forming part of a computer device.
  • the apparatus specifically includes: an acquisition module 1102, a global detection module 1104, a local detection module 1106, and a determination module 1108, wherein:
  • an acquisition module 1102 configured to acquire an image to be detected, where the image to be detected includes the biological features of the object to be detected;
  • the global detection module 1104 is configured to perform living body detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain the first detection result corresponding to the image to be detected, and, when the first detection result indicates that the image to be detected belongs to a screen-recaptured image, to directly determine that the image to be detected has failed living body detection;
  • the local detection module 1106 is configured to, when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, obtain a biometric image based on the biometric feature in the image to be detected, and perform living body detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the biometric image with the biometric image, to obtain the second detection result corresponding to the image to be detected;
  • a determination module 1108, configured to determine a living body detection result corresponding to the to-be-detected image according to the first detection result and the second detection result.
  • the device specifically includes:
  • an acquisition module 1102 configured to acquire an image to be detected, where the image to be detected includes the palm print of the object to be detected;
  • the global detection module 1104 is configured to perform living body detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain the first detection result of the palmprint in the image to be detected, and, when the first detection result indicates that the image to be detected belongs to a screen-recaptured image, to directly determine that the image to be detected has failed living body detection;
  • the local detection module 1106 is configured to, when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, obtain a palmprint image based on the palmprint in the image to be detected, and perform living body detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, to obtain the second detection result corresponding to the image to be detected;
  • the determination module 1108 is configured to determine the living body detection result of the palmprint of the object to be detected according to the first detection result and the second detection result.
  • the image processing apparatus 1100 further includes a fusion module for performing frequency domain transformation processing on the image to be detected to obtain a global frequency domain map; and fusing the to-be-detected image with the global frequency domain map to obtain a global fused image.
  • the global detection module 1104 is further configured to input the global fusion image into the trained living body detection model, extract the image features of the global fusion image through the global detection network in the living body detection model, and based on the image features output the probability that the palmprint of the object to be detected in the image to be detected belongs to a palmprint in a screen-recaptured palmprint image, as the first detection result.
  • the local detection module 1106 is further configured to perform palmprint detection on the image to be detected to determine a palmprint region in the image to be detected; crop the image to be detected according to the palmprint region to obtain a palmprint image.
  • the image processing apparatus further includes a fusion module, configured to perform frequency domain transformation processing on the palmprint image to obtain a local frequency domain map; and fuse the palmprint image with the local frequency domain map to obtain a local fusion image.
  • the local detection module 1106 is further configured to input the local fusion image into the trained living body detection model, extract the image features of the local fusion image through the local detection network in the living body detection model, and based on the image features output the probability that the palmprint of the object to be detected in the image to be detected belongs to a palmprint in a paper-recaptured palmprint image, as the second detection result.
  • the determination module 1108 is further configured to, when the first detection result represents the first probability that the palmprint of the object to be detected belongs to a palmprint in a screen-recaptured palmprint image, and the first probability is less than the first threshold, acquire the second detection result, which represents the second probability that the palmprint of the object to be detected belongs to a palmprint in a paper-recaptured palmprint image, and, when the second probability is less than the second threshold, determine that the palmprint of the object to be detected is a live palmprint.
  • the determination module 1108 is further configured to determine, when the first probability is greater than the first threshold, that the palmprint of the object to be detected is a palmprint in a screen-recaptured palmprint image, and, when the second probability is greater than the second threshold, that the palmprint of the object to be detected is a palmprint in a paper-recaptured palmprint image.
  • the image processing apparatus further includes an image acquisition module configured to acquire an original captured image; adjust the captured image to the first preset size to obtain the image to be detected; perform palmprint detection on the captured image to determine the palmprint region in the captured image; and, after cropping the palmprint region out of the captured image, adjust the palmprint region to the second preset size to obtain the palmprint image.
  • the image processing apparatus further includes a training module, and the training module includes a sample image acquisition unit, a global detection network training unit and a local detection network training unit;
  • the sample image acquisition unit is configured to acquire a first sample set and a second sample set, and the sample images in the first sample set and the second sample set include palm prints;
  • the global detection network training unit is used to perform model training on the first model based on the neural network using the first sample image in the first sample set to obtain a global detection network;
  • the local detection network training unit is configured to obtain a sample palmprint image according to the palmprint area of the second sample image in the second sample set, and use the sample palmprint image to perform model training on the second neural network-based model to obtain a local detection network.
  • the global detection network training unit is specifically configured to perform living body detection, by using the first model, on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, to obtain the first detection result of the palmprint in the first sample image; determine the first loss according to the first detection result and the labeling category of the first sample image; and continue training after adjusting the model parameters of the first model according to the first loss, until the global detection network is obtained when the training ends.
  • the local detection network training unit is specifically configured to perform living body detection, by using the second model, on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, to obtain the second detection result of the palmprint in the second sample image; determine the second loss according to the second detection result and the labeling category of the second sample image; and continue training after adjusting the model parameters of the second model according to the second loss, until the local detection network is obtained when the training ends.
  • the image processing apparatus further includes a palmprint recognition module configured to perform palmprint recognition on the palmprint in the image to be detected when the living body detection result indicates that the palmprint in the image to be detected is a live palmprint, to obtain a palmprint recognition result, and to perform identity authentication on the object to be detected according to the palmprint recognition result.
  • a processing apparatus 1200 for a living body detection model is provided.
  • the apparatus may be implemented as software modules or hardware modules, or a combination of the two, forming part of a computer device.
  • the apparatus specifically includes: a global detection network acquisition module 1202, a local detection network acquisition module 1204 and a detection model acquisition module 1206, wherein:
  • the global detection network acquisition module 1202 is configured to acquire a first sample image in the first sample set, perform living body detection, through a first model based on a neural network, on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, obtain the first detection result corresponding to the first sample image, and, after adjusting the model parameters of the first model according to the first loss determined based on the first detection result and the labeling category of the first sample image, return to the step of acquiring a first sample image in the first sample set to continue training, until the global detection network is obtained when the training ends;
  • the local detection network acquisition module 1204 is configured to acquire a second sample image in the second sample set, obtain a sample biometric image according to the biometric feature of the second sample image, perform living body detection, through a second model based on a neural network, on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample biometric image with the sample biometric image, obtain the second detection result corresponding to the second sample image, and, after adjusting the model parameters of the second model according to the second loss determined based on the second detection result and the labeling category of the second sample image, return to the step of acquiring a second sample image in the second sample set to continue training, until the local detection network is obtained when the training ends;
  • the detection model obtaining module 1206 is configured to obtain a living body detection model for performing living body detection on an image according to the global detection network and the local detection network.
  • the device specifically includes:
  • the global detection network acquisition module 1202 is configured to acquire a first sample image in the first sample set, perform living body detection, through the first model based on a neural network, on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, obtain the first detection result of the palmprint in the first sample image, and, after adjusting the model parameters of the first model according to the first loss determined based on the first detection result and the labeling category of the first sample image, return to the step of acquiring a first sample image in the first sample set to continue training, until the global detection network is obtained when the training ends;
  • the local detection network acquisition module 1204 is configured to acquire a second sample image in the second sample set, obtain a sample palmprint image according to the palmprint region of the second sample image, perform living body detection, through the second model based on a neural network, on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, obtain the second detection result of the palmprint in the second sample image, and, after adjusting the model parameters of the second model according to the second loss determined based on the second detection result and the labeling category of the second sample image, return to the step of acquiring a second sample image in the second sample set to continue training, until the local detection network is obtained when the training ends;
  • the detection model obtaining module 1206 is configured to obtain a living body detection model for performing palmprint living body detection on the image according to the global detection network and the local detection network.
  • Each module in the image processing device and the processing device of the living body detection model can be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device may be a terminal or a server, and its internal structure diagram may be as shown in FIG. 13 .
  • when the computer device is a terminal, it may also include an image acquisition apparatus, such as a camera.
  • the computer device includes a processor, a memory and a network interface connected by a system bus, wherein the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer-readable instructions.
  • the internal memory provides an environment for the execution of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with other external computer devices through a network connection.
  • the computer-readable instructions when executed by the processor, implement an image processing method and/or a processing method of a living body detection model.
  • FIG. 13 is only a block diagram of a partial structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, where computer-readable instructions are stored in the memory, and when the processor executes the computer-readable instructions, the steps in the foregoing method embodiments are implemented.
  • a computer-readable storage medium is provided, which stores computer-readable instructions that, when executed by a processor, implement the steps in the foregoing method embodiments.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the steps in the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).

Abstract

An image processing method, including: fusing a global frequency domain map corresponding to an image to be detected with the image to be detected to obtain a global fusion image, and performing living body detection on the global fusion image to obtain a first detection result; when the first detection result indicates that the image to be detected belongs to a screen-recaptured image, determining that the image to be detected has failed living body detection; otherwise, obtaining a biometric image based on the biometric feature in the image to be detected, fusing a local frequency domain map corresponding to the biometric image with the biometric image to obtain a local fusion image, performing living body detection on the local fusion image to obtain a second detection result, and determining a living body detection result corresponding to the image to be detected according to the first detection result and the second detection result.

Description

Image processing method, apparatus, device, storage medium and computer program product
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on April 2, 2021, with application number 202110359536.2 and entitled "Image processing method, apparatus, computer device and storage medium", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to an image processing method, apparatus, computer device, storage medium and computer program product.
Background
With the development of computer technology and artificial intelligence technology, living body detection technology has emerged in order to verify user identity more accurately and conveniently. Living body detection technology can use biometric information to verify whether a user is a real living person operating, and can effectively resist common attacks such as photos, face swapping, masks and occlusion. Living body detection includes face living body detection, palmprint living body detection, iris living body detection, and so on.
At present, since many different kinds of attack types may exist in real life, the texture features of some attack images are not obvious, and it is even difficult to distinguish attack images by the naked eye from their texture, so that living body detection relying only on single image texture information generalizes poorly across different types of attack images and has low detection accuracy.
Summary
An image processing method, the method including:
acquiring an image to be detected, the image to be detected including a biometric feature of an object to be detected;
fusing a global frequency domain map corresponding to the image to be detected with the image to be detected to obtain a global fusion image;
performing living body detection on the global fusion image to obtain a first detection result corresponding to the image to be detected;
when the first detection result indicates that the image to be detected belongs to a screen-recaptured image, directly determining that the image to be detected has failed living body detection;
when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, obtaining a biometric image based on the biometric feature in the image to be detected, fusing a local frequency domain map corresponding to the biometric image with the biometric image to obtain a local fusion image, performing living body detection on the local fusion image to obtain a second detection result corresponding to the image to be detected, and determining a living body detection result corresponding to the image to be detected according to the first detection result and the second detection result.
A processing method for a living body detection model, the method including:
acquiring a first sample image in a first sample set, fusing a global frequency domain map corresponding to the first sample image with the first sample image to obtain a global fusion image, performing living body detection on the global fusion image through a first model based on a neural network to obtain a first detection result corresponding to the first sample image, determining a first loss based on the first detection result and the labeling category of the first sample image, and after adjusting the model parameters of the first model according to the first loss, returning to the step of acquiring a first sample image in the first sample set to continue training until the global detection network is obtained when the training ends;
acquiring a second sample image in a second sample set, obtaining a sample biometric image according to the biometric feature of the second sample image, fusing a local frequency domain map corresponding to the sample biometric image with the sample biometric image to obtain a local fusion image, performing living body detection on the local fusion image through a second model based on a neural network to obtain a second detection result corresponding to the second sample image, determining a second loss based on the second detection result and the labeling category of the second sample image, and after adjusting the model parameters of the second model according to the second loss, returning to the step of acquiring a second sample image in the second sample set to continue training until the local detection network is obtained when the training ends;
obtaining, according to the global detection network and the local detection network, a living body detection model for performing living body detection on images.
An image processing apparatus, the apparatus including:
an acquisition module, configured to acquire an image to be detected, the image to be detected including a biometric feature of an object to be detected;
a global detection module, configured to fuse a global frequency domain map corresponding to the image to be detected with the image to be detected to obtain a global fusion image, perform living body detection on the global fusion image to obtain a first detection result corresponding to the image to be detected, and directly determine that the image to be detected has failed living body detection when the first detection result indicates that the image to be detected belongs to a screen-recaptured image;
a local detection module, configured to, when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, obtain a biometric image based on the biometric feature in the image to be detected, fuse a local frequency domain map corresponding to the biometric image with the biometric image to obtain a local fusion image, and perform living body detection on the local fusion image to obtain a second detection result corresponding to the image to be detected;
a determination module, configured to determine a living body detection result corresponding to the image to be detected according to the first detection result and the second detection result.
A processing apparatus for a living body detection model, the apparatus including:
a global detection network acquisition module, configured to acquire a first sample image in a first sample set, fuse a global frequency domain map corresponding to the first sample image with the first sample image to obtain a global fusion image, perform living body detection on the global fusion image through a first model based on a neural network to obtain a first detection result corresponding to the first sample image, determine a first loss based on the first detection result and the labeling category of the first sample image, and after adjusting the model parameters of the first model according to the first loss, return to the step of acquiring a first sample image in the first sample set to continue training until the global detection network is obtained when the training ends;
a local detection network acquisition module, configured to acquire a second sample image in a second sample set, obtain a sample biometric image according to the biometric feature of the second sample image, fuse a local frequency domain map corresponding to the sample biometric image with the sample biometric image to obtain a local fusion image, perform living body detection on the local fusion image through a second model based on a neural network to obtain a second detection result corresponding to the second sample image, determine a second loss based on the second detection result and the labeling category of the second sample image, and after adjusting the model parameters of the second model according to the second loss, return to the step of acquiring a second sample image in the second sample set to continue training until the local detection network is obtained when the training ends;
a detection model acquisition module, configured to obtain, according to the global detection network and the local detection network, a living body detection model for performing living body detection on images.
A computer device, including a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to implement the steps of the above image processing method or processing method for a living body detection model.
One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the above image processing method or processing method for a living body detection model.
A computer program, including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps of the above image processing method or processing method for a living body detection model.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required in the description of the embodiments. Obviously, the drawings in the following description are only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a schematic flowchart of an image processing method in one embodiment;
FIG. 3 is a schematic flowchart of an image processing method in another embodiment;
FIG. 4 is a schematic flowchart of obtaining a first detection result of the palmprint in an image to be detected in one embodiment;
FIG. 5 is a schematic flowchart of obtaining a second detection result of the palmprint in an image to be detected in one embodiment;
FIG. 6 is a schematic framework diagram of a palmprint living body detection process in one embodiment;
FIG. 7 is a schematic flowchart of the training steps of a global detection network in one embodiment;
FIG. 8 is a schematic flowchart of the training steps of a local detection network in one embodiment;
FIG. 9 is a schematic flowchart of a processing method for a living body detection model in one embodiment;
FIG. 10 is a schematic framework diagram of the training process of a living body detection model in a specific embodiment;
FIG. 11 is a structural block diagram of an image processing apparatus in one embodiment;
FIG. 12 is a structural block diagram of a processing apparatus for a living body detection model in one embodiment;
FIG. 13 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
To make the objectives, technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
The image processing method and the processing method for a living body detection model provided by this application implement living body detection by using computer vision technology and machine learning technology in artificial intelligence (AI).
The image to be detected mentioned in the embodiments of this application is an image on which living body detection is to be performed. Living body detection is the process of determining the real biometric features of an object to be detected. The image to be detected includes a biometric feature of the object to be detected which can uniquely identify the object, including physiological features or behavioral features; physiological features include palmprint, fingerprint, face, iris, hand shape, retina, auricle, and so on, and behavioral features include gait, handwriting, and so on.
In the embodiments of this application, the biometric feature of the object to be detected may be any one or more of the above. In one embodiment, a computer device acquires an image to be detected, the image to be detected including a biometric feature of the object to be detected; performs living body detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain a first detection result corresponding to the image to be detected; when the first detection result indicates that the image to be detected belongs to a screen-recaptured image, directly determines that the image to be detected has failed living body detection; when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, obtains a biometric image based on the biometric feature in the image to be detected, performs living body detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the biometric image with the biometric image, to obtain a second detection result corresponding to the image to be detected, and determines a living body detection result corresponding to the image to be detected according to the first detection result and the second detection result.
The method provided by the embodiments of this application is described below mainly by taking the biometric feature being a palmprint as an example.
The image processing method provided by this application can be applied in the application environment shown in FIG. 1, in which a terminal 102 communicates with a living body detection server 104 through a network. Specifically, the terminal 102 may acquire an image to be detected, the image to be detected including the palmprint of the object to be detected; perform living body detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain a first detection result of the palmprint in the image to be detected; when the first detection result indicates that the image to be detected belongs to a screen-recaptured image, directly determine that the image to be detected has failed living body detection; when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, obtain a palmprint image based on the palmprint region in the image to be detected, and perform living body detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, to obtain a second detection result of the palmprint in the image to be detected; and determine the living body detection result of the palmprint of the object to be detected according to the first detection result and the second detection result.
In some embodiments, the terminal 102 may acquire the image to be detected and send it to the living body detection server 104; the living body detection server 104 performs the global detection described above and, when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, the local detection described above, determines the living body detection result of the palmprint of the object to be detected according to the first detection result and the second detection result, and returns the palmprint living body detection result to the terminal 102.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, portable wearable devices and in-vehicle terminals; the living body detection server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
In one embodiment, the living body detection model based on an artificial neural network may be obtained by training on a computer device. The computer device may acquire a first sample image in the first sample set, perform living body detection through a first model based on a neural network on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, obtain a first detection result of the palmprint in the first sample image, adjust the model parameters of the first model according to a first loss determined based on the first detection result and the labeling category of the first sample image, and then return to the step of acquiring a first sample image in the first sample set to continue training until the global detection network is obtained when the training ends; acquire a second sample image in the second sample set, obtain a sample palmprint image according to the palmprint region of the second sample image, perform living body detection through a second model based on a neural network on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, obtain a second detection result of the palmprint in the second sample image, adjust the model parameters of the second model according to a second loss determined based on the second detection result and the labeling category of the second sample image, and then return to the step of acquiring a second sample image in the second sample set to continue training until the local detection network is obtained when the training ends; and obtain, according to the global detection network and the local detection network, a living body detection model for performing palmprint living body detection on images. The computer device used to train the living body detection model may be a terminal or a server.
In a specific application scenario, when a palmprint verification device needs to authenticate a user's identity, it may capture a palmprint image of the user, perform living body detection through the global detection network in the trained living body detection model on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain the first detection result of the palmprint in the image to be detected; obtain the palmprint image based on the palmprint region in the image to be detected and, through the local detection network in the trained living body detection model, perform living body detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, to obtain the second detection result of the palmprint in the image to be detected; and finally determine the living body detection result of the palmprint of the object to be detected according to the first detection result and the second detection result. When the living body detection result indicates that the palmprint in the image to be detected is a live palmprint, palmprint recognition is performed on the palmprint in the image to be detected to obtain a palmprint recognition result, and identity authentication is performed on the object to be detected according to the palmprint recognition result. The palmprint verification device may be a mobile phone, a palmprint verification machine, or another device with an image acquisition apparatus.
In one embodiment, as shown in FIG. 2, an image processing method is provided. The method is described by taking its application to the computer device (the terminal 102 or the server 104) in FIG. 1 as an example and includes the following steps:
Step 202: acquire an image to be detected, the image to be detected including the palmprint of the object to be detected.
The image to be detected is an image on which palmprint living body detection is to be performed; it includes the palmprint of the object to be detected, which may be part or all of the object's palmprint. Palmprint living body detection is a method of verifying, from the palmprint in an image, whether the image was captured from a real living object to be detected; it is usually used to determine whether the object to be detected is a real living body, is widely applied in identity verification scenarios, and can effectively resist attacks from recaptured images.
Specifically, the computer device may capture an image in real time through a local image acquisition apparatus and take the captured image as the image to be detected, for example the user's palm image captured by the terminal 102 in FIG. 1 through a camera. The image to be detected may also be an image exported locally by the computer device, such as a previously taken or stored photo containing a palmprint, although it can be understood that a locally exported image usually will not pass palmprint living body detection. Optionally, the image to be detected captured by the computer device may be an image focusing on the external features of the palm, such as palm lines and wrinkles, or an image focusing on the structure and internal features of the palm, such as veins, bones and soft tissue.
In one embodiment, the image to be detected of a live palmprint may be captured contactlessly; for example, the user extends a palm in the air, and the computer device scans and captures the palm image through a camera to obtain the image to be detected. In other embodiments, the image to be detected may be captured by contact; for example, a palmprint verification device is provided with a palmprint acquisition touch screen, the user places a palm on it, and the device captures an image of the palm as the image to be detected.
Step 204: perform living body detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain a first detection result of the palmprint in the image to be detected.
The image to be detected is an image reflecting the texture distribution characteristics of the image in the spatial domain (also called the time domain), i.e., data representing its grayscale distribution characteristics in the spatial domain. It includes grayscale information of three channels, each of which can be represented by the grayscale or intensity of the image at each pixel. Specifically, the image to be detected is an RGB image, a three-channel color pixel map whose channels correspond respectively to the red, green and blue components of each pixel.
Frequency domain transformation is the process of converting the image to be detected from a grayscale distribution to a frequency domain distribution; the resulting global frequency domain map represents the characteristics of the entire image to be detected in the frequency domain. The frequency domain transformation may be a fast Fourier transform, a wavelet transform, a Laplace transform, or the like.
The global frequency domain map is a feature map reflecting the overall frequency domain distribution characteristics of the image to be detected; converting the image from grayscale distribution to frequency domain distribution makes it possible to observe the characteristics of the image from its overall frequency distribution. "Global" refers to the image information of the entire image to be detected, in contrast to the palmprint region mentioned below, which is local image information within the image to be detected. The global frequency domain map is obtained by performing frequency domain transformation on the entire image to be detected. High-frequency components in the global frequency domain map represent the parts of the image where the grayscale value changes abruptly, corresponding to detail information, while low-frequency components represent the parts where the grayscale value is relatively even, corresponding to contour information. The global fusion image is a multi-channel image obtained by fusing the image to be detected with the global frequency domain map; it carries the image characteristics of the image to be detected in both the spatial domain and the frequency domain.
Since the texture features of high-definition recaptured images, such as high-definition screen-recaptured palmprint images and high-definition paper-recaptured palmprint images, are not obvious, classification that relies only on capturing the subtle textures and materials of the image itself is difficult even for the naked eye; such an approach can only detect images with fairly obvious texture features, and its detection effect is limited. For high-definition recaptured palmprint images whose texture features are not obvious, processing in the spatial domain can hardly achieve good results, whereas detecting them in the frequency domain works well. In addition, some approaches obtain a depth image of the palmprint directly or indirectly through hardware devices and then judge from the depth image whether the current palmprint image is a live palmprint or a palmprint in a recaptured image; the drawback of this approach is that using 3D depth information depends heavily on hardware devices, which are more expensive than conventional cameras, and structured-light sensors are easily affected by the surrounding lighting environment.
In the embodiments of this application, the global fusion image obtained by fusing the texture information of the image to be detected itself with its frequency domain information is used for living body detection. Combining these two different kinds of image information makes the detection more accurate and adaptable to many different scenarios, and capturing the image to be detected requires no special hardware device: a single image to be detected is enough to achieve a good detection effect.
In addition, recaptured images of different attack types have different characteristics. For example, the texture features of a high-definition screen-recaptured palmprint image, such as moiré patterns, exist in both the foreground and the background, so processing the entire image can attend to the global information of the image. A high-definition paper-recaptured palmprint image, by contrast, differs greatly from a live palmprint image in the palmprint region; therefore more attention must be paid to the local information of the cropped palmprint region of the image to be detected to obtain a more accurate texture difference from a real person's palmprint. That part is introduced below.
In one embodiment, the computer device performs frequency domain transformation on the image to be detected to obtain the global frequency domain map, and fuses the image to be detected with the global frequency domain map to obtain the global fusion image.
The frequency domain transformation may be a fast Fourier transform, a wavelet transform, a Laplace transform, or the like. For example, the computer device may perform a fast Fourier transform on the image to be detected to obtain the global frequency domain map, and then fuse the three-channel image to be detected with the global frequency domain map by channel to obtain a four-channel image as the global fusion image.
In one embodiment, the computer device obtains an original captured image, adjusts it to a first preset size to obtain the image to be detected, and then performs frequency domain transformation on the image to be detected to obtain the global frequency domain map.
In a specific embodiment, the computer device scales the original captured image to a preset size, for example 224*224*3, to obtain the image to be detected, then performs fast Fourier transform processing on it to obtain the global frequency domain map, and fuses the image to be detected with its corresponding global frequency domain map into one four-channel image as the global fusion image.
In one embodiment, the computer device may use a trained global detection network based on a neural network to perform living body detection on the global fusion image to obtain the first detection result of the palmprint in the image to be detected. The first detection result is one of: the image to be detected passes palmprint living body detection, or it does not, i.e., whether the palmprint of the object to be detected in the image to be detected is a live palmprint.
Since the global fusion image carries all the texture information of the image to be detected itself as well as the global frequency domain characteristics of the image, combining these two different kinds of image information makes the detection more accurate and adaptable to many different scenarios; in particular, even high-definition screen-recaptured images can be detected well.
Step 206: when the first detection result indicates that the image to be detected belongs to a screen-recaptured image, directly determine that the image to be detected has failed living body detection.
Step 208: when the first detection result indicates that the image to be detected does not belong to a screen-recaptured image, obtain a palmprint image based on the palmprint region in the image to be detected, perform living body detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, to obtain a second detection result of the palmprint in the image to be detected, and determine the living body detection result of the palmprint of the object to be detected according to the first detection result and the second detection result.
Attack types in living body detection can be roughly divided into two categories: high-definition screen-recaptured images, mentioned above, which can be judged based on the global information of the image to be detected, and high-definition paper-recaptured images. The palmprint region of a paper-recaptured palmprint image has a large texture difference from a live palmprint. Based on this, to cope with different attack types, the computer device sets up two levels of detection: when the first detection result indicates that the image to be detected does not belong to a screen-recaptured palmprint image, the palmprint region in the image to be detected is further processed so as to pay more attention to the local information of the image to be detected, that is, the image information of the region where the palmprint is located.
The palmprint region is the region where the palmprint is located, which may be the region of the whole palm or the region of the palm center. The palmprint image obtained from the palmprint region may be obtained by scaling a rectangular region surrounding the palmprint in the image to be detected. Specifically, after acquiring the image to be detected, the computer device may extract the palmprint region in the image to be detected and crop the image to be detected according to the palmprint region to obtain the palmprint image.
In one embodiment, the computer device may use a palmprint extraction tool to cut out the palmprint region from the image to be detected, and then scale the palmprint region to a preset size to obtain the palmprint image.
In one embodiment, the computer device may perform palmprint detection on the image to be detected to determine the palmprint region in it, and crop the image to be detected according to the palmprint region to obtain the palmprint image.
In one embodiment, the computer device may acquire the original captured image, perform palmprint detection on it to determine the palmprint region in the captured image, and, after cropping the palmprint region out of the captured image, adjust the palmprint region to a second preset size to obtain the palmprint image.
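A minimal sketch of this cropping step, assuming a palmprint detector that returns an axis-aligned bounding box; the detection tool itself is left unspecified by the text, so `detect_palm_box` below is a hypothetical stand-in.

```python
import cv2
import numpy as np

def crop_palm(image: np.ndarray, box: tuple, size: int = 122) -> np.ndarray:
    """Cut the detected palmprint region out of the captured image and
    resize it to size x size x 3, ready for frequency domain fusion."""
    x0, y0, x1, y1 = box
    h, w = image.shape[:2]
    x0, y0 = max(0, x0), max(0, y0)          # clamp the box to the image bounds
    x1, y1 = min(w, x1), min(h, y1)
    return cv2.resize(image[y0:y1, x0:x1], (size, size))

# box = detect_palm_box(image)               # hypothetical palmprint detection tool
# palm_image = crop_palm(image, box)         # 122 x 122 x 3 palmprint image
```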
The palmprint image is an image reflecting the texture distribution characteristics of the palmprint in the spatial domain (also called the time domain), i.e., data representing the grayscale distribution characteristics of the palmprint image in the spatial domain. It includes grayscale information of three channels, each of which can be represented by the grayscale or intensity of the palmprint image at each pixel. Specifically, the palmprint image is an RGB image, a three-channel color pixel map whose channels correspond respectively to the red, green and blue components of each pixel.
Frequency domain transformation here is the process of converting the palmprint image from grayscale distribution to frequency domain distribution; the resulting frequency domain map represents the characteristics of the entire palmprint image in the frequency domain. The frequency domain transformation may be a fast Fourier transform, a wavelet transform, a Laplace transform, or the like.
The palmprint region is local image information within the image to be detected, so the local frequency domain map is a feature map reflecting the frequency domain distribution characteristics of the palmprint region in the image to be detected. Converting the palmprint image obtained from the palmprint region from grayscale distribution to frequency domain distribution makes it possible to observe the characteristics of the palmprint image from its overall frequency distribution. The local frequency domain map is obtained by performing frequency domain transformation on the palmprint image; its high-frequency components represent the parts of the palmprint image where the grayscale value changes abruptly, corresponding to detail information, and its low-frequency components represent the parts where the grayscale value is relatively even, corresponding to contour information. The local fusion image is a multi-channel image obtained by fusing the palmprint image with the local frequency domain map; it carries the image characteristics of the palmprint image in both the spatial domain and the frequency domain.
Similarly, for high-definition recaptured palmprint images whose texture features are not obvious, processing in the spatial domain can hardly achieve good results, whereas detection in the frequency domain works well. Therefore the computer device fuses the texture information of the palmprint image itself with the frequency domain information of the palmprint image to obtain the local fusion image before performing palmprint living body detection; combining these two different kinds of image information makes the detection more accurate and adaptable to many different scenarios.
In one embodiment, the computer device performs frequency domain transformation on the palmprint image to obtain the local frequency domain map, and fuses the palmprint image with the local frequency domain map to obtain the local fusion image.
The frequency domain transformation may be a fast Fourier transform, a wavelet transform, a Laplace transform, or the like. For example, the computer device may perform a fast Fourier transform on the palmprint image to obtain the local frequency domain map, and then fuse the three-channel palmprint image with the local frequency domain map by channel to obtain a four-channel image as the local fusion image.
In one embodiment, the computer device obtains the original captured image, crops out the palmprint region, and adjusts it to the second preset size, for example 122*122*3, to obtain the palmprint image, and then fuses the palmprint image with its corresponding local frequency domain map into one four-channel image as the local fusion image.
In one embodiment, the computer device may use a trained local detection network based on a neural network to perform living body detection on the local fusion image to obtain the second detection result of the palmprint in the image to be detected. The second detection result is one of: the palmprint image passes palmprint living body detection, or it does not, i.e., whether the palmprint of the object to be detected in the image to be detected is a live palmprint.
Since the local fusion image carries the texture information of the palmprint in the image to be detected as well as the frequency domain characteristics of that palmprint, combining these two different kinds of image information makes the detection more accurate and adaptable to many different scenarios; in particular, even high-definition paper-recaptured images can be detected well.
It should be noted that step 204 and step 208 are two independent steps; their execution order can be swapped, and they can also be executed in parallel.
As introduced above, the first detection result obtained by the computer device is a detection result obtained by performing living body detection using the global information of the image to be detected, and the second detection result is obtained by performing living body detection paying more attention to the local information in the image to be detected; combining the two detection results to jointly determine the living body detection result of the palmprint in the image to be detected is more accurate. The first detection result and the second detection result may both represent the probability that the palmprint in the image to be detected belongs to a palmprint in a recaptured image, or may both represent the probability that the palmprint in the image to be detected is a live palmprint.
In one embodiment, determining the living body detection result of the palmprint of the object to be detected according to the first detection result and the second detection result includes: when the first detection result represents a first probability that the palmprint of the object to be detected belongs to a palmprint in a screen-recaptured palmprint image, and the first probability is less than a first threshold, acquiring the second detection result, which represents a second probability that the palmprint of the object to be detected belongs to a palmprint in a paper-recaptured palmprint image; and when the second probability is less than a second threshold, determining that the palmprint of the object to be detected is a live palmprint.
Since the texture of a screen-recaptured palmprint image is not obvious enough, when the first detection result indicates that the first probability that the palmprint in the image to be detected belongs to a palmprint in a screen-recaptured palmprint image is less than the first threshold, the image to be detected does not belong to a screen-recaptured image and may be a live palmprint image; the second detection result, obtained by living body detection that pays more attention to local information, is then used further: when it indicates that the second probability that the palmprint in the image to be detected belongs to a palmprint in a paper-recaptured palmprint image is less than the second threshold, it can be affirmed with more certainty that the palmprint in the image to be detected is a live palmprint and the image passes living body detection. The first threshold and the second threshold may be the same value or different values; for example, both may be 0.4.
In one embodiment, the above method further includes: when the first probability is greater than the first threshold, determining that the palmprint of the object to be detected is a palmprint in a screen-recaptured palmprint image; and when the second probability is greater than the second threshold, determining that the palmprint of the object to be detected is a palmprint in a paper-recaptured palmprint image.
In this embodiment, when the first probability is greater than the first threshold, it can be directly determined that the palmprint in the image to be detected is a palmprint in a screen-recaptured palmprint image and the image fails living body detection; when the first probability is less than the first threshold, the second detection result obtained by living body detection paying more attention to local information is further used: when it indicates that the second probability that the palmprint belongs to a palmprint in a paper-recaptured palmprint image is greater than the second threshold, the palmprint of the object to be detected is determined to be a palmprint in a paper-recaptured palmprint image and the image fails living body detection.
In one embodiment, determining the living body detection result of the palmprint of the object to be detected according to the first detection result and the second detection result includes: when the first detection result represents a first probability that the palmprint of the object to be detected is a live palmprint and the second detection result represents a second probability that the palmprint of the object to be detected is a live palmprint, if the first probability is less than the first threshold, directly determining that the image to be detected is a recaptured palmprint image; if the first probability is greater than the first threshold, further checking whether the second probability is greater than the second threshold; if the second probability is less than the second threshold, directly determining that the image to be detected is a recaptured palmprint image; and if the second probability is also greater than the second threshold, finally determining that the image to be detected is a live palmprint image.
FIG. 3 is a schematic flowchart of the image processing method in one embodiment. Referring to FIG. 3, after obtaining the image to be detected, the computer device uses it to obtain the palmprint image, performs living body detection on the image to be detected and the palmprint image in two parallel paths, obtains the frequency domain map corresponding to each, fuses each path's input image with its corresponding frequency domain map before performing living body detection, obtains the corresponding detection results, and then uses the two detection results to determine the final palmprint living body detection result of the image to be detected.
In one embodiment, the above method further includes: when the living body detection result indicates that the palmprint in the image to be detected is a live palmprint, performing palmprint recognition on the palmprint in the image to be detected to obtain a palmprint recognition result, and performing identity authentication on the object to be detected according to the palmprint recognition result.
The image to be detected may be an image on which palmprint recognition is to be performed; the palmprint living body detection step provided by the embodiments of this application can be deployed directly before the palm recognition step. After the image to be detected passes palmprint living body detection, it enters the subsequent palmprint recognition process; when palmprint recognition passes, the image to be detected is determined to pass identity authentication. When the image to be detected fails palmprint living body detection, an error can be reported and a retry prompted. This palmprint living body detection can be applied in scenarios such as online palmprint payment, offline palmprint payment, palmprint access control unlocking systems, mobile phone palmprint recognition, and automatic palmprint recognition.
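As a sketch of how this liveness gate could sit in front of recognition (reusing the `judge` helper from the earlier sketch): the scoring, recognition and authentication callables are hypothetical placeholders for the networks and downstream systems described here.

```python
from typing import Callable

def verify_identity(
    image,
    run_global: Callable[[object], float],       # returns score 1 (global network)
    run_local: Callable[[object], float],        # returns score 2 (local network)
    recognize_palmprint: Callable[[object], str],
    authenticate: Callable[[str], bool],
) -> str:
    """Liveness detection first; only a live palmprint proceeds to
    palmprint recognition and identity authentication."""
    if judge(run_global(image), run_local(image)) != "live palmprint image":
        return "liveness check failed, please retry"    # report error, prompt retry
    return "authenticated" if authenticate(recognize_palmprint(image)) else "rejected"
```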
With the above image processing method, on the one hand, living body detection is performed on a fusion image obtained by fusing the frequency domain information of the image with the texture information of the image itself; combining these two different kinds of image information makes detection more accurate and adaptable to many different scenarios, capturing the image to be detected requires no special hardware device, and good performance is achieved under different lighting environments, so the method has higher universality. On the other hand, to overcome the problem that recaptured images of different attack types have different characteristics, so that a single model cannot adapt to different attack types, two levels of detection are adopted: global detection on the entire image and local detection on the palmprint region. In this way good detection effects are achieved against different attack types, and either detection process can be optimized in a targeted way without affecting the detection effect of the other.
In one embodiment, as shown in FIG. 4, performing living body detection on the global fusion image obtained by fusing the global frequency domain map corresponding to the image to be detected with the image to be detected, to obtain the first detection result of the palmprint in the image to be detected, includes:
Step 402: input the global fusion image into the trained living body detection model.
The living body detection model is a machine learning model that the computer device has trained in advance on multiple sample images so that it has the ability to perform palmprint living body detection on images. The computer device used for training may be a terminal or a server. The living body detection model may be implemented with a neural network model, for example a convolutional neural network model.
The living body detection model includes a global detection network and a local detection network. By combining the global features of the image to be detected with the features of the palmprint region, a two-level network is used for judgment; the two levels respectively defend against screen-recaptured palmprint images and paper-recaptured palmprint images. It should be noted that the order of the two levels of detection can be arbitrary, and the embodiments of this application do not limit it.
In one embodiment, the computer device may set the model structures of the two levels of networks in advance to obtain their respective initial neural network models, and then train the initial neural network models through sample images and corresponding labeling categories to obtain the respective trained model parameters. In this way, when palmprint living body detection needs to be performed on an image to be detected, the computer device can obtain the respective pre-trained model parameters, import them into the preset model structures of the respective neural network models, and obtain the living body detection model from the two-level detection network.
Step 404: through the global detection network in the living body detection model, extract the image features of the global fusion image, and based on the image features output the probability that the palmprint of the object to be detected in the image to be detected belongs to a palmprint in a screen-recaptured palmprint image, as the first detection result.
In this embodiment, the living body detection model includes a global detection network and a local detection network, both of which may be network structures implemented based on convolutional neural networks. Because the sizes of the images they process differ, the global detection network and the local detection network are trained separately. The global detection network is used to perform palmprint living body detection on the image to be detected to obtain the first detection result corresponding to the image to be detected, and the local detection network is used to perform palmprint living body detection on the palmprint image cut out of the image to be detected to obtain the second detection result corresponding to the image to be detected.
Image features are the characteristics that distinguish one image from other images, or a collection of such characteristics; they are image descriptors used to describe an image. Every image has its own features that distinguish it from other kinds of images, such as brightness, edges, texture and color; some features can only be obtained through transformation or processing, such as the spectrum, the histogram and principal components. The global fusion image is a four-channel image obtained by fusing the image to be detected with its corresponding global frequency domain map, and its image features are hidden in this four-channel matrix. The computer device can extract image features from it through the global detection network; the extracted features must not only describe the original image to be detected well, but also distinguish it well from other images. In terms of the extracted image features, live palmprint images differ little from one another, while live palmprint images and recaptured palmprint images differ greatly.
It should be noted that the embodiments of this application do not limit the internal network structures of the global detection network and the local detection network; designers can set them according to actual needs, as long as the two networks can perform palmprint living body detection on images. For example, both the global detection network and the local detection network may use Resnet18 as the network backbone; Resnet18 has good classification performance, and its moderate depth also guarantees the timeliness of forward inference.
In one embodiment, as shown in FIG. 5, performing living body detection on the local fusion image obtained by fusing the local frequency domain map corresponding to the palmprint image with the palmprint image, to obtain the second detection result of the palmprint in the image to be detected, includes:
Step 502: input the local fusion image into the trained living body detection model.
As above, the local fusion image is obtained by fusing the palmprint image of the image to be detected with the local frequency domain map corresponding to the palmprint image, and is an image that pays more attention to the local information of the palmprint region in the image to be detected. The computer device can input it into the local detection network of the trained living body detection model for living body detection, realizing second-level detection of the image to be detected.
Step 504: through the local detection network in the living body detection model, extract the image features of the local fusion image, and based on the image features output the probability that the palmprint of the object to be detected in the image to be detected belongs to a palmprint in a paper-recaptured palmprint image, as the second detection result.
In a specific embodiment, the computer device may set up a first model and a second model based on neural networks; after model training is performed on the first model and the second model respectively through sample images, the trained global detection network and local detection network are obtained, and cascading them yields the trained palmprint living body detection model.
FIG. 6 is a schematic framework diagram of the image processing method in one embodiment. Referring to FIG. 6, the computer device acquires the image to be detected for palmprint living body detection, adjusts it to an RGB image of size 224*224*3, performs frequency domain transformation on it to obtain the corresponding global frequency domain map, and connects the RGB image of the image to be detected and the global frequency domain map into a 224*224*4 four-channel map, i.e., the global fusion image, which is input to the global detection network to obtain the first detection result of the image to be detected. At the same time, the computer device cuts the palmprint region out of the image to be detected, adjusts it to a size of 122*122*3 to obtain the palmprint image, performs frequency domain transformation on it to obtain the corresponding local frequency domain map, and connects the palmprint image and the local frequency domain map into a 122*122*4 four-channel map, i.e., the local fusion image, which is input to the local detection network to obtain the second detection result. The final judgment logic for the image to be detected passing palmprint living body detection is as follows: if the first detection result < 0.4, the image to be detected is directly judged to be a recaptured palmprint image; if the first detection result is greater than 0.4, whether the second detection result is less than 0.4 is checked, and if so, the image to be detected is directly judged to be a recaptured palmprint image; if the second detection result is also greater than 0.4, the image to be detected is finally determined to be a live palmprint image.
In one embodiment, before step 202, the above image processing method further includes a model training step, which specifically includes: acquiring a first sample set and a second sample set, the sample images in the first sample set and the second sample set including palmprints; using the first sample images in the first sample set to perform model training on a first model based on a neural network to obtain the global detection network; and obtaining sample palmprint images according to the palmprint regions of the second sample images in the second sample set, and using the sample palmprint images to perform model training on a second model based on a neural network to obtain the local detection network.
The first model and the second model have preset model structures, and their model parameters are initial model parameters. The initial model parameters are updated through continuous training to obtain trained model parameters; importing the trained model parameters into models of the same framework yields a global detection network and a local detection network with palmprint living body detection capability, and thus the living body detection model. It should be noted that the global detection network and the local detection network can be deployed on the same computer device or deployed separately, in which case images to be detected can be detected in parallel, improving detection efficiency.
In one embodiment, the labeling category of a first sample image in the first sample set is one of a live palmprint image and a screen-recaptured palmprint image, and the labeling category of a second sample image in the second sample set is one of a live palmprint image and a paper-recaptured palmprint image.
In one embodiment, the computer device may first perform palmprint detection on a second sample image to determine its palmprint region, and crop the second sample image according to the palmprint region to obtain the sample palmprint image.
Specifically, the computer device may acquire the first sample set and the second sample set, and use the first sample images in the first sample set to adjust the model parameters of the first model; each training sample in the first sample set includes a first sample image and the labeling category corresponding to it. During model training, each first sample image and its corresponding labeling category are taken as input in turn; the first sample image is input into the first model for processing, the model parameters are adjusted according to the loss constructed from the processing result output by the current model and the labeling category of the first sample image, and the next training sample is processed based on the adjusted model parameters, repeating continuously until the trained global detection network is obtained.
Similarly, the second sample images in the second sample set are used to adjust the model parameters of the second model; each training sample in the second sample set includes a second sample image and the labeling category corresponding to it. During model training, each second sample image and its corresponding labeling category are taken as input in turn; a sample palmprint image is obtained from the second sample image and input into the second model for processing, the model parameters are adjusted according to the loss constructed from the processing result output by the current model and the labeling category of the second sample image, and the next training sample is processed based on the adjusted model parameters, repeating continuously until the trained local detection network is obtained.
In one embodiment, as shown in FIG. 7, the training steps of the global detection network include steps 702 to 706:
Step 702: perform living body detection, through the first model, on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, to obtain the first detection result of the palmprint in the first sample image.
Optionally, the computer device may perform frequency domain transformation on the first sample image to obtain the corresponding global frequency domain map, fuse the first sample image with the global frequency domain map to obtain the global fusion image, and perform living body detection on the global fusion image through the first model to obtain the first detection result of the palmprint in the first sample image.
Optionally, the computer device may extract the image features of the global fusion image through the first model and, based on the image features, output the probability that the palmprint in the first sample image belongs to a palmprint in a screen-recaptured palmprint image, as the first detection result.
Optionally, the computer device may adjust the first sample image to the first preset size, then perform frequency domain transformation on the adjusted image to obtain the global frequency domain map, and fuse the adjusted image with the global frequency domain map to obtain the global fusion image.
Step 704: determine the first loss according to the first detection result and the labeling category of the first sample image.
The labeling category of the first sample image is one of a live palmprint and a non-live palmprint, and can be represented by 0 or 1; for example, when the palmprint in the first sample image belongs to a palmprint in a screen-recaptured palmprint image, the corresponding labeling category can be represented by 1, and when it is a live palmprint, by 0. The first detection result may be the probability that the palmprint in the first sample image belongs to a palmprint in a screen-recaptured palmprint image; the terminal may obtain the labeling category of the first sample image and the first detection result obtained by living body detection through the first model, and determine the first loss based on the difference between the two. The first loss may be a cross-entropy loss.
Step 706: continue training after adjusting the model parameters of the first model according to the first loss, until the global detection network is obtained when the training ends.
The first loss is used to adjust the first model in the direction of reducing the difference between the labeling category of the first sample image and the first detection result, so as to ensure that the trained global detection network can accurately perform palmprint living body detection on images to be detected.
Specifically, after obtaining the first loss, the terminal may use a stochastic gradient descent algorithm to adjust the model parameters in the direction that reduces the difference between the labeling category corresponding to the first sample image and the first detection result; after multiple adjustments, a global detection network capable of accurate living body detection is obtained.
In one embodiment, as shown in FIG. 8, the training steps of the local detection network include steps 802 to 806:
Step 802: perform living body detection, through the second model, on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, to obtain the second detection result of the palmprint in the second sample image.
Optionally, the computer device may perform frequency domain transformation on the sample palmprint image to obtain the corresponding local frequency domain map, fuse the sample palmprint image with the local frequency domain map to obtain the local fusion image, and perform living body detection on the local fusion image through the second model to obtain the second detection result of the palmprint in the second sample image.
Optionally, the computer device may extract the image features of the local fusion image through the second model and, based on the image features, output the probability that the palmprint in the second sample image belongs to a palmprint in a paper-recaptured palmprint image, as the second detection result.
Optionally, the computer device may adjust the sample palmprint image to the second preset size, then perform frequency domain transformation on the adjusted image to obtain the local frequency domain map, and fuse the adjusted image with the local frequency domain map to obtain the local fusion image.
Step 804: determine the second loss according to the second detection result and the labeling category of the second sample image.
The labeling category of the second sample image is one of a live palmprint and a non-live palmprint, and can be represented by 0 or 1; for example, when the palmprint in the second sample image belongs to a palmprint in a paper-recaptured palmprint image, the corresponding labeling category can be represented by 1, and when it is a live palmprint, by 0. The second detection result may be the probability that the palmprint in the second sample image belongs to a palmprint in a paper-recaptured palmprint image; the terminal may obtain the labeling category of the second sample image and the second detection result obtained by living body detection through the second model, and determine the second loss based on the difference between the two. The second loss may be a cross-entropy loss.
Step 806: continue training after adjusting the model parameters of the second model according to the second loss, until the local detection network is obtained when the training ends.
The second loss is used to adjust the second model in the direction of reducing the difference between the labeling category of the second sample image and the second detection result, so as to ensure that the trained local detection network can accurately perform palmprint living body detection on images to be detected.
Specifically, after obtaining the second loss, the terminal may use a stochastic gradient descent algorithm to adjust the model parameters in the direction that reduces the difference between the labeling category corresponding to the second sample image and the second detection result; after multiple adjustments, a local detection network capable of accurate living body detection is obtained.
In one embodiment, as shown in FIG. 9, a processing method for a living body detection model is provided. The method is described by taking its application to a computer device as an example and includes the following steps:
Step 902: acquire a first sample image in the first sample set, perform living body detection through the first model based on a neural network on the global fusion image obtained by fusing the global frequency domain map corresponding to the first sample image with the first sample image, obtain the first detection result of the palmprint in the first sample image, adjust the model parameters of the first model according to the first loss determined based on the first detection result and the labeling category of the first sample image, and then return to the step of acquiring a first sample image in the first sample set to continue training, until the global detection network is obtained when the training ends.
Step 904: acquire a second sample image in the second sample set, obtain a sample palmprint image according to the palmprint region of the second sample image, perform living body detection through the second model based on a neural network on the local fusion image obtained by fusing the local frequency domain map corresponding to the sample palmprint image with the sample palmprint image, obtain the second detection result of the palmprint in the second sample image, adjust the model parameters of the second model according to the second loss determined based on the second detection result and the labeling category of the second sample image, and then return to the step of acquiring a second sample image in the second sample set to continue training, until the local detection network is obtained when the training ends.
Step 906: obtain, according to the global detection network and the local detection network, a living body detection model for performing palmprint living body detection on images.
For specific embodiments of the above steps, reference may be made to the descriptions in the foregoing embodiments on model training.
In the above processing method for a living body detection model, the global detection network and the local detection network in the neural-network-based living body detection model are obtained through independent training. The global detection network can perform palmprint living body detection on the fusion image obtained by fusing the frequency domain information of the image with the texture information of the image itself; combining these two different kinds of global image information makes detection more accurate and adaptable to many different scenarios, such as the detection of screen-recaptured palmprint images. The local detection network can perform palmprint living body detection on the fusion image obtained by fusing the frequency domain information of the palmprint region with the texture information of the palmprint region itself; since more attention is paid to the local information of the image, detection accuracy can be further improved.
FIG. 10 is a schematic framework diagram of the training process of the liveness detection model in a specific embodiment. Referring to FIG. 10, the computer device sets up CNN1 and CNN2. In the model training stage, the input size for CNN1 is 224*224*4, implemented as follows: first, the sample images (including live palm print images of real persons and high-definition screen-recaptured images) are resized to 224*224*3; then a fast Fourier transform is used to compute the corresponding frequency-domain map of each, with size 224*224*1; and each RGB image is concatenated with its frequency-domain map to form a 224*224*4 four-channel map. Since the main task of CNN1 is to classify live palm print images and high-definition screen-recaptured palm print images, a binary cross-entropy loss function is used to constrain the output of CNN1 so that it discriminates well between the two: if the input sample is a live palm print image, the output of CNN1 should be a probability value close to 1, whereas if the input sample is a high-definition screen-recaptured image, the output should be a probability value close to 0. During training, live palm print images and high-definition screen-recaptured images are continually fed to CNN1 and the cross-entropy loss is continually reduced by the optimizer; when the loss has decreased to a certain level and no longer fluctuates significantly, the training of CNN1 is considered to have converged.
Similarly, CNN2 differs from CNN1 in training only in its input data. In the model training stage, the input size for CNN2 is 122*122*4, implemented as follows: first, a palm print detection tool is used to perform palm print detection on the sample images (including live palm print images and paper-recaptured palm print images); the palm print region is then cropped from the original RGB sample image according to the detection result and resized to 122*122*3; a fast Fourier transform is used to compute the corresponding frequency-domain map of each, with size 122*122*1; and finally each RGB image is concatenated with its frequency-domain map to form a 122*122*4 four-channel map. Since CNN2 mainly performs binary classification between live palm print images and paper-recaptured palm print images, after training CNN2 will output a probability value close to 0 for an input paper-recaptured palm print image. During training, live palm print images and paper-recaptured palm print images are continually fed to CNN2 and the cross-entropy loss is continually reduced by the optimizer; when the loss has decreased to a certain level and no longer fluctuates significantly, the training of CNN2 is considered to have converged.
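A sketch of this local-branch preprocessing, assuming an abstract palm print detector that returns a bounding box and reusing fuse_with_spectrum from the earlier sketch (the OpenCV resize call and the detector interface are assumptions; the patent only requires a palm print detection tool and a resize to 122*122*3):

    import cv2
    import numpy as np

    def local_branch_input(rgb_image, detect_palm_bbox):
        """Build the 122*122*4 fused input for CNN2 from a full RGB image."""
        x, y, w, h = detect_palm_bbox(rgb_image)   # palm print detection result
        palm = rgb_image[y:y + h, x:x + w]         # crop the palm print region
        palm = cv2.resize(palm, (122, 122))        # resize to 122*122*3
        return fuse_with_spectrum(palm.astype(np.float32))  # append the FFT channel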
In the model testing stage, the test image is processed at two sizes (224*224*4 and 122*122*4), and the results are fed into CNN1 and CNN2 respectively to compute scores. Suppose an image to be detected of unknown type arrives: the four-channel large-image data (of size 224*224*4) is obtained in the same way as in training; meanwhile, palm print detection and cropping are performed on the three-channel RGB image to be detected, the cropped palm print image is resized to 122*122*3, and the corresponding frequency-domain map is computed to obtain the four-channel small-image data (of size 122*122*4). The large-image data is fed to CNN1 and the small-image data to CNN2, and the corresponding probability values, denoted score 1 and score 2, are computed; both are decimals between 0 and 1. The decision logic for live versus recaptured palm print images is as follows: if score 1 < 0.4, the type is directly judged to be a recaptured palm print image and the process exits; otherwise, score 2 is examined; if score 2 < 0.4, the type is judged to be a recaptured palm print image; if score 2 >= 0.4, the type is judged to be a live palm print image.
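For instance, continuing the illustrative sketch above, a test image yielding score 1 = 0.73 and score 2 = 0.21 would pass CNN1 but be rejected by CNN2, and would therefore be judged a recaptured palm print image.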
In the above embodiment, palm print liveness is judged from images that fuse frequency-domain and RGB information, combining two different information sources to solve the palm print liveness problem, which is more robust than the commonly used detection based on a single source of image information. In addition, a two-stage detection network is set up for different attack types, so that one stage can be optimized in a targeted way without affecting the detection performance of the other, avoiding the difficulty of optimizing a single model; the process also takes little time, improving user experience.
In a specific embodiment, the image processing method includes the following steps:
1. Acquire a first sample set and a second sample set, where the sample images in both sets include palm prints, the annotated class of each first sample image in the first sample set is one of live palm print image and screen-recaptured palm print image, and the annotated class of each second sample image in the second sample set is one of live palm print image and paper-recaptured palm print image;
2. Perform liveness detection, through the first model, on a global fused image obtained by fusing the first sample image with its corresponding global frequency-domain map, to obtain a first detection result for the palm print in the first sample image;
3. Determine a first loss according to the first detection result and the annotated class of the first sample image;
4. Adjust the model parameters of the first model according to the first loss and continue training, until training ends and the global detection network is obtained;
5. Perform liveness detection, through the second model, on a local fused image obtained by fusing the sample palm print image with its corresponding local frequency-domain map, to obtain a second detection result for the palm print in the second sample image;
6. Determine a second loss according to the second detection result and the annotated class of the second sample image;
7. Adjust the model parameters of the second model according to the second loss and continue training, until training ends and the local detection network is obtained;
8. Acquire an image to be detected, the image to be detected including the palm print of an object to be detected;
9. Perform frequency-domain transform processing on the image to be detected to obtain a global frequency-domain map, and fuse the image to be detected with the global frequency-domain map to obtain a global fused image;
10. Through the global detection network, extract image features of the global fused image and, based on those features, output the probability that the palm print of the object to be detected in the image belongs to a screen-recaptured palm print image, as the first detection result;
11. Perform palm print detection on the image to be detected to determine the palm print region in the image;
12. Crop the image to be detected according to the palm print region to obtain a palm print image;
13. Perform frequency-domain transform processing on the palm print image to obtain a local frequency-domain map;
14. Fuse the palm print image with the local frequency-domain map to obtain a local fused image;
15. Through the local detection network, extract image features of the local fused image and, based on those features, output the probability that the palm print of the object to be detected in the image belongs to a paper-recaptured palm print image, as the second detection result;
16. When the first detection result represents a first probability that the palm print of the object to be detected belongs to a screen-recaptured palm print image, and the first probability is less than a first threshold, obtain the second detection result, which represents a second probability that the palm print belongs to a paper-recaptured palm print image; when the second probability is less than a second threshold, determine that the palm print of the object to be detected is a live palm print;
17. When the liveness detection result indicates that the palm print in the image to be detected is a live palm print, perform palm print recognition on the palm print in the image to obtain a palm print recognition result, and authenticate the identity of the object to be detected according to the recognition result.
It should be understood that although the steps in the above flowcharts are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 11, an image processing apparatus 1100 is provided. The apparatus may be implemented as a software module or a hardware module, or a combination of the two, as part of a computer device, and specifically includes an acquisition module 1102, a global detection module 1104, a local detection module 1106 and a determination module 1108, where:
the acquisition module 1102 is configured to acquire an image to be detected, the image to be detected including a biometric feature of an object to be detected;
the global detection module 1104 is configured to perform liveness detection on a global fused image obtained by fusing the image to be detected with its corresponding global frequency-domain map, to obtain a first detection result for the image to be detected, and, when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determine that the image to be detected fails the liveness detection;
the local detection module 1106 is configured to, when the first detection result indicates that the image to be detected is not a screen-recaptured image, obtain a biometric feature image based on the biometric feature in the image to be detected, and perform liveness detection on a local fused image obtained by fusing the biometric feature image with its corresponding local frequency-domain map, to obtain a second detection result for the image to be detected;
the determination module 1108 is configured to determine the liveness detection result for the image to be detected according to the first detection result and the second detection result.
In one embodiment, taking a palm print as the biometric feature as an example, the apparatus specifically includes:
the acquisition module 1102, configured to acquire an image to be detected, the image to be detected including the palm print of an object to be detected;
the global detection module 1104, configured to perform liveness detection on a global fused image obtained by fusing the image to be detected with its corresponding global frequency-domain map, to obtain a first detection result for the palm print in the image to be detected, and, when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determine that the image to be detected fails the liveness detection;
the local detection module 1106, configured to, when the first detection result indicates that the image to be detected is not a screen-recaptured image, obtain a palm print image based on the palm print in the image to be detected, and perform liveness detection on a local fused image obtained by fusing the palm print image with its corresponding local frequency-domain map, to obtain a second detection result for the image to be detected;
the determination module 1108, configured to determine the liveness detection result for the palm print of the object to be detected according to the first detection result and the second detection result.
In one embodiment, the image processing apparatus 1100 further includes a fusion module configured to perform frequency-domain transform processing on the image to be detected to obtain the global frequency-domain map, and to fuse the image to be detected with the global frequency-domain map to obtain the global fused image.
In one embodiment, the global detection module 1104 is further configured to input the global fused image into a trained liveness detection model, extract image features of the global fused image through the global detection network in the liveness detection model, and output, based on those features, the probability that the palm print of the object to be detected in the image belongs to a screen-recaptured palm print image, as the first detection result.
In one embodiment, the local detection module 1106 is further configured to perform palm print detection on the image to be detected to determine the palm print region in the image, and to crop the image according to the palm print region to obtain the palm print image.
In one embodiment, the image processing apparatus further includes a fusion module configured to perform frequency-domain transform processing on the palm print image to obtain the local frequency-domain map, and to fuse the palm print image with the local frequency-domain map to obtain the local fused image.
In one embodiment, the local detection module 1106 is further configured to input the local fused image into the trained liveness detection model, extract image features of the local fused image through the local detection network in the liveness detection model, and output, based on those features, the probability that the palm print of the object to be detected in the image belongs to a paper-recaptured palm print image, as the second detection result.
In one embodiment, the determination module 1108 is further configured to, when the first detection result represents a first probability that the palm print of the object to be detected belongs to a screen-recaptured palm print image and the first probability is less than a first threshold, obtain the second detection result, which represents a second probability that the palm print of the object to be detected belongs to a paper-recaptured palm print image, and, when the second probability is less than a second threshold, determine that the palm print of the object to be detected is a live palm print.
In the foregoing embodiment, the determination module 1108 is further configured to determine that the palm print of the object to be detected is a palm print from a screen-recaptured palm print image when the first probability is greater than the first threshold, and that it is a palm print from a paper-recaptured palm print image when the second probability is greater than the second threshold.
In one embodiment, the image processing apparatus further includes an image acquisition module configured to acquire an original captured image; resize the captured image to a first preset size to obtain the image to be detected; perform palm print detection on the captured image to determine the palm print region in it; and, after cropping the palm print region from the captured image, resize the palm print region to a second preset size to obtain the palm print image.
In one embodiment, the image processing apparatus further includes a training module, which includes a sample image acquisition unit, a global detection network training unit and a local detection network training unit, where:
the sample image acquisition unit is configured to acquire a first sample set and a second sample set, the sample images in both sets including palm prints;
the global detection network training unit is configured to perform model training on a neural-network-based first model using the first sample images in the first sample set to obtain the global detection network;
the local detection network training unit is configured to obtain sample palm print images from the palm print regions of the second sample images in the second sample set, and to perform model training on a neural-network-based second model using the sample palm print images to obtain the local detection network.
In one embodiment, the global detection network training unit is specifically configured to perform liveness detection, through the first model, on a global fused image obtained by fusing the first sample image with its corresponding global frequency-domain map, to obtain the first detection result for the palm print in the first sample image; determine a first loss according to the first detection result and the annotated class of the first sample image; and adjust the model parameters of the first model according to the first loss and continue training, until training ends and the global detection network is obtained.
In one embodiment, the local detection network training unit is specifically configured to perform liveness detection, through the second model, on a local fused image obtained by fusing the sample palm print image with its corresponding local frequency-domain map, to obtain the second detection result for the palm print in the second sample image; determine a second loss according to the second detection result and the annotated class of the second sample image; and adjust the model parameters of the second model according to the second loss and continue training, until training ends and the local detection network is obtained.
In one embodiment, the image processing apparatus further includes a palm print recognition module configured to, when the liveness detection result indicates that the palm print in the image to be detected is a live palm print, perform palm print recognition on the palm print in the image to obtain a palm print recognition result, and authenticate the identity of the object to be detected according to the recognition result.
In one embodiment, as shown in FIG. 12, a processing apparatus 1200 for a liveness detection model is provided. The apparatus may be implemented as a software module or a hardware module, or a combination of the two, as part of a computer device, and specifically includes a global detection network training module 1202, a local detection network training module 1204 and a detection model acquisition module 1206, where:
the global detection network training module 1202 is configured to acquire a first sample image from the first sample set; perform liveness detection, through a neural-network-based first model, on a global fused image obtained by fusing the first sample image with its corresponding global frequency-domain map, to obtain a first detection result for the first sample image; adjust the model parameters of the first model according to a first loss determined from the first detection result and the annotated class of the first sample image; and then return to the step of acquiring a first sample image from the first sample set to continue training, until training ends and the global detection network is obtained;
the local detection network training module 1204 is configured to acquire a second sample image from the second sample set; obtain a sample biometric feature image according to the biometric feature of the second sample image; perform liveness detection, through a neural-network-based second model, on a local fused image obtained by fusing the sample biometric feature image with its corresponding local frequency-domain map, to obtain a second detection result for the second sample image; adjust the model parameters of the second model according to a second loss determined from the second detection result and the annotated class of the second sample image; and then return to the step of acquiring a second sample image from the second sample set to continue training, until training ends and the local detection network is obtained;
the detection model acquisition module 1206 is configured to obtain, according to the global detection network and the local detection network, a liveness detection model for performing liveness detection on images.
In one embodiment, taking a palm print as the biometric feature as an example, the apparatus specifically includes:
the global detection network training module 1202, configured to acquire a first sample image from the first sample set; perform liveness detection, through a neural-network-based first model, on a global fused image obtained by fusing the first sample image with its corresponding global frequency-domain map, to obtain the first detection result for the palm print in the first sample image; adjust the model parameters of the first model according to a first loss determined from the first detection result and the annotated class of the first sample image; and then return to the step of acquiring a first sample image from the first sample set to continue training, until training ends and the global detection network is obtained;
the local detection network training module 1204, configured to acquire a second sample image from the second sample set; obtain a sample palm print image from the palm print region of the second sample image; perform liveness detection, through a neural-network-based second model, on a local fused image obtained by fusing the sample palm print image with its corresponding local frequency-domain map, to obtain the second detection result for the palm print in the second sample image; adjust the model parameters of the second model according to a second loss determined from the second detection result and the annotated class of the second sample image; and then return to the step of acquiring a second sample image from the second sample set to continue training, until training ends and the local detection network is obtained;
the detection model acquisition module 1206, configured to obtain, according to the global detection network and the local detection network, a liveness detection model for performing palm print liveness detection on images.
For specific limitations on the image processing apparatus 1100 and the processing apparatus 1200 for the liveness detection model, reference may be made to the limitations on the image processing method and the processing method for the liveness detection model above, which are not repeated here.
Each module in the above image processing apparatus and processing apparatus for the liveness detection model may be implemented wholly or partly by software, hardware or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in a computer device, or stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a terminal or a server, and its internal structure may be as shown in FIG. 13. When the computer device is a terminal, it may further include an image capture apparatus such as a camera. The computer device includes a processor, a memory and a network interface connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for running them. The network interface of the computer device is used to communicate with other external computer devices through a network connection. The computer-readable instructions, when executed by the processor, implement an image processing method and/or a processing method for a liveness detection model.
Those skilled in the art can understand that the structure shown in FIG. 13 is merely a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, including a memory and a processor. The memory stores computer-readable instructions, and the processor implements the steps in the above method embodiments when executing the computer-readable instructions.
In one embodiment, a computer-readable storage medium is provided, storing computer-readable instructions that, when executed by a processor, implement the steps in the above method embodiments.
In one embodiment, a computer program product or computer program is provided. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps in the above method embodiments.
Those of ordinary skill in the art can understand that all or part of the procedures in the methods of the above embodiments may be implemented by computer-readable instructions instructing relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed may include the procedures of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, etc. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be pointed out that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (20)

  1. An image processing method, performed by a computer device, the method comprising:
    acquiring an image to be detected, the image to be detected comprising a biometric feature of an object to be detected;
    fusing a global frequency-domain map corresponding to the image to be detected with the image to be detected to obtain a global fused image;
    performing liveness detection on the global fused image to obtain a first detection result corresponding to the image to be detected;
    when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determining that the image to be detected fails the liveness detection; and
    when the first detection result indicates that the image to be detected is not a screen-recaptured image,
    obtaining a biometric feature image based on the biometric feature in the image to be detected, fusing a local frequency-domain map corresponding to the biometric feature image with the biometric feature image to obtain a local fused image, performing liveness detection on the local fused image to obtain a second detection result corresponding to the image to be detected, and determining a liveness detection result corresponding to the image to be detected according to the first detection result and the second detection result.
  2. The method according to claim 1, wherein the method further comprises:
    performing frequency-domain transform processing on the image to be detected to obtain the global frequency-domain map.
  3. The method according to claim 1, wherein the performing liveness detection on the global fused image to obtain a first detection result corresponding to the image to be detected comprises:
    inputting the global fused image into a trained liveness detection model; and
    extracting, through a global detection network in the liveness detection model, image features of the global fused image, and outputting, based on the image features, a probability that the image to be detected is a screen-recaptured image, as the first detection result.
  4. The method according to claim 1, wherein the obtaining a biometric feature image based on the biometric feature in the image to be detected comprises:
    performing biometric feature detection on the image to be detected to determine a biometric feature region in the image to be detected; and
    cropping the image to be detected according to the biometric feature region to obtain the biometric feature image.
  5. The method according to claim 1, wherein the method further comprises:
    performing frequency-domain transform processing on the biometric feature image to obtain the local frequency-domain map.
  6. The method according to claim 1, wherein the performing liveness detection on the local fused image to obtain a second detection result corresponding to the image to be detected comprises:
    inputting the local fused image into a trained liveness detection model; and
    extracting, through a local detection network in the liveness detection model, image features of the local fused image, and outputting, based on the image features, a probability that the image to be detected is a paper-recaptured image, as the second detection result.
  7. The method according to claim 1, wherein the determining a liveness detection result corresponding to the image to be detected according to the first detection result and the second detection result comprises:
    when the first detection result represents a first probability that the biometric feature of the object to be detected belongs to a screen-recaptured image, and the first probability is less than a first threshold,
    obtaining the second detection result, the second detection result representing a second probability that the biometric feature of the object to be detected belongs to a paper-recaptured image; and
    when the second probability is less than a second threshold, determining that the image to be detected passes the liveness detection.
  8. The method according to claim 7, wherein the method further comprises:
    when the first probability is greater than the first threshold, determining that the biometric feature of the object to be detected is a biometric feature in a screen-recaptured image; and
    when the second probability is greater than the second threshold, determining that the biometric feature of the object to be detected is a biometric feature in a paper-recaptured image.
  9. The method according to claim 1, wherein the method further comprises:
    acquiring an original captured image;
    resizing the captured image to a first preset size to obtain the image to be detected;
    performing biometric feature detection on the captured image to determine a biometric feature region in the captured image; and
    after cropping the biometric feature region from the captured image, resizing the biometric feature region to a second preset size to obtain the biometric feature image.
  10. The method according to claim 1, wherein the method further comprises:
    acquiring a first sample set and a second sample set, sample images in the first sample set and the second sample set comprising biometric features;
    performing model training on a neural-network-based first model using first sample images in the first sample set to obtain a global detection network; and
    obtaining sample biometric feature images according to biometric features of second sample images in the second sample set, and performing model training on a neural-network-based second model using the sample biometric feature images to obtain a local detection network.
  11. The method according to claim 10, wherein the performing model training on a neural-network-based first model using first sample images in the first sample set to obtain a global detection network comprises:
    fusing a global frequency-domain map corresponding to the first sample image with the first sample image to obtain a global fused image, and performing liveness detection on the global fused image through the first model to obtain a first detection result corresponding to the first sample image;
    determining a first loss according to the first detection result and an annotated class of the first sample image; and
    adjusting model parameters of the first model according to the first loss and continuing training, until training ends and the global detection network is obtained.
  12. The method according to claim 10, wherein the performing model training on a neural-network-based second model using the sample biometric feature images to obtain a local detection network comprises:
    fusing a local frequency-domain map corresponding to the sample biometric feature image with the sample biometric feature image to obtain a local fused image;
    performing liveness detection on the local fused image through the second model to obtain a second detection result corresponding to the second sample image;
    determining a second loss according to the second detection result and an annotated class of the second sample image; and
    adjusting model parameters of the second model according to the second loss and continuing training, until training ends and the local detection network is obtained.
  13. The method according to any one of claims 1 to 12, wherein the method further comprises:
    when the liveness detection result indicates that the image to be detected passes the liveness detection,
    recognizing the biometric feature in the image to be detected to obtain a recognition result; and
    authenticating the identity of the object to be detected according to the recognition result.
  14. A processing method for a liveness detection model, performed by a computer device, the method comprising:
    acquiring a first sample image from a first sample set; fusing a global frequency-domain map corresponding to the first sample image with the first sample image to obtain a global fused image; performing liveness detection on the global fused image through a neural-network-based first model to obtain a first detection result corresponding to the first sample image; determining a first loss based on the first detection result and an annotated class of the first sample image; and, after adjusting model parameters of the first model according to the first loss, returning to the step of acquiring a first sample image from the first sample set to continue training, until training ends and a global detection network is obtained;
    acquiring a second sample image from a second sample set; obtaining a sample biometric feature image according to a biometric feature of the second sample image; fusing a local frequency-domain map corresponding to the sample biometric feature image with the sample biometric feature image to obtain a local fused image; performing liveness detection on the local fused image through a neural-network-based second model to obtain a second detection result corresponding to the second sample image; determining a second loss based on the second detection result and an annotated class of the second sample image; and, after adjusting model parameters of the second model according to the second loss, returning to the step of acquiring a second sample image from the second sample set to continue training, until training ends and a local detection network is obtained; and
    obtaining, according to the global detection network and the local detection network, a liveness detection model for performing liveness detection on images.
  15. An image processing apparatus, the apparatus comprising:
    an acquisition module, configured to acquire an image to be detected, the image to be detected comprising a biometric feature of an object to be detected;
    a global detection module, configured to fuse a global frequency-domain map corresponding to the image to be detected with the image to be detected to obtain a global fused image, perform liveness detection on the global fused image to obtain a first detection result corresponding to the image to be detected, and, when the first detection result indicates that the image to be detected is a screen-recaptured image, directly determine that the image to be detected fails the liveness detection;
    a local detection module, configured to, when the first detection result indicates that the image to be detected is not a screen-recaptured image, obtain a biometric feature image based on the biometric feature in the image to be detected, fuse a local frequency-domain map corresponding to the biometric feature image with the biometric feature image to obtain a local fused image, and perform liveness detection on the local fused image to obtain a second detection result corresponding to the image to be detected; and
    a determination module, configured to determine a liveness detection result corresponding to the image to be detected according to the first detection result and the second detection result.
  16. The apparatus according to claim 15, wherein the determination module is further configured to: when the first detection result represents a first probability that the biometric feature of the object to be detected belongs to a screen-recaptured image, and the first probability is less than a first threshold, obtain the second detection result, the second detection result representing a second probability that the biometric feature of the object to be detected belongs to a paper-recaptured image; and, when the second probability is less than a second threshold, determine that the image to be detected passes the liveness detection.
  17. A processing apparatus for a liveness detection model, the apparatus comprising:
    a global detection network acquisition module, configured to acquire a first sample image from a first sample set; fuse a global frequency-domain map corresponding to the first sample image with the first sample image to obtain a global fused image; perform liveness detection on the global fused image through a neural-network-based first model to obtain a first detection result corresponding to the first sample image; determine a first loss based on the first detection result and an annotated class of the first sample image; and, after adjusting model parameters of the first model according to the first loss, return to the step of acquiring a first sample image from the first sample set to continue training, until training ends and the global detection network is obtained;
    a local detection network acquisition module, configured to acquire a second sample image from a second sample set; obtain a sample biometric feature image according to a biometric feature of the second sample image; fuse a local frequency-domain map corresponding to the sample biometric feature image with the sample biometric feature image to obtain a local fused image; perform liveness detection on the local fused image through a neural-network-based second model to obtain a second detection result corresponding to the second sample image; determine a second loss based on the second detection result and an annotated class of the second sample image; and, after adjusting model parameters of the second model according to the second loss, return to the step of acquiring a second sample image from the second sample set to continue training, until training ends and the local detection network is obtained; and
    a detection model acquisition module, configured to obtain, according to the global detection network and the local detection network, a liveness detection model for performing liveness detection on images.
  18. A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to implement the steps of the method according to any one of claims 1 to 14.
  19. One or more non-volatile computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to implement the steps of the method according to any one of claims 1 to 14.
  20. A computer program product, comprising a computer program that, when executed by a processor, implements the steps of the method according to any one of claims 1 to 14.
PCT/CN2022/079872 2021-04-02 2022-03-09 Image processing method and apparatus, device, storage medium, and computer program product WO2022206319A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/989,254 US20230086552A1 (en) 2021-04-02 2022-11-17 Image processing method and apparatus, device, storage medium, and computer program product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110359536.2A CN112801057B (zh) 2021-04-02 2021-04-02 Image processing method and apparatus, computer device and storage medium
CN202110359536.2 2021-04-02

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/989,254 Continuation US20230086552A1 (en) 2021-04-02 2022-11-17 Image processing method and apparatus, device, storage medium, and computer program product

Publications (1)

Publication Number Publication Date
WO2022206319A1 true WO2022206319A1 (zh) 2022-10-06

Family

ID=75816144

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/079872 WO2022206319A1 (zh) 2021-04-02 2022-03-09 Image processing method and apparatus, device, storage medium, and computer program product

Country Status (3)

Country Link
US (1) US20230086552A1 (zh)
CN (1) CN112801057B (zh)
WO (1) WO2022206319A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118505687A (zh) * 2024-07-17 2024-08-16 合肥中科类脑智能技术有限公司 Photovoltaic panel defect detection method, storage medium and electronic device

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801057B (zh) * 2021-04-02 2021-07-13 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device and storage medium
CN113313722B (zh) * 2021-06-10 2023-09-12 浙江传媒学院 Interactive annotation method for tooth root images
CN113362227B (zh) * 2021-06-22 2023-07-21 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device and storage medium
CN113344000A (zh) * 2021-06-29 2021-09-03 南京星云数字技术有限公司 Document recapture recognition method and apparatus, computer device and storage medium
CN113592998A (zh) * 2021-06-29 2021-11-02 北京百度网讯科技有限公司 Relit image generation method and apparatus, and electronic device
CN113569707A (zh) * 2021-07-23 2021-10-29 北京百度网讯科技有限公司 Liveness detection method and apparatus, electronic device and storage medium
CN116246352A (zh) * 2021-12-07 2023-06-09 腾讯科技(深圳)有限公司 Information verification method and related apparatus
CN116805360B (zh) * 2023-08-21 2023-12-05 江西师范大学 Salient object detection method based on a dual-stream gated progressive optimization network
CN117037221B (zh) * 2023-10-08 2023-12-29 腾讯科技(深圳)有限公司 Liveness detection method and apparatus, computer device and storage medium
CN117994636B (zh) * 2024-04-03 2024-07-12 华中科技大学同济医学院附属协和医院 Puncture target recognition method and system based on interactive learning, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150023558A1 (en) * 2012-03-30 2015-01-22 Muhittin Gokmen System and method for face detection and recognition using locally evaluated zernike and similar moments
CN110163078A (zh) * 2019-03-21 2019-08-23 腾讯科技(深圳)有限公司 Liveness detection method and apparatus, and service system applying the liveness detection method
CN110569760A (zh) * 2019-08-27 2019-12-13 东南大学 Liveness detection method based on near-infrared and remote photoplethysmography
CN111126493A (zh) * 2019-12-25 2020-05-08 东软睿驰汽车技术(沈阳)有限公司 Deep learning model training method and apparatus, electronic device and storage medium
CN112464690A (zh) * 2019-09-06 2021-03-09 广州虎牙科技有限公司 Living-body recognition method and apparatus, electronic device and readable storage medium
CN112507934A (zh) * 2020-12-16 2021-03-16 平安银行股份有限公司 Liveness detection method and apparatus, electronic device and storage medium
CN112801057A (zh) * 2021-04-02 2021-05-14 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017514108A (ja) * 2014-03-06 2017-06-01 クアルコム,インコーポレイテッド Multispectral ultrasound imaging
CN112115852A (zh) * 2020-09-17 2020-12-22 广东光速智能设备有限公司 Liveness detection method using an RGB-infrared camera

Also Published As

Publication number Publication date
CN112801057B (zh) 2021-07-13
US20230086552A1 (en) 2023-03-23
CN112801057A (zh) 2021-05-14

Similar Documents

Publication Publication Date Title
WO2022206319A1 Image processing method and apparatus, device, storage medium, and computer program product
WO2019096029A1 Living-body recognition method, storage medium and computer device
CN108009528B Triplet Loss-based face authentication method and apparatus, computer device and storage medium
WO2020151489A1 Liveness detection method based on facial recognition, electronic device and storage medium
TW201911130A Recaptured image recognition method and apparatus
WO2018086543A1 Living-body discrimination method, identity authentication method, terminal, server and storage medium
CN111754396B Face image processing method and apparatus, computer device and storage medium
JP2020523665A Liveness detection method and apparatus, electronic device and storage medium
CN111597938B Liveness detection and model training method and apparatus
WO2022033220A1 Face liveness detection method, system and apparatus, computer device and storage medium
CN112052831B Face detection method, apparatus and computer storage medium
CN111191568B Recaptured image recognition method, apparatus, device and medium
CN111275685B Method, apparatus, device and medium for recognizing recaptured images of identity documents
WO2022033219A1 Face liveness detection method, system and apparatus, computer device and storage medium
CN111339897B Living-body recognition method and apparatus, computer device and storage medium
CN113642639B Liveness detection method, apparatus, device and storage medium
WO2022247539A1 Liveness detection method, estimation network processing method and apparatus, computer device, and computer-readable instruction product
WO2022068931A1 Contactless fingerprint recognition method and apparatus, terminal and storage medium
CN112434647A Face liveness detection method
CN107480628B Face recognition method and apparatus
JP3962517B2 Face detection method and apparatus, and computer-readable medium
CN112308035A Image detection method and apparatus, computer device and storage medium
CN112101479B Hairstyle recognition method and apparatus
Huang et al. Dual fusion paired environmental background and face region for face anti-spoofing
Bresan et al. Exposing presentation attacks by a combination of multi-intrinsic image properties, convolutional networks and transfer learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22778503
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
    Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.02.2024)
122 Ep: pct application non-entry in european phase
    Ref document number: 22778503
    Country of ref document: EP
    Kind code of ref document: A1