WO2023060756A1 - Liveness detection method, device, readable storage medium, and computer program product - Google Patents

Liveness detection method, device, readable storage medium, and computer program product Download PDF

Info

Publication number
WO2023060756A1
WO2023060756A1 (PCT/CN2021/138879, CN2021138879W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
target
living body
body detection
focus
Prior art date
Application number
PCT/CN2021/138879
Other languages
English (en)
French (fr)
Inventor
谭圣琦
吴泽衡
徐倩
Original Assignee
深圳前海微众银行股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2023060756A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection

Definitions

  • The present application relates to the technical field of face recognition, and in particular to a liveness detection method, device, readable storage medium and program product.
  • In the prior art, the depth information of a face image is usually estimated with a neural network model, focused shots of different facial regions are then captured according to the depth information, and the image quality of the focused images is used to judge whether the subject is a live person.
  • However, a captured face image carries only two-dimensional facial features, while face depth is a three-dimensional facial feature, so estimating the depth of a face image with a neural network model is essentially a process of inferring the three-dimensional features of the face from its two-dimensional features. The accuracy of depth estimated in this way is therefore limited, which in turn affects the accuracy of liveness detection.
  • The main purpose of this application is to provide a liveness detection method, device, readable storage medium and program product, aiming to solve the technical problem of low liveness detection accuracy in the prior art.
  • The present application provides a liveness detection method. The liveness detection method is applied to a liveness detection device and includes: performing focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter; obtaining a sharpness map corresponding to each in-focus face image, and fusing the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected; and
  • performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
  • In an embodiment, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the step of fusing the sharpness maps with the target focus parameters includes: obtaining the gray value of each sharpness map at the same target position; and
  • performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
  • In an embodiment, the step of performing weighted fusion of the target focus parameters corresponding to the sharpness maps, to obtain the face depth feature value at the target position in the target face depth map, includes: calculating, from the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value; and
  • performing weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
  • In an embodiment, the step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, includes: adjusting a camera focal length parameter to shoot each key region in focus, to obtain an initial in-focus face image corresponding to each key region and a corresponding target focus parameter; and obtaining the face key point coordinates corresponding to each initial in-focus face image and aligning the initial in-focus face images according to those coordinates, to obtain the in-focus face images.
  • In an embodiment, before the step of performing focused shooting on each key region of the face to be detected, the liveness detection method further includes: performing face key point detection on the face to be detected, to obtain face key point information; and
  • dividing the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
  • In an embodiment, the step of performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result, includes: classifying the target face depth map according to a preset image classification model, to obtain an image classification result; and
  • judging, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
  • In an embodiment, the step of obtaining the sharpness map corresponding to each in-focus face image includes: calculating a second-order gradient map corresponding to the in-focus face image; and
  • performing Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
  • The present application also provides a liveness detection apparatus. The liveness detection apparatus is a virtual apparatus applied to a liveness detection device, and includes:
  • a focus shooting module, configured to perform focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
  • a fusion module, configured to obtain a sharpness map corresponding to each in-focus face image, and fuse the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
  • a liveness detection module, configured to perform liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
  • In an embodiment, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the fusion module is further configured to: obtain the gray value of each sharpness map at the same target position, and perform weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
  • In an embodiment, the fusion module is further configured to: calculate, from the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value, and perform weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
  • In an embodiment, the focus shooting module is further configured to: adjust a camera focal length parameter to shoot each key region in focus, obtaining an initial in-focus face image corresponding to each key region and a corresponding target focus parameter, and align the initial in-focus face images according to their face key point coordinates, to obtain the in-focus face images.
  • In an embodiment, the liveness detection apparatus is further configured to: perform face key point detection on the face to be detected to obtain face key point information, and divide the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
  • In an embodiment, the liveness detection module is further configured to: classify the target face depth map according to a preset image classification model to obtain an image classification result, and judge, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
  • In an embodiment, the fusion module is further configured to: calculate a second-order gradient map corresponding to the in-focus face image, and perform Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
  • The present application also provides a liveness detection device. The liveness detection device is a physical device and includes a memory, a processor, and a program of the liveness detection method stored in the memory and executable on the processor; when the program of the liveness detection method is executed by the processor, the steps of the liveness detection method described above can be realized.
  • The present application also provides a readable storage medium on which a program implementing the liveness detection method is stored; when the program is executed by a processor, the steps of the liveness detection method described above are realized.
  • The present application also provides a computer program product, including a computer program; when the computer program is executed by a processor, the steps of the liveness detection method described above are realized.
  • The present application provides a liveness detection method, device, readable storage medium and program product. Compared with the prior-art technical means of estimating the depth information of a face image with a neural network model, capturing focused images of different facial regions according to that depth information, and using the image quality of the focused images to judge whether the subject is a live person, this application first performs focused shooting on each key region of the face to be detected to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, obtains the sharpness map corresponding to each in-focus face image, and fuses the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected.
  • This achieves the purpose of computing the depth information of the face to be detected directly from the distribution of the target focus parameters over the different key regions of the face image and from the distribution of sharpness across the in-focus face images.
  • Both the distribution of the target focus parameters and the distribution of sharpness across the in-focus face images reflect the three-dimensional features of the face to a certain extent, so the face depth map is computed from three-dimensional facial features.
  • Compared with estimating the three-dimensional features of the face from its two-dimensional features, this improves the accuracy of face depth estimation.
  • Liveness detection is then performed on the face to be detected according to the target face depth map to obtain a liveness detection result, so liveness detection is based on more accurate face depth information. This overcomes the defect in the prior art that the accuracy of face depth predicted by a neural network model is limited, which in turn affects the accuracy of liveness detection, and thereby improves the accuracy of liveness detection.
  • FIG. 1 is a schematic flowchart of the first embodiment of the liveness detection method of the present application;
  • FIG. 2 is a schematic diagram of the distribution of face key points in the liveness detection method of the present application;
  • FIG. 3 is a schematic diagram of the distribution of the key regions in the liveness detection method of the present application;
  • FIG. 4 is a schematic flowchart of the second embodiment of the liveness detection method of the present application;
  • FIG. 5 is a schematic diagram of the device structure of the hardware operating environment involved in the liveness detection method in an embodiment of the present application.
  • The embodiment of the present application provides a liveness detection method.
  • The liveness detection method includes:
  • Step S10: performing focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
  • The key regions include but are not limited to the nose tip region, eye regions, eyebrow regions, lip region, and cheek regions of the face to be detected, and the target focus parameter is the focal length of the camera at the moment of the focused shot.
  • With the other camera parameters held constant, each key region of the face to be detected is shot in focus, yielding an in-focus face image corresponding to each key region, and the focus parameter used when capturing each in-focus face image is taken as the target focus parameter.
  • Focused shooting is the process of continuously adjusting the camera focal length parameter until the corresponding key region is at its sharpest; the in-focus face image finally obtained is the face image captured when the corresponding key region is sharpest, and the camera focal length parameter at that moment is the target focus parameter.
  • The step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, includes:
  • Step S11: adjusting the camera focal length parameter to shoot each key region in focus, to obtain an initial in-focus face image corresponding to each key region and a corresponding target focus parameter;
  • The camera focal length parameter is adjusted and the corresponding key region is checked for sharpness. If it is sharp, the key region is photographed to obtain the initial in-focus face image corresponding to the key region, and the camera focal length parameter used when capturing that image is taken as the target focus parameter; if it is not sharp, execution returns to the step of adjusting the camera focal length parameter.
  • Step S12: obtaining the face key point coordinates corresponding to each initial in-focus face image, and aligning the initial in-focus face images according to the face key point coordinates, to obtain the in-focus face images.
  • The coordinates of the same face key point are obtained in each initial in-focus face image, yielding the face key point coordinates. One of these coordinates is selected as a reference coordinate, and according to the offsets between the other face key point coordinates and the reference coordinate, the pixel coordinates of the corresponding initial in-focus face images are aligned with the pixel coordinates of the initial in-focus face image corresponding to the reference coordinate; the aligned images, together with the reference image, are all taken as the in-focus face images.
  • Before the step of performing focused shooting on each key region of the face to be detected, the liveness detection method further includes:
  • Step A10: performing face key point detection on the face to be detected, to obtain face key point information;
  • A globally focused shot of the face to be detected is captured to obtain a globally focused image. Face detection is performed on the globally focused image; if face detection passes, face key point detection is performed on the globally focused image to obtain face key point information; if face detection fails, the face to be detected is determined not to be a target face, and a prompt indicating that face recognition has failed is output.
  • Step A20: dividing the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
  • The face key point information includes face key point coordinates.
  • FIG. 2 is a schematic diagram of the distribution of the face key points, and FIG. 3 is a schematic diagram of the distribution of the key regions, where points 1 to 68 are face key points and each boxed region is a key region.
  • Step S20: obtaining a sharpness map corresponding to each in-focus face image, and fusing the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
  • The sharpness map is a matrix of pixel values composed of the gray values corresponding to the pixels, where each gray value represents the sharpness of the corresponding pixel.
  • The gray values corresponding to the pixels in each in-focus face image are calculated to obtain the sharpness map corresponding to each in-focus face image; the target focus parameters corresponding to the sharpness maps are weight-fused according to the magnitudes of the gray values of the sharpness maps at the same pixel position, yielding a face depth feature value at each pixel position; and the face depth feature values are assembled into a matrix according to the arrangement of the pixel positions, yielding the target face depth map.
  • The step of obtaining the sharpness map corresponding to each in-focus face image includes:
  • Step S21: calculating a second-order gradient map corresponding to the in-focus face image;
  • Step S22: performing Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
  • The second-order gradient value of each pixel in the in-focus face image is calculated to obtain a second-order gradient map, and Gaussian filtering is performed on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image; in an embodiment, the second-order gradient of the image may be computed with the Laplacian operator.
  • Step S30: performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
  • Feature extraction is performed on the target face depth map according to a preset feature extraction model, to obtain an output face depth feature; the feature similarity between the output face depth feature and a target face depth feature is then calculated, where the target face depth feature is a face depth feature obtained by feature extraction on real face depth information.
  • The step of performing liveness detection on the face to be detected according to the feature similarity, to obtain a liveness detection result, includes:
  • if the feature similarity is greater than a preset feature similarity threshold, determining that the face to be detected is a live face, the liveness detection result being that liveness detection passes; if the feature similarity is not greater than the preset feature similarity threshold, determining that the face to be detected is not a live face, the liveness detection result being that liveness detection fails.
  • The step of performing liveness detection on the face to be detected according to the target face depth map, to obtain the liveness detection result, includes:
  • Step S31: classifying the target face depth map according to a preset image classification model, to obtain an image classification result;
  • Step S32: judging, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
  • The preset image classification model may be a binary classification model or a multi-class classification model.
  • Binary classification is performed on the target face depth map according to the preset image classification model, to obtain a binary classification label. If the binary classification label is a preset target binary classification label, the face to be detected is determined to be a live face and the liveness detection result is that liveness detection passes; if the binary classification label is not the preset target binary classification label, the face to be detected is determined not to be a live face and the liveness detection result is that liveness detection fails.
  • The embodiment of the present application provides a liveness detection method. Compared with the prior-art approach of estimating the depth information of a face image with a neural network model, capturing focused images of different facial regions according to that depth information, and using the image quality of the focused images to judge whether the subject is a live person, the embodiment first performs focused shooting on each key region of the face to be detected to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, obtains the sharpness map corresponding to each in-focus face image, and fuses the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected.
  • This computes the depth of the face directly from the distribution of the target focus parameters over the different key regions and from the sharpness distribution of the in-focus face images, both of which reflect the three-dimensional features of the face to a certain extent; the face depth map is thus computed from three-dimensional facial features, which, compared with estimating three-dimensional features from two-dimensional ones, improves the accuracy of face depth estimation.
  • Liveness detection is then performed on the face to be detected to obtain a liveness detection result, so that liveness detection is based on more accurate face depth information; this overcomes the prior-art defect that the accuracy of face depth predicted by a neural network model is limited and degrades liveness detection, and improves the accuracy of liveness detection.
  • In another embodiment, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the step of fusing the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected includes:
  • Step B10: obtaining the gray value of each sharpness map at the same target position;
  • Step B20: performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
  • The target position is the position of a pixel in the sharpness map, that is, a pixel position.
  • The gray value of each sharpness map at the same target position is obtained; then, for the gray values at the same target position, the weight value corresponding to each gray value is calculated, the target focus parameters are weight-fused according to the weight values to obtain the face depth feature value at the target position, and the face depth feature values, arranged according to the target positions, form a matrix, yielding the target face depth map.
  • The step of performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map, includes:
  • Step B21: calculating, according to the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value;
  • Each gray value at the target position is input into a preset exponential function to obtain an exponential function value corresponding to each gray value, and the ratio of each exponential function value to the sum of the exponential function values is calculated, yielding the weight value of the target focus parameter corresponding to each gray value.
  • The weight values are calculated as W_i(x, y) = exp(p_i(x, y)) / Σ_{j=1}^{n} exp(p_j(x, y)),
  • where W_i(x, y) is the weight value corresponding to the i-th gray value at the target position with coordinates (x, y),
  • p_i(x, y) is the i-th gray value at the target position with coordinates (x, y),
  • and n is the number of gray values at the target position with coordinates (x, y).
  • Step B22: performing weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
  • Weighted aggregation includes weighted summation, weighted averaging, and the like.
  • The face depth feature value obtained by weighted aggregation of the target focus parameters is calculated as p(x, y) = Σ_{i=1}^{n} W_i(x, y) · f_i,
  • where p(x, y) is the face depth feature value at the target position with coordinates (x, y), W_i(x, y) is the weight value corresponding to the i-th gray value at the target position with coordinates (x, y), and f_i is the target focus parameter corresponding to the i-th gray value at the target position with coordinates (x, y).
  • Because different parts of the face lie at different depths, regions at different depths must be shot in focus with different focal lengths (target focus parameters); each target focus parameter therefore reflects face depth information, that is, the distribution of the target focus parameters encodes face depth information.
  • However, because some facial regions exhibit sharp depth changes, deriving face depth directly from the target focus parameters alone has low accuracy.
  • In this embodiment the target focus parameters are weight-fused, so that even if the target focus parameter corresponding to some in-focus face image in a region with sharp depth changes is not the optimal focus parameter, it will not exert an excessive influence on the computation of the face depth map, making the computation of the face depth map more stable and more accurate.
  • The present application thus provides a method for calculating the target face depth map: first the gray value of each sharpness map at the same target position is obtained, and according to the gray values at the target position the target focus parameters corresponding to the sharpness maps are weight-fused to obtain the face depth feature value at the target position in the target face depth map. That is, according to the gray values of the same pixel position in the in-focus face images,
  • the target focus parameters are fused into the face depth feature value of each pixel position, yielding the target face depth map, rather than predicting face depth directly from the magnitude distribution of the target focus parameters;
  • thus even if the target focus parameter corresponding to some in-focus face image in a region with sharp depth changes is not the optimal focus parameter, it will not exert an excessive influence on the computation of the face depth map, making
  • the computation more stable and accurate, and liveness detection based on this more accurate and more stable face depth is in turn more accurate and more stable.
  • FIG. 5 is a schematic diagram of the device structure of the hardware operating environment involved in the solutions of the embodiments of the present application.
  • The liveness detection device may include a processor 1001 (for example, a CPU), a memory 1005, and a communication bus 1002.
  • The communication bus 1002 is used to realize connection and communication between the processor 1001 and the memory 1005.
  • The memory 1005 may be a high-speed RAM or a stable non-volatile memory, such as a disk memory.
  • Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • The liveness detection device may also include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like.
  • The rectangular user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard); optionally, the rectangular user interface may also include standard wired and wireless interfaces.
  • The network interface may include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • The structure of the liveness detection device shown in FIG. 5 does not constitute a limitation on the liveness detection device; more or fewer components than shown may be included, certain components may be combined, or the components may be arranged differently.
  • The memory 1005, as a computer storage medium, may include an operating system, a network communication module, and a liveness detection program.
  • The operating system is a program that manages and controls the hardware and software resources of the liveness detection device and supports the running of the liveness detection program and other software and/or programs.
  • The network communication module is used to realize communication between the components inside the memory 1005 and communication with other hardware and software in the liveness detection system.
  • The processor 1001 is configured to execute the liveness detection program stored in the memory 1005 to realize the steps of the liveness detection method described in any one of the above.
  • The specific implementations of the liveness detection device of the present application are basically the same as the embodiments of the liveness detection method above and are not repeated here.
  • The embodiment of the present application also provides a liveness detection apparatus. The liveness detection apparatus is applied to a liveness detection device and includes:
  • a focus shooting module, configured to perform focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
  • a fusion module, configured to obtain a sharpness map corresponding to each in-focus face image, and fuse the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
  • a liveness detection module, configured to perform liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
  • In an embodiment, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the fusion module is further configured to: obtain the gray value of each sharpness map at the same target position, and perform weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
  • In an embodiment, the fusion module is further configured to: calculate, from the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value, and perform weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
  • In an embodiment, the focus shooting module is further configured to: adjust the camera focal length parameter to shoot each key region in focus, obtaining an initial in-focus face image corresponding to each key region and a corresponding target focus parameter, and align the initial in-focus face images according to their face key point coordinates, to obtain the in-focus face images.
  • In an embodiment, the liveness detection apparatus is further configured to: perform face key point detection on the face to be detected to obtain face key point information, and divide the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
  • In an embodiment, the liveness detection module is further configured to: classify the target face depth map according to a preset image classification model to obtain an image classification result, and judge, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
  • In an embodiment, the fusion module is further configured to: calculate a second-order gradient map corresponding to the in-focus face image, and perform Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
  • The specific implementations of the liveness detection apparatus of the present application are basically the same as the embodiments of the liveness detection method above and are not repeated here.
  • The embodiment of the present application provides a readable storage medium storing one or more programs, which may further be executed by one or more processors to realize the steps of the liveness detection method described in any one of the above.
  • The embodiment of the present application provides a computer program product including one or more computer programs, which may further be executed by one or more processors to realize the steps of the liveness detection method described in any one of the above.

Abstract

This application discloses a liveness detection method, a device, a readable storage medium, and a program product, applied to a liveness detection device. The liveness detection method includes: performing focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter; obtaining a sharpness map corresponding to each in-focus face image, and fusing the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected; and performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.

Description

Liveness detection method, device, readable storage medium, and computer program product
Priority Information
This application claims priority to Chinese patent application No. 202111194548.0, filed on October 13, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of face recognition, and in particular to a liveness detection method, device, readable storage medium, and program product.
Background
With the continuous development of face recognition technology, liveness detection has become an indispensable step in the face recognition process. At present, the depth information of a face image is usually estimated with a neural network model, focused shots of different facial regions are then captured according to the depth information, and the image quality of the focused images is used to judge whether the subject is a live person. However, since a captured face image carries only two-dimensional facial features while face depth information is a three-dimensional facial feature, estimating the depth of a face image with a neural network model is essentially a process of inferring the three-dimensional features of the face from its two-dimensional features. The accuracy of face depth estimated with a neural network model is therefore limited, which in turn affects the accuracy of liveness detection.
Summary
The main purpose of this application is to provide a liveness detection method, device, readable storage medium, and program product, aiming to solve the technical problem of low liveness detection accuracy in the prior art.
To achieve the above purpose, this application provides a liveness detection method. The liveness detection method is applied to a liveness detection device and includes:
performing focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
obtaining a sharpness map corresponding to each in-focus face image, and fusing the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
In an embodiment, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the step of fusing the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected includes:
obtaining the gray value of each sharpness map at the same target position;
performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
In an embodiment, the step of performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map, includes:
calculating, according to the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value;
performing weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
In an embodiment, the step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, includes:
adjusting a camera focal length parameter to shoot each key region in focus, to obtain an initial in-focus face image corresponding to each key region and a corresponding target focus parameter;
obtaining face key point coordinates corresponding to each initial in-focus face image, and aligning the initial in-focus face images according to the face key point coordinates, to obtain the in-focus face images.
In an embodiment, before the step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, the liveness detection method further includes:
performing face key point detection on the face to be detected, to obtain face key point information;
dividing the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
In an embodiment, the step of performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result, includes:
classifying the target face depth map according to a preset image classification model, to obtain an image classification result;
judging, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
In an embodiment, the step of obtaining the sharpness map corresponding to each in-focus face image includes:
calculating a second-order gradient map corresponding to the in-focus face image;
performing Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
This application further provides a liveness detection apparatus. The liveness detection apparatus is a virtual apparatus applied to a liveness detection device, and the liveness detection apparatus includes:
a focus shooting module, configured to perform focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
a fusion module, configured to obtain a sharpness map corresponding to each in-focus face image, and fuse the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
a liveness detection module, configured to perform liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
In an embodiment, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the fusion module is further configured to:
obtain the gray value of each sharpness map at the same target position;
perform weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
In an embodiment, the fusion module is further configured to:
calculate, according to the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value;
perform weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
In an embodiment, the focus shooting module is further configured to:
adjust a camera focal length parameter to shoot each key region in focus, to obtain an initial in-focus face image corresponding to each key region and a corresponding target focus parameter;
obtain face key point coordinates corresponding to each initial in-focus face image, and align the initial in-focus face images according to the face key point coordinates, to obtain the in-focus face images.
In an embodiment, the liveness detection apparatus is further configured to:
perform face key point detection on the face to be detected, to obtain face key point information;
divide the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
In an embodiment, the liveness detection module is further configured to:
classify the target face depth map according to a preset image classification model, to obtain an image classification result;
judge, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
In an embodiment, the fusion module is further configured to:
calculate a second-order gradient map corresponding to the in-focus face image;
perform Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
This application further provides a liveness detection device. The liveness detection device is a physical device and includes a memory, a processor, and a program of the liveness detection method stored in the memory and executable on the processor; when the program of the liveness detection method is executed by the processor, the steps of the liveness detection method described above can be realized.
This application further provides a readable storage medium on which a program implementing the liveness detection method is stored; when the program of the liveness detection method is executed by a processor, the steps of the liveness detection method described above are realized.
This application further provides a computer program product including a computer program; when the computer program is executed by a processor, the steps of the liveness detection method described above are realized.
This application provides a liveness detection method, device, readable storage medium, and program product. Compared with the prior-art technical means of estimating the depth information of a face image with a neural network model, capturing focused images of different facial regions according to that depth information, and using the image quality of the focused images to judge whether the subject is a live person, this application first performs focused shooting on each key region of the face to be detected to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, obtains the sharpness map corresponding to each in-focus face image, and fuses the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected. This achieves the purpose of computing the depth information of the face to be detected directly from the distribution of the target focus parameters over the different key regions of the face image and from the distribution of sharpness across the in-focus face images; both distributions reflect the three-dimensional features of the face to a certain extent, so the face depth map is computed from three-dimensional facial features. Compared with estimating the three-dimensional features of the face from its two-dimensional features, this improves the accuracy of face depth estimation. Liveness detection is then performed on the face to be detected according to the target face depth map to obtain a liveness detection result, so that liveness detection is based on more accurate face depth information. This overcomes the technical defect in the prior art that the accuracy of face depth estimated by a neural network model is limited, which in turn affects the accuracy of liveness detection, and thereby improves the accuracy of liveness detection.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this application and, together with the specification, serve to explain the principles of this application.
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of the first embodiment of the liveness detection method of this application;
FIG. 2 is a schematic diagram of the distribution of face key points in the liveness detection method of this application;
FIG. 3 is a schematic diagram of the distribution of the key regions in the liveness detection method of this application;
FIG. 4 is a schematic flowchart of the second embodiment of the liveness detection method of this application;
FIG. 5 is a schematic diagram of the device structure of the hardware operating environment involved in the liveness detection method in an embodiment of this application.
The realization of the purpose, functional features, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
An embodiment of this application provides a liveness detection method. In the first embodiment of the liveness detection method of this application, referring to FIG. 1, the liveness detection method includes:
Step S10: performing focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
In this embodiment of the application, it should be noted that the key regions include but are not limited to the nose tip region, eye regions, eyebrow regions, lip region, and cheek regions of the face to be detected, and the target focus parameter is the focal length of the camera at the moment of the focused shot.
In a possible implementation of this application, with the other camera parameters held constant, the camera focal length parameter is adjusted to shoot each key region of the face to be detected in focus, yielding an in-focus face image corresponding to each key region, and the focus parameter used when capturing each in-focus face image is taken as the target focus parameter. It should be noted that focused shooting is the process of continuously adjusting the camera focal length parameter until the corresponding key region is at its sharpest; the in-focus face image finally obtained is the face image captured when the corresponding key region is sharpest, and the camera focal length parameter at that moment is the target focus parameter.
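For illustration only, the focus sweep described above can be sketched as follows. The `capture_at` callback and the candidate focal-length range stand in for the actual camera interface, which this application does not specify, and sharpness is scored here with the variance of the Laplacian, one common contrast measure:

```python
import cv2
import numpy as np

def region_sharpness(frame_gray, roi):
    # Variance of the Laplacian inside the key region: higher means sharper.
    x, y, w, h = roi
    patch = frame_gray[y:y + h, x:x + w]
    return cv2.Laplacian(patch, cv2.CV_64F).var()

def focus_sweep(capture_at, roi, focal_lengths):
    """Sweep the camera over candidate focal lengths and keep the frame in
    which the given key region is sharpest. capture_at(f) is a hypothetical
    camera call returning a grayscale frame shot with focal length f.
    Returns (in-focus face image, target focus parameter) for the region."""
    best_frame, best_f, best_score = None, None, float("-inf")
    for f in focal_lengths:
        frame = capture_at(f)
        score = region_sharpness(frame, roi)
        if score > best_score:
            best_frame, best_f, best_score = frame, f, score
    return best_frame, best_f
```

Called once per key region, `focus_sweep` would yield the in-focus face image and the target focus parameter for that region.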
The step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, includes:
Step S11: adjusting the camera focal length parameter to shoot each key region in focus, to obtain an initial in-focus face image corresponding to each key region and a corresponding target focus parameter;
In a possible implementation of this application, the camera focal length parameter is adjusted and the corresponding key region is checked for sharpness. If it is sharp, the key region is photographed to obtain the initial in-focus face image corresponding to the key region, and the camera focal length parameter used when capturing that image is taken as the target focus parameter; if it is not sharp, execution returns to the step of adjusting the camera focal length parameter.
Step S12: obtaining face key point coordinates corresponding to each initial in-focus face image, and aligning the initial in-focus face images according to the face key point coordinates, to obtain the in-focus face images.
In a possible implementation of this application, the coordinates of the same face key point are obtained in each initial in-focus face image, yielding the face key point coordinates. One of these coordinates is selected as a reference coordinate, and according to the offsets between the other face key point coordinates and the reference coordinate, the pixel coordinates of the initial in-focus face images corresponding to the other key point coordinates are aligned with the pixel coordinates of the initial in-focus face image corresponding to the reference coordinate; the aligned images, together with the reference image, are all taken as the in-focus face images.
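A minimal sketch of this alignment step, assuming the pure-translation model the text implies (each image is shifted so that the shared face key point lands on the reference coordinate); the variable names are illustrative:

```python
import cv2
import numpy as np

def align_to_reference(images, keypoints, ref_index=0):
    """Translate each initial in-focus face image so that the shared face
    key point coincides with its position in the reference image.
    keypoints holds one (x, y) coordinate per image."""
    ref_x, ref_y = keypoints[ref_index]
    aligned = []
    for img, (kx, ky) in zip(images, keypoints):
        dx, dy = ref_x - kx, ref_y - ky            # offset to the reference coordinate
        m = np.float32([[1, 0, dx], [0, 1, dy]])   # 2x3 translation matrix
        h, w = img.shape[:2]
        aligned.append(cv2.warpAffine(img, m, (w, h)))
    return aligned
```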
Before the step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, the liveness detection method further includes:
Step A10: performing face key point detection on the face to be detected, to obtain face key point information;
In a possible implementation of this application, a globally focused shot of the face to be detected is captured to obtain a globally focused image. Face detection is performed on the globally focused image; if face detection passes, face key point detection is performed on the globally focused image to obtain face key point information; if face detection fails, the face to be detected is determined not to be a target face, and a prompt indicating that face recognition has failed is output.
Step A20: dividing the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
In this embodiment of the application, it should be noted that the face key point information includes face key point coordinates.
In a possible implementation of this application, the face to be detected is divided into focus candidate regions according to the face key point coordinates of the face key points, yielding the key regions. FIG. 2 is a schematic diagram of the distribution of the face key points, and FIG. 3 is a schematic diagram of the distribution of the key regions, where points 1 to 68 are face key points and each boxed region is a key region.
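For illustration, the focus candidate regions can be cut out of the landmark set as bounding boxes with a small margin. The index groups below follow the common 68-point landmark numbering and are an assumption; the exact partition used by this application is the one shown in FIGS. 2 and 3, and the cheek regions, which lie between the jaw and nose landmarks, are omitted here for brevity:

```python
import numpy as np

# Assumed index groups of the common 68-point landmark layout.
LANDMARK_GROUPS = {
    "left_eyebrow": range(17, 22), "right_eyebrow": range(22, 27),
    "left_eye": range(36, 42), "right_eye": range(42, 48),
    "nose_tip": range(30, 36), "lips": range(48, 68),
}

def divide_key_regions(landmarks, margin=8):
    """landmarks: (68, 2) array of face key point coordinates.
    Returns one (x, y, w, h) focus candidate box per key region."""
    regions = {}
    for name, idx in LANDMARK_GROUPS.items():
        pts = np.asarray([landmarks[i] for i in idx])
        x0, y0 = pts.min(axis=0) - margin
        x1, y1 = pts.max(axis=0) + margin
        regions[name] = (int(x0), int(y0), int(x1 - x0), int(y1 - y0))
    return regions
```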
Step S20: obtaining a sharpness map corresponding to each in-focus face image, and fusing the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
In this embodiment, it should be noted that the sharpness map is a matrix of pixel values composed of the gray values corresponding to the pixels; each gray value represents the sharpness of the corresponding pixel, and the larger the gray value, the sharper the pixel.
In a possible implementation of this application, the gray values corresponding to the pixels in each in-focus face image are calculated to obtain the sharpness map corresponding to each in-focus face image; the target focus parameters corresponding to the sharpness maps are weight-fused according to the magnitudes of the gray values of the sharpness maps at the same pixel position, yielding a face depth feature value at each pixel position; and the face depth feature values are assembled into a matrix according to the arrangement of the pixel positions, yielding the target face depth map.
The step of obtaining the sharpness map corresponding to each in-focus face image includes:
Step S21: calculating a second-order gradient map corresponding to the in-focus face image;
Step S22: performing Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
In a possible implementation of this application, the second-order gradient value of each pixel in the in-focus face image is calculated to obtain a second-order gradient map, and Gaussian filtering is performed on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image; in an embodiment, the second-order gradient of the image may be computed with the Laplacian operator.
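Steps S21 and S22 map directly onto standard image operations. A short OpenCV sketch follows; the Laplacian supplies the second-order gradient as the text suggests, while the absolute value, kernel size, and sigma are unspecified details filled in here as plausible defaults:

```python
import cv2
import numpy as np

def sharpness_map(focused_face_gray, ksize=(5, 5), sigma=1.0):
    """Step S21/S22: second-order gradient map via the Laplacian operator,
    then Gaussian filtering; the result is the per-pixel sharpness map."""
    lap = cv2.Laplacian(focused_face_gray, cv2.CV_64F)
    second_order = np.abs(lap)  # magnitude of the second-order gradient
    return cv2.GaussianBlur(second_order, ksize, sigma)
```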
Step S30: performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
In a possible implementation of this application, feature extraction is performed on the target face depth map according to a preset feature extraction model, to obtain an output face depth feature. The feature similarity between the output face depth feature and a target face depth feature is calculated, and liveness detection is performed on the face to be detected according to the feature similarity, to obtain a liveness detection result, where the target face depth feature is a face depth feature obtained by feature extraction on real face depth information.
In a possible implementation of this application, the step of performing liveness detection on the face to be detected according to the feature similarity, to obtain a liveness detection result, includes:
if the feature similarity is greater than a preset feature similarity threshold, determining that the face to be detected is a live face, the liveness detection result being that liveness detection passes; if the feature similarity is not greater than the preset feature similarity threshold, determining that the face to be detected is not a live face, the liveness detection result being that liveness detection fails.
The step of performing liveness detection on the face to be detected according to the target face depth map, to obtain the liveness detection result, includes:
Step S31: classifying the target face depth map according to a preset image classification model, to obtain an image classification result;
Step S32: judging, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
In this embodiment, it should be noted that the preset image classification model may be a binary classification model or a multi-class classification model.
In a possible implementation of this application, binary classification is performed on the target face depth map according to the preset image classification model, to obtain a binary classification label. If the binary classification label is a preset target binary classification label, the face to be detected is determined to be a live face, and the liveness detection result is that liveness detection passes; if the binary classification label is not the preset target binary classification label, the face to be detected is determined not to be a live face, and the liveness detection result is that liveness detection fails.
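A sketch of steps S31 and S32, assuming a generic preset binary classifier exposing a `predict()` method; the application does not fix the model's architecture or training, so the interface and the `live_label` convention here are placeholders:

```python
import numpy as np

def liveness_by_classification(depth_map, classifier, live_label=1):
    """Step S31/S32: classify the target face depth map, then map the binary
    classification label to a liveness decision. depth_map is an (H, W)
    array; classifier is any preset model with a predict() method."""
    label = classifier.predict(depth_map[np.newaxis, ...])[0]  # image classification result
    is_live = (label == live_label)  # compare with the preset target binary label
    return "liveness detection passed" if is_live else "liveness detection failed"
```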
The embodiment of this application provides a liveness detection method. Compared with the prior-art technical means of estimating the depth information of a face image with a neural network model, capturing focused shots of different facial regions according to that depth information, and using the image quality of the focused images to judge whether the subject is a live person, this embodiment first performs focused shooting on each key region of the face to be detected to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, obtains the sharpness map corresponding to each in-focus face image, and fuses the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected. This achieves the purpose of computing the depth information of the face to be detected directly from the distribution of the target focus parameters over the different key regions of the face image and from the distribution of sharpness across the in-focus face images; both distributions reflect the three-dimensional features of the face to a certain extent, so the face depth map is computed from three-dimensional facial features, which, compared with estimating three-dimensional facial features from two-dimensional ones, improves the accuracy of face depth estimation. Liveness detection is then performed on the face to be detected according to the target face depth map to obtain a liveness detection result, so that liveness detection is based on more accurate face depth information. This overcomes the technical defect in the prior art that the accuracy of face depth estimated by a neural network model is limited, which in turn affects the accuracy of liveness detection, and thereby improves the accuracy of liveness detection.
Further, referring to FIG. 4, in another embodiment of this application, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the step of fusing the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected includes:
Step B10: obtaining the gray value of each sharpness map at the same target position;
Step B20: performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
In this embodiment, it should be noted that the target position is the position of a pixel in the sharpness map, that is, a pixel position.
In a possible implementation of this application, the gray value of each sharpness map at the same target position is obtained; then, for the gray values at the same target position, the proportion of each gray value among all the gray values is calculated, yielding the weight value corresponding to each gray value at that target position; the target focus parameters are weight-fused according to the weight values, to obtain the face depth feature value at the target position; and the face depth feature values, arranged according to the target positions, form a matrix, yielding the target face depth map.
The step of performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map, includes:
Step B21: calculating, according to the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value;
In a possible implementation of this application, each gray value at the target position is input into a preset exponential function to obtain an exponential function value corresponding to each gray value, and the ratio of each exponential function value to the sum of the exponential function values is then calculated, yielding the weight value of the target focus parameter corresponding to each gray value.
In a possible implementation of this application, the weight values are calculated as follows:

$$W_i(x,y)=\frac{e^{p_i(x,y)}}{\sum_{j=1}^{n} e^{p_j(x,y)}}$$

where $W_i(x,y)$ is the weight value corresponding to the i-th gray value at the target position with coordinates $(x,y)$, $p_i(x,y)$ is the i-th gray value at the target position with coordinates $(x,y)$, and $n$ is the number of gray values at the target position with coordinates $(x,y)$.
Step B22: performing weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
In this embodiment of the application, it should be noted that weighted aggregation includes weighted summation, weighted averaging, and the like.
In a possible implementation of this application, the face depth feature value obtained by weighted aggregation of the target focus parameters is calculated as follows:

$$p(x,y)=\sum_{i=1}^{n} W_i(x,y)\, f_i$$

where $p(x,y)$ is the face depth feature value at the target position with coordinates $(x,y)$, $W_i(x,y)$ is the weight value corresponding to the i-th gray value at the target position with coordinates $(x,y)$, and $f_i$ is the target focus parameter corresponding to the i-th gray value at the target position with coordinates $(x,y)$.
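The two formulas above amount to a per-pixel softmax over the sharpness maps followed by a weighted sum of the focus parameters. A direct NumPy transcription, assuming the sharpness maps have already been aligned and stacked into one array; the max-subtraction is a standard numerical-stability step that leaves the softmax weights unchanged:

```python
import numpy as np

def fuse_depth_map(sharpness_maps, focus_params):
    """sharpness_maps: (n, H, W) array, one gray-value sharpness map per
    in-focus face image, aligned to the same pixel grid.
    focus_params: (n,) array, the target focus parameter f_i of each image.
    Returns the (H, W) target face depth map p(x, y)."""
    s = np.asarray(sharpness_maps, dtype=np.float64)
    f = np.asarray(focus_params, dtype=np.float64)
    s = s - s.max(axis=0, keepdims=True)                  # stabilize exp()
    w = np.exp(s) / np.exp(s).sum(axis=0, keepdims=True)  # W_i(x, y): softmax over i
    return np.tensordot(f, w, axes=1)                     # p(x, y) = sum_i W_i(x, y) * f_i
```

For example, `fuse_depth_map(np.stack(maps), np.array(params))` returns the target face depth map for the aligned in-focus face images.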
It should be noted that because different parts of the face lie at different depths, regions at different depths must be shot in focus with different focal lengths (target focus parameters); each target focus parameter therefore reflects face depth information, that is, the distribution of the target focus parameters encodes face depth information. However, because some facial regions exhibit sharp depth changes, deriving face depth directly from the target focus parameters alone has low accuracy. In this embodiment of the application, the target focus parameters are weight-fused according to the gray values of multiple in-focus face images of different regions at the same pixel position, so even if the target focus parameter corresponding to some in-focus face image in a region with sharp depth changes is not the optimal focus parameter, it will not exert an excessive influence on the computation of the face depth map, making the computation of the face depth map more stable and more accurate.
This application provides a method for calculating the target face depth map: first, the gray value of each sharpness map at the same target position is obtained, and according to the gray values at the target position, the target focus parameters corresponding to the sharpness maps are weight-fused to obtain the face depth feature value at the target position in the target face depth map. That is, according to the gray values of the same pixel position in the in-focus face images, the target focus parameters are fused into the face depth feature value corresponding to each pixel position, yielding the target face depth map, rather than predicting face depth directly from the magnitude distribution of the target focus parameters. Thus even if the target focus parameter corresponding to some in-focus face image in a region with sharp depth changes is not the optimal focus parameter, it will not exert an excessive influence on the computation of the face depth map, making the computation more stable and more accurate, and liveness detection based on this more accurate and more stable face depth is in turn more accurate and more stable.
Referring to FIG. 5, FIG. 5 is a schematic diagram of the device structure of the hardware operating environment involved in the solutions of the embodiments of this application.
As shown in FIG. 5, the liveness detection device may include a processor 1001 (for example, a CPU), a memory 1005, and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM or a stable non-volatile memory, such as a disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
In an embodiment, the liveness detection device may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard); optionally, the rectangular user interface may also include standard wired and wireless interfaces. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
A person skilled in the art can understand that the liveness detection device structure shown in FIG. 5 does not constitute a limitation on the liveness detection device; more or fewer components than shown may be included, certain components may be combined, or the components may be arranged differently.
As shown in FIG. 5, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and a liveness detection program. The operating system is a program that manages and controls the hardware and software resources of the liveness detection device and supports the running of the liveness detection program and other software and/or programs. The network communication module is used to realize communication between the components inside the memory 1005 and communication with other hardware and software in the liveness detection system.
In the liveness detection device shown in FIG. 5, the processor 1001 is configured to execute the liveness detection program stored in the memory 1005 to realize the steps of the liveness detection method described in any one of the above.
The specific implementations of the liveness detection device of this application are basically the same as the embodiments of the liveness detection method above and are not repeated here.
An embodiment of this application further provides a liveness detection apparatus. The liveness detection apparatus is applied to a liveness detection device, and the liveness detection apparatus includes:
a focus shooting module, configured to perform focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
a fusion module, configured to obtain a sharpness map corresponding to each in-focus face image, and fuse the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
a liveness detection module, configured to perform liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
In an embodiment, the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the fusion module is further configured to:
obtain the gray value of each sharpness map at the same target position;
perform weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
In an embodiment, the fusion module is further configured to:
calculate, according to the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value;
perform weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
In an embodiment, the focus shooting module is further configured to:
adjust the camera focal length parameter to shoot each key region in focus, to obtain an initial in-focus face image corresponding to each key region and a corresponding target focus parameter;
obtain face key point coordinates corresponding to each initial in-focus face image, and align the initial in-focus face images according to the face key point coordinates, to obtain the in-focus face images.
In an embodiment, the liveness detection apparatus is further configured to:
perform face key point detection on the face to be detected, to obtain face key point information;
divide the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
In an embodiment, the liveness detection module is further configured to:
classify the target face depth map according to a preset image classification model, to obtain an image classification result;
judge, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
In an embodiment, the fusion module is further configured to:
calculate a second-order gradient map corresponding to the in-focus face image;
perform Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
The specific implementations of the liveness detection apparatus of this application are basically the same as the embodiments of the liveness detection method above and are not repeated here.
An embodiment of this application provides a readable storage medium storing one or more programs, which may further be executed by one or more processors to realize the steps of the liveness detection method described in any one of the above.
The specific implementations of the readable storage medium of this application are basically the same as the embodiments of the liveness detection method above and are not repeated here.
An embodiment of this application provides a computer program product including one or more computer programs, which may further be executed by one or more processors to realize the steps of the liveness detection method described in any one of the above.
The specific implementations of the computer program product of this application are basically the same as the embodiments of the liveness detection method above and are not repeated here.
The above are only preferred embodiments of this application and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (10)

  1. A liveness detection method, wherein the liveness detection method comprises:
    performing focused shooting on each key region of a face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter;
    obtaining a sharpness map corresponding to each in-focus face image, and fusing the sharpness maps with the target focus parameters to obtain a target face depth map corresponding to the face to be detected;
    performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result.
  2. The liveness detection method according to claim 1, wherein the sharpness map includes at least one gray value, the target face depth map includes at least one face depth feature value, and the step of fusing the sharpness maps with the target focus parameters to obtain the target face depth map corresponding to the face to be detected comprises:
    obtaining the gray value of each sharpness map at the same target position;
    performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map.
  3. The liveness detection method according to claim 2, wherein the step of performing weighted fusion of the target focus parameters corresponding to the sharpness maps according to the gray values at the target position, to obtain the face depth feature value at the target position in the target face depth map, comprises:
    calculating, according to the magnitude of each gray value at the target position, a weight value of the target focus parameter corresponding to each gray value;
    performing weighted aggregation of the target focus parameters according to the weight values, to obtain the face depth feature value at the target position.
  4. The liveness detection method according to claim 1, wherein the step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, comprises:
    adjusting a camera focal length parameter to shoot each key region in focus, to obtain an initial in-focus face image corresponding to each key region and a corresponding target focus parameter;
    obtaining face key point coordinates corresponding to each initial in-focus face image, and aligning the initial in-focus face images according to the face key point coordinates, to obtain the in-focus face images.
  5. The liveness detection method according to claim 1, wherein before the step of performing focused shooting on each key region of the face to be detected, to obtain an in-focus face image corresponding to each key region and a corresponding target focus parameter, the liveness detection method further comprises:
    performing face key point detection on the face to be detected, to obtain face key point information;
    dividing the face to be detected into focus candidate regions according to the face key point information, to obtain the key regions.
  6. The liveness detection method according to claim 1, wherein the step of performing liveness detection on the face to be detected according to the target face depth map, to obtain a liveness detection result, comprises:
    classifying the target face depth map according to a preset image classification model, to obtain an image classification result;
    judging, according to the image classification result, whether the face to be detected is a live face, to obtain the liveness detection result.
  7. The liveness detection method according to claim 1, wherein the step of obtaining the sharpness map corresponding to each in-focus face image comprises:
    calculating a second-order gradient map corresponding to the in-focus face image;
    performing Gaussian filtering on the second-order gradient map to obtain the sharpness map corresponding to the in-focus face image.
  8. A liveness detection device, wherein the liveness detection device comprises: a memory, a processor, and a program stored in the memory for implementing the liveness detection method,
    the memory being used to store the program implementing the liveness detection method;
    the processor being used to execute the program implementing the liveness detection method, to realize the steps of the liveness detection method according to any one of claims 1 to 7.
  9. A readable storage medium, wherein a program implementing a liveness detection method is stored on the readable storage medium, and the program implementing the liveness detection method is executed by a processor to realize the steps of the liveness detection method according to any one of claims 1 to 7.
  10. A computer program product, comprising a computer program, wherein, when the computer program is executed by a processor, the steps of the liveness detection method according to any one of claims 1 to 7 are realized.
PCT/CN2021/138879 2021-10-13 2021-12-16 Liveness detection method and device, readable storage medium, and computer program product WO2023060756A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111194548.0 2021-10-13
CN202111194548.0A CN113903084A (zh) Liveness detection method and device, readable storage medium, and computer program product

Publications (1)

Publication Number Publication Date
WO2023060756A1 (zh)

Family

ID=79191945

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138879 WO2023060756A1 (zh) 2021-10-13 2021-12-16 Liveness detection method and device, readable storage medium, and computer program product

Country Status (2)

Country Link
CN (1) CN113903084A (zh)
WO (1) WO2023060756A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335722A (zh) * 2015-10-30 2016-02-17 商汤集团有限公司 Detection system and method based on depth image information
CN105872363A (zh) * 2016-03-28 2016-08-17 广东欧珀移动通信有限公司 Method and device for adjusting face focus sharpness
CN107491775A (zh) * 2017-10-13 2017-12-19 理光图像技术(上海)有限公司 Face liveness detection method, apparatus, storage medium and device
CN108171204A (zh) * 2018-01-17 2018-06-15 百度在线网络技术(北京)有限公司 Detection method and device
CN109948439A (zh) * 2019-02-13 2019-06-28 平安科技(深圳)有限公司 Liveness detection method, system and terminal device


Also Published As

Publication number Publication date
CN113903084A (zh) 2022-01-07

Similar Documents

Publication Publication Date Title
KR101333871B1 (ko) Method and apparatus for multi-camera calibration
US9307221B1 (en) Settings of a digital camera for depth map refinement
US9619708B2 (en) Method of detecting a main subject in an image
CN109934065B (zh) Method and device for gesture recognition
JP5366756B2 (ja) Information processing apparatus and information processing method
AU2013237718A1 (en) Method, apparatus and system for selecting a frame
JP2009522591A (ja) Method and apparatus for controlling the autofocus of a video camera by tracking a region of interest
JP2017033469A (ja) Image identification method, image identification apparatus, and program
WO2013079098A1 (en) Dynamically configuring an image processing function
JP4706197B2 (ja) Object determination device and imaging device
JP6515039B2 (ja) Program, apparatus, and method for calculating the normal vector of a planar object appearing in consecutive captured images
US20160093028A1 (en) Image processing method, image processing apparatus and electronic device
CN110516579B (zh) Handheld fundus camera photographing method and apparatus, device, and storage medium
CN112969023A (zh) Image capturing method, device, storage medium, and computer program product
JP2013037539A (ja) Image feature extraction device and program therefor
JP6758263B2 (ja) Object detection device, object detection method, and object detection program
JP6798609B2 (ja) Video analysis device, video analysis method, and program
WO2023060756A1 (zh) Liveness detection method and device, readable storage medium, and computer program product
CN116051736A (zh) Three-dimensional reconstruction method and apparatus, edge device, and storage medium
RU2647645C1 (ru) Method for eliminating seams when creating panoramic images from a real-time video stream of frames
JP4387889B2 (ja) Template matching apparatus and method
Hossain et al. A real-time face to camera distance measurement algorithm using object classification
CN109727193B (зh) Image blurring method and apparatus, and electronic device
JP4812743B2 (ja) Face recognition device, face recognition method, face recognition program, and recording medium recording the program
JP7341712B2 (ja) Image processing apparatus, image processing method, imaging apparatus, and program

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21960481

Country of ref document: EP

Kind code of ref document: A1