WO2022111512A1 - Facial liveness detection method and apparatus, and device - Google Patents

Facial liveness detection method and apparatus, and device

Info

Publication number
WO2022111512A1
WO2022111512A1 (PCT/CN2021/132720)
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
face
mouth
living body
Prior art date
Application number
PCT/CN2021/132720
Other languages
French (fr)
Chinese (zh)
Inventor
任志浩
郝治超
丁一
张宁权
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司
Publication of WO2022111512A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N20/10: Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Definitions

  • The present application relates to the technical field of biometrics, and in particular, to a facial liveness detection method, apparatus, and device.
  • Liveness detection can be used to prevent criminals from maliciously using forged biometric features of others for identity authentication, for example by using stolen photos, videos recorded online, or fabricated prosthetic masks.
  • In the current liveness detection process, detection is usually based on the overall features of the face. With such schemes, when the detected user has not completely removed a mask, scarf, or the like, or the face is partially occluded for other reasons, detection accuracy decreases, so liveness detection based on local facial features is particularly important.
  • The purpose of the embodiments of the present application is to provide a facial liveness detection method, apparatus, and device, so as to realize liveness detection based on a local region of a human face.
  • an embodiment of the present application provides a face liveness detection method, including:
  • the living body detection process is performed according to the grayscale feature and the pre-trained living body detection model, and the face living body detection result of the object to be detected is obtained.
  • an embodiment of the present application provides a face liveness detection device, including:
  • an image sensor for acquiring an infrared image of the object to be detected
  • a processor, configured to perform face partial region extraction on the infrared image to obtain a target face partial region image; determine the grayscale feature of the target face partial region image; and perform detection processing according to the grayscale feature and a pre-trained liveness detection model to obtain the face liveness detection result of the object to be detected.
  • an embodiment of the present application provides an electronic device, including: a processor, a communication interface, a memory, and a communication bus; wherein the processor, the communication interface, and the memory communicate with each other through the bus; the memory is used to store A computer program; a processor for executing the program stored in the memory to realize the steps of the above-mentioned method for detecting a living body of a human face.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, implements the steps of the above-mentioned method for detecting a living body of a human face.
  • an embodiment of the present application provides a face liveness detection system, the system includes: a data processor and an image sensor that supports collecting infrared images, wherein:
  • the image sensor is used to collect the infrared image of the object to be detected, and send the collected infrared image to the data processor;
  • the data processor is configured to receive the infrared image collected by the image sensor, perform face partial region extraction on the infrared image to obtain a target face partial region image, determine the grayscale feature of the target face partial region image, and perform liveness detection processing according to the grayscale feature and the pre-trained liveness detection model to obtain the face liveness detection result of the object to be detected.
  • In the embodiments of the present application, face partial region extraction is performed on the infrared image of the object to be detected to obtain a target face partial region image, and liveness detection processing is performed based on the grayscale features of the target face partial region image and a pre-trained liveness detection model to determine whether the object to be detected is a living human face.
  • FIG. 1 is a schematic flowchart of a first method for detecting a human face liveness provided in an embodiment of the present application
  • FIG. 2 is a schematic diagram of the eye region of a living body and a prosthesis in an infrared image provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the mouth region of a living body and a prosthesis in an infrared image provided by an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of a second method for detecting a human face liveness provided by an embodiment of the present application
  • FIG. 5 is a schematic flowchart of a third face liveness detection method according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a fourth method for detecting a living body of a human face provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a fifth method for detecting liveness of a human face provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a sixth method for detecting liveness of a human face provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a seventh method for detecting a living body of a human face provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of an eighth method for detecting liveness of a human face provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a ninth face liveness detection method provided by an embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a tenth face liveness detection method provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the module composition of a face liveness detection apparatus provided by an embodiment of the application.
  • FIG. 14 is a schematic diagram of the composition of an electronic device according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a first face liveness detection method provided by an embodiment of the present application. Referring to FIG. 1 , the method may specifically include the following steps:
  • Step 102 acquiring an infrared image of the object to be detected.
  • the face liveness detection method provided by the embodiments of the present application can be applied to a face liveness detection apparatus, and the face liveness detection apparatus includes an image sensor, such as a 2-megapixel image sensor.
  • The face liveness detection apparatus can be applied to various systems that require liveness detection, such as access control systems, attendance systems, and security systems. When it is necessary to determine whether the object to be detected is a living body, the apparatus collects an infrared image of the object to be detected.
  • It should be noted that a face in an infrared image is little affected by external illumination, its features are stable, and information in local regions such as the eyes and mouth is distinct; therefore, the embodiments of the present application use infrared images for face liveness detection. In practical applications, the acquisition band of the infrared image may be in the infrared range of 850 nm to 1200 nm.
  • Step 104 extracting the partial region of the face on the acquired infrared image to obtain an image of the partial region of the target face.
  • the above-mentioned partial area image of the target face may be: an image of an eye area, an image of a nose area, an image of a mouth area, an image of a forehead area, and the like.
  • For the eye region, the embodiments of the present application take a living body, a paper photo prosthesis, and a silicone prosthesis as examples; see Figure 2, where a is the eye region image in the infrared image of the living body, b is the eye region image in the infrared image of the paper photo prosthesis, and c is the eye region in the infrared image of the silicone prosthesis.
  • In the paper photo prosthesis, the eye has only two parts, black and white; the pupil and iris cannot be distinguished, and the gray distribution of the eye differs greatly from that of a living eye. In the silicone prosthesis, although the pupil, iris, and sclera can be distinguished, the gray distribution of the eye is still significantly different from that of a living eye. It can be seen that there are obvious differences between the eye regions of a living body and of a prosthesis in infrared images.
  • For the mouth region, the living body and the silicone prosthesis are used as examples; see Figure 3, where e is the mouth region image in the infrared image of the living body and f is the mouth region in the infrared image of the silicone prosthesis.
  • The grayscale of a living mouth and of the surrounding face is clearly divided: the mouth is darker and the surrounding face is brighter. The silicone prosthesis is the opposite: the mouth is much brighter than the surrounding face. Although this is related to the reflective properties of the silicone material and its coating, repeated experiments on silicone prostheses of different materials and reflective properties have shown that the grayscale difference is sufficient for liveness detection.
  • the obtained infrared image is subjected to extraction processing of the eye region image and the mouth region image, and the obtained eye region image and mouth region image are determined as the target face local region image.
  • Step 106 Determine the grayscale feature of the partial region image of the target face.
  • a grayscale histogram of the image of the partial region of the target face can be drawn, and the grayscale feature of the image of the partial region of the target face can be determined based on the drawn grayscale histogram.
  • Here, drawing the grayscale histogram of the target face partial region image refers to collecting the grayscale histogram statistics of that image; that is, in actual application the histogram need not actually be drawn, and it suffices to obtain the statistical information of the grayscale histogram of the target face partial region image. The grayscale feature of the image can then be determined based on these statistics.
  • Alternatively, a preset feature extraction algorithm can be used to extract the grayscale features of the target face partial region image, where the feature extraction algorithm may be the SIFT (Scale-Invariant Feature Transform) algorithm, the LBP (Local Binary Pattern) algorithm, etc., as sketched below.
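  • As an illustration of the LBP option just mentioned, the sketch below computes a histogram of uniform LBP codes as a grayscale feature for a region crop. It assumes scikit-image is available; the (points, radius) setting and the normalization are illustrative choices, not values fixed by this application.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_gray_feature(region_image, points=8, radius=1):
    """Histogram of uniform LBP codes as an alternative grayscale feature."""
    codes = local_binary_pattern(region_image, points, radius,
                                 method="uniform")
    n_bins = points + 2  # uniform LBP yields P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist.astype(np.float32) / max(hist.sum(), 1)  # normalized vector
```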
  • Step 108 Perform living body detection processing according to the determined grayscale feature and the pre-trained living body detection model to obtain a face living body detection result of the object to be detected.
  • the determined grayscale feature is input into a pre-trained living body detection model for living body detection processing, and a face living body detection result of the object to be detected is obtained.
  • In this way, face partial region extraction is performed on the infrared image of the object to be detected to obtain the target face partial region image, and liveness detection processing based on its grayscale features and the pre-trained liveness detection model determines whether the object to be detected is a living human face.
  • Specifically, step 104 may include the following steps 104-2 to 104-8:
  • Step 104-2 Perform key point detection processing on the infrared image according to a preset method to obtain eye key points and mouth key points.
  • the eye key points include left eye key points and right eye key points
  • the mouth key points include left mouth corner key points and right mouth corner key points.
  • The preset method may be a facial key point detection algorithm such as ASM (Active Shape Model) or CPR (Cascaded Pose Regression); a sketch follows.
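  • A minimal sketch of this key point stage is shown below. It assumes dlib's 68-point landmark model in place of the ASM or CPR algorithms named above (any detector yielding eye and mouth-corner key points would do), and the model file path is a hypothetical placeholder.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Hypothetical local path to the standard 68-point model file.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_keypoints(ir_image):
    """Return eye-center and mouth-corner key points, or None if no face."""
    faces = detector(ir_image, 1)
    if not faces:
        return None
    shape = predictor(ir_image, faces[0])
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # In the 68-point convention, indices 36-41 and 42-47 cover the two
    # eyes, and 48 / 54 are the two mouth corners.
    eye_a = tuple(sum(c) // 6 for c in zip(*pts[36:42]))
    eye_b = tuple(sum(c) // 6 for c in zip(*pts[42:48]))
    return {"left_eye": eye_a, "right_eye": eye_b,
            "mouth_left": pts[48], "mouth_right": pts[54]}
```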
  • Step 104-4 extract the eye region image according to the obtained eye key points.
  • step 104-4 may include the following steps 104-42 to 104-44:
  • Step 104-42: determine the first distance between the left eye key point and the right eye key point, take the left eye key point and the right eye key point respectively as centers, and demarcate the left eye area and the right eye area according to the first distance and the first preset ratio.
  • Specifically, the first coordinate information of the left eye key point and the second coordinate information of the right eye key point can be determined; the first distance between the two key points is determined from the first and second coordinate information; the length and width of the rectangular area to be demarcated are calculated from the first distance and the first preset ratio; a rectangular left eye area is demarcated centered on the left eye key point with the calculated length and width; and a rectangular right eye area is demarcated centered on the right eye key point in the same way.
  • the coordinate information of each key point may be: the pixel coordinates of the key point in the image, or may be the coordinates of the key point in a preset image coordinate system.
  • The first distance can be regarded as the inter-pupillary distance between the left eye and the right eye. The first preset ratio includes a first preset width ratio and a first preset length ratio, both of which can be set as required in practical applications.
  • the first preset width ratio may be 0.16-0.18 times the IPD
  • the first preset length ratio may be 0.4-0.45 times the IPD.
  • The first preset length ratio can be understood as the ratio of the length of the rectangular area where an eye is located to the distance between the left and right eyes, and the first preset width ratio as the ratio of the width of that rectangular area to the same distance. A sketch follows.
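  • A sketch of the eye-region demarcation, assuming OpenCV-style numpy images and taking the midpoints of the ratio ranges above as arbitrary defaults:

```python
import numpy as np

def demarcate_eye_regions(ir_image, left_eye, right_eye,
                          len_ratio=0.42, wid_ratio=0.17):
    """Crop rectangular eye regions centered on the two eye key points;
    the rectangle length and width are fractions of the inter-pupillary
    distance (the first distance)."""
    d1 = np.hypot(right_eye[0] - left_eye[0],
                  right_eye[1] - left_eye[1])               # first distance
    half_w, half_h = int(d1 * len_ratio / 2), int(d1 * wid_ratio / 2)

    def crop(center):
        cx, cy = center
        return ir_image[max(cy - half_h, 0):cy + half_h,
                        max(cx - half_w, 0):cx + half_w]

    return crop(left_eye), crop(right_eye)
```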
  • Steps 104-44 Determine the images corresponding to the demarcated left eye area and right eye area as eye area images.
  • That is, the image corresponding to the demarcated left-eye area is determined as the left-eye area image, the image corresponding to the demarcated right-eye area is determined as the right-eye area image, and the left-eye area image and the right-eye area image together are determined as the eye area image.
  • step 104-44 may include the following steps 104-442 to 104-446:
  • Step 104-442: perform edge detection processing on the images corresponding to the left eye area and the right eye area, and perform circle fitting based on the result of the edge detection; if the circle fitting fails, execute step 104-444; if the circle fitting succeeds, execute step 104-446.
  • Edge detection can be performed on the images corresponding to the left and right eye regions based on edge detection operators such as the Laplacian operator and the Gaussian operator, so as to obtain the edges in the images as the result of the edge detection processing; circle fitting is then performed based on the detected edges. Since edge detection is prior art, it is not repeated in this application.
  • Circle fitting based on the detected edges can be understood as performing curve fitting on the detected edges and judging whether they can be fitted as a circle. If the edges can be successfully fitted as a circle, the circle fitting process is considered successful; if it is difficult to fit the edges into a circle, the circle fitting process is considered to have failed.
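  • One possible realization of this edge detection plus circle fitting step uses OpenCV's Hough circle transform, which runs an internal Canny edge pass; the parameter values below are illustrative assumptions, not values prescribed by this application.

```python
import cv2

def pupil_circle_found(eye_image, min_r=3, max_r=40):
    """Try to fit a circle (pupil/iris boundary) in an 8-bit eye crop;
    HoughCircles performs Canny edge detection internally, so a None
    result is treated as a failed circle fit."""
    blurred = cv2.GaussianBlur(eye_image, (5, 5), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=eye_image.shape[1],
                               param1=80, param2=20,
                               minRadius=min_r, maxRadius=max_r)
    return circles is not None
```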
  • Steps 104-444 it is determined that there is no living body, and the face living body detection result of the object to be detected is that the detection fails.
  • the circle fitting fails, it means that it is difficult to distinguish the pupil, iris and sclera of the eye in the infrared image, which further means that the above infrared image may be the infrared image of the prosthesis, so it can be determined that there is no living body.
  • the living body detection result is: the detection fails.
  • Steps 104-446 Determine the images corresponding to the demarcated left-eye area and right-eye area as eye area images.
  • the circle fitting is successful, it means that the pupil, the iris and the sclera of the eye can be distinguished in the infrared image, and subsequent living detection can be further performed.
  • the eye region image can be extracted by detecting the eye key points and extracting the eye region image based on the eye key point, so as to ensure the extraction accuracy of the eye region image.
  • In this way, prostheses in the form of paper photos can be identified in time: if the circle fitting fails, it can be directly judged that there is no living body, no subsequent processing is required, and detection efficiency is improved.
  • Step 104-6 extract the mouth region image according to the obtained mouth key points.
  • step 104-6 may include the following steps 104-62 to 104-66:
  • Steps 104-62 Determine the reference point of the mouth according to the key point of the left corner of the mouth and the key point of the right corner of the mouth.
  • the third coordinate information of the key point of the left corner of the mouth and the fourth coordinate information of the key point of the right corner of the mouth can be determined, and the reference point of the mouth is determined according to the third coordinate information and the fourth coordinate information.
  • the average value of the third coordinate information and the fourth coordinate information is calculated, and the point corresponding to the calculated average value is determined as the reference point of the mouth.
  • Considering that the face may be in a tilted state, an adjustment coefficient for correcting the tilt may be set; after the above average value is obtained, it is multiplied by the set adjustment coefficient to obtain the corrected mouth reference point of the object to be detected.
  • Steps 104-64 Determine the second distance between the key point of the left corner of the mouth and the key point of the right corner of the mouth, take the reference point as the center, and define the mouth area according to the second distance and the second preset ratio.
  • Specifically, the second distance between the left mouth corner key point and the right mouth corner key point is calculated; the length and width of the rectangular area to be demarcated are calculated from the second distance and the second preset ratio; and a rectangular mouth area centered on the mouth reference point is demarcated with the calculated length and width.
  • The second preset ratio may include a width ratio. The second distance is determined as the length of the rectangular area to be demarcated; denoting the second distance as d, the width of the area may be 0.32-0.36 times d.
  • the above-mentioned second preset ratio can be understood as: the ratio of the width of the rectangular region where the mouth is located to the distance between the key points of the corners of the mouth on the left and right sides.
  • The above first preset ratio and second preset ratio can be set according to actual needs. Moreover, when the detected key points are not just a left eye key point, a right eye key point, a left mouth corner key point, and a right mouth corner key point but many key points, the eye areas and the mouth area can likewise be delineated based on the detected key points. For example, taking the left-eye area as an example, when there are multiple left-eye key points, a preset graphic area containing those key points can be used as the left-eye area; the preset graphic area may be a circular area, a rectangular area, an elliptical area, etc. A sketch of the mouth-region demarcation follows.
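  • A sketch of the mouth-region demarcation under the same assumptions as the eye sketch above; the tilt-correction coefficient defaults to (1.0, 1.0), i.e. no correction, since the application does not fix its value, and the width ratio takes the midpoint of the 0.32-0.36 range.

```python
import numpy as np

def demarcate_mouth_region(ir_image, mouth_left, mouth_right,
                           wid_ratio=0.34, tilt_coeff=(1.0, 1.0)):
    """Crop a rectangular mouth region centered on the mouth reference
    point (the mean of the two corner key points, optionally scaled by a
    tilt-correction coefficient)."""
    ref_x = int((mouth_left[0] + mouth_right[0]) / 2 * tilt_coeff[0])
    ref_y = int((mouth_left[1] + mouth_right[1]) / 2 * tilt_coeff[1])
    d2 = np.hypot(mouth_right[0] - mouth_left[0],
                  mouth_right[1] - mouth_left[1])          # second distance
    half_w, half_h = int(d2 / 2), int(d2 * wid_ratio / 2)  # length = d2
    return ir_image[max(ref_y - half_h, 0):ref_y + half_h,
                    max(ref_x - half_w, 0):ref_x + half_w]
```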
  • Steps 104-66 determine the image corresponding to the demarcated mouth region as the mouth region image.
  • steps 104-66 may include the following steps 104-662 to 104-666:
  • Step 104-662: calculate the grayscale mean value of the image corresponding to the demarcated mouth region, and determine whether the calculated mean meets the preset condition; if not, execute step 104-664; if so, execute step 104-666.
  • Steps 104-664 it is determined that there is no living body, and the face living body detection result of the object to be detected is that the detection fails.
  • Here the demarcated mouth area may be the mouth area of a prosthesis such as silicone, so it can be determined that there is no living body, and accordingly the liveness detection result is: detection failed.
  • Steps 104-666 determining the image corresponding to the demarcated mouth region as the mouth region image.
  • the subsequent living body detection can be further performed.
  • By detecting the mouth key points and extracting the mouth region image based on them, the accuracy of the extracted mouth region image is ensured. Moreover, prostheses such as silicone ones can be detected in time through the grayscale-mean check, in which case no subsequent detection processing is required and detection efficiency is improved; a sketch of this check follows.
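  • The grayscale-mean precheck of steps 104-662 to 104-666 might look as follows; the application leaves the preset condition unspecified, so the [lo, hi] interval here is a stand-in assumption.

```python
import numpy as np

def mouth_mean_plausible(mouth_image, lo=30, hi=160):
    """Accept the crop only if its mean gray value lies in a plausible
    living-mouth range; a silicone mouth that far outshines the
    surrounding skin would fall outside it."""
    return lo <= float(np.mean(mouth_image)) <= hi
```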
  • Step 104-8 Determine the extracted eye region image and mouth region image as the target face partial region image.
  • step 106 includes the following steps 106-2 to 106-6:
  • Step 106-2 performing normalization processing on the partial region image of the target face to obtain an image of a preset size.
  • the image of the left eye area, the image of the right eye area, and the image of the mouth area are respectively normalized to obtain an image of a preset size.
  • the size of the normalized image can be set by itself in practical applications.
  • Step 106-4 drawing a grayscale histogram of an image of a preset size.
  • Specifically, the gray value of each pixel can be determined and the number of pixels corresponding to each gray value counted; with the gray value as the abscissa and the number of corresponding pixels as the ordinate, the grayscale histogram is drawn from the determined gray values and the counts.
  • As before, drawing the grayscale histogram of the image of the preset size refers to collecting its grayscale histogram statistics; in actual application the histogram need not actually be drawn, and it suffices to obtain the statistical information of the grayscale histogram of the image of the preset size. The grayscale feature of the target face partial region image can then be determined based on these statistics.
  • Step 106-6 Determine a grayscale vector according to the grayscale histogram, and determine the grayscale vector as the grayscale feature of the target face local area image.
  • For each of the left-eye, right-eye, and mouth region images, a one-dimensional grayscale vector is determined, yielding three one-dimensional grayscale vectors. Different elements of each vector correspond to different gray values, and each element represents the number of pixels with the corresponding gray value. The three one-dimensional grayscale vectors are determined as the grayscale features of the target face partial region images; a combined sketch of steps 106-2 to 106-6 follows.
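  • Putting steps 106-2 to 106-6 together, a sketch assuming OpenCV and an arbitrary 64x64 preset size (the application leaves the size open):

```python
import cv2
import numpy as np

PRESET_SIZE = (64, 64)  # illustrative; the application leaves the size open

def gray_histogram_feature(region_image):
    """Normalize a region crop to the preset size and return its
    256-bin grayscale histogram as a one-dimensional vector."""
    resized = cv2.resize(region_image, PRESET_SIZE)
    hist = cv2.calcHist([resized], [0], None, [256], [0, 256])
    return hist.ravel()

# One vector per region, three vectors in total, as described above:
# features = [gray_histogram_feature(img)
#             for img in (left_eye_img, right_eye_img, mouth_img)]
```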
  • the liveness detection process can be performed based on the pretrained liveness detection model according to the grayscale feature.
  • The liveness detection model includes a first liveness detection model and a second liveness detection model, where the first model is used for liveness detection on the eye region and the second model for liveness detection on the mouth region.
  • step 108 may include the following steps 108-2 to 108-6:
  • Step 108-2 Input the grayscale feature of the eye region image into the first living body detection model for detection processing to obtain a first detection result.
  • the grayscale vector of the left-eye region image and the grayscale vector of the right-eye region image are input into the first living body detection model for detection processing to obtain the first detection result.
  • Step 108-4 Input the grayscale feature of the mouth region image into the second living body detection model for detection processing, and obtain a second detection result.
  • the grayscale vector of the mouth region image is input into the second living body detection model for detection processing, and the second detection result is obtained.
  • Step 108-6: if both the first detection result and the second detection result indicate the existence of a living body, determine that the face liveness detection result of the object to be detected is: detection passed. A sketch of this decision follows.
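  • The two-model decision of steps 108-2 to 108-6 can be sketched as follows, assuming sklearn-style classifiers whose predict() returns 1 for a living body (the application does not fix the model interface):

```python
import numpy as np

def face_liveness_two_models(eye_model, mouth_model,
                             left_eye_vec, right_eye_vec, mouth_vec):
    """AND-combine the eye-region and mouth-region model verdicts."""
    eye_feature = np.concatenate([left_eye_vec, right_eye_vec])[None, :]
    eye_live = eye_model.predict(eye_feature)[0] == 1             # first result
    mouth_live = mouth_model.predict(mouth_vec[None, :])[0] == 1  # second result
    return eye_live and mouth_live  # pass only if both indicate a living body
```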
  • the living body detection model may include a third detection model, where the third detection model is used to simultaneously perform live body detection on the eye region and the mouth region.
  • step 108 may include the following steps 108-8 and 108-10:
  • Step 108-8: merge the grayscale features of the eye region image and the grayscale features of the mouth region image to obtain combined grayscale features.
  • the merging method can be set by itself in practical applications.
  • Specifically, the grayscale vector of the eye area image and the grayscale vector of the mouth area image are spliced to obtain a combined grayscale vector, and the combined grayscale vector is determined as the merged grayscale feature.
  • step 108-10 the combined grayscale feature is input into the third living body detection model for detection processing, and a face living body detection result of the object to be detected is obtained.
  • That is, the merged grayscale vector is input into the third liveness detection model for detection processing, and the face liveness detection result of the object to be detected is obtained; a sketch follows.
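  • The merged-feature variant with the third model reduces to a concatenation and a single prediction; the same sklearn-style interface is assumed:

```python
import numpy as np

def face_liveness_merged(third_model, left_eye_vec, right_eye_vec, mouth_vec):
    """Splice the per-region grayscale vectors and query the third model."""
    merged = np.concatenate([left_eye_vec, right_eye_vec, mouth_vec])
    return third_model.predict(merged[None, :])[0] == 1
```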
  • Before the above detection flow, the following steps 100-2 to 100-8 may further be included to train the liveness detection model:
  • Step 100-2 acquiring infrared images of multiple samples, wherein the samples include positive samples and negative samples, the positive samples are living objects, and the negative samples are prosthetic objects.
  • Infrared images of multiple samples can be collected through an infrared image collection device, or infrared images of multiple samples can be obtained from a network.
  • the manner of acquiring the infrared images of the multiple samples is not specifically limited in this application.
  • Step 100-4 extracting the partial region of the face on the infrared image of each sample to obtain the partial region image of the face to be trained.
  • the process of extracting the partial region image of the face is the same as the process of extracting the image of the partial region of the target face, and reference may be made to the above-mentioned related description, and the repeated parts will not be repeated here.
  • Step 100-6 Determine the grayscale feature of the partial region image of the face to be trained.
  • the manner of determining the grayscale feature is the same as the foregoing manner of determining the grayscale feature of the partial region image of the target face, and reference may be made to the foregoing related descriptions, and repeated details will not be repeated here.
  • Step 100-8 perform training processing based on the grayscale features of the partial region images of the face to be trained to obtain a living body detection model.
  • Specifically, the labeled grayscale features are divided into a training set and a test set, and the SVM (Support Vector Machine) algorithm is used to train the liveness detection model on the training set, yielding an initial liveness detection model. The initial model is then tested on the test set to obtain a test result.
  • If the test result meets the preset condition, the initial liveness detection model is determined as the final liveness detection model; if it does not, training is performed again on the training set until the test result meets the preset condition and the final liveness detection model is obtained.
  • the preset condition may be: the accuracy rate of the test result is greater than the preset accuracy rate, and the like. Since the training process of the model is a technical means well known to those skilled in the art, the training process of the model will not be described in detail in this application.
  • Correspondingly, for the two-model scheme, the first liveness detection model is trained on the grayscale features of the eye region images to be trained, and the second liveness detection model is trained on the grayscale features of the mouth region images to be trained.
  • the training method of the living body detection model is also not limited to the above-mentioned SVM training, and may also be trained based on a neural network.
  • the training method of the living body detection model can be set according to the needs in practical applications.
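  • A sketch of the SVM training loop of steps 100-2 to 100-8, assuming scikit-learn; the accuracy threshold, split ratio, kernel, and the candidate C values tried on retraining are all illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_liveness_model(features, labels, min_accuracy=0.99):
    """Train an SVM liveness model; retrain with other hyper-parameters
    until the test accuracy meets the preset condition.

    features: grayscale vectors of the face partial region images.
    labels: 1 for living-body (positive) samples, 0 for prosthesis
    (negative) samples.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        np.asarray(features), np.asarray(labels), test_size=0.2)
    best_acc, best_model = -1.0, None
    for c in (0.1, 1.0, 10.0, 100.0):  # candidate hyper-parameters
        model = SVC(kernel="rbf", C=c).fit(X_train, y_train)
        acc = model.score(X_test, y_test)
        if acc >= min_accuracy:
            return model  # preset condition met
        if acc > best_acc:
            best_acc, best_model = acc, model
    return best_model  # fall back to the best model found
```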
  • In the embodiments of the present application, face partial region extraction is performed on the infrared image of the object to be detected to obtain the target face partial region image, and liveness detection processing is performed based on the grayscale features of that image and the pre-trained liveness detection model to determine whether the object to be detected is a living human face. Thus, accurate and effective liveness detection is realized based on local facial features, which effectively solves problems such as reduced detection accuracy caused by partial occlusion of the face.
  • Based on the same technical concept, one or more embodiments of the present application also provide a face liveness detection apparatus; FIG. 13 is a schematic diagram of its module composition. As shown in FIG. 13, the apparatus includes:
  • An image sensor 201 used for acquiring an infrared image of an object to be detected
  • a processor 202, configured to perform face partial region extraction on the infrared image to obtain a target face partial region image; determine the grayscale feature of the target face partial region image; and perform detection processing according to the grayscale feature and the pre-trained liveness detection model to obtain a face liveness detection result of the object to be detected.
  • The face liveness detection apparatus provided by the embodiments of the present application obtains the target face partial region image by extracting the face partial region from the infrared image of the object to be detected, and performs liveness detection processing based on the grayscale features of that image and the pre-trained liveness detection model to determine whether the object to be detected is a living human face.
  • the processor 202 extracts an eye region image according to the eye key points.
  • the eye region image and the mouth region image are determined as target face local region images.
  • the eye key points include left eye key points and right eye key points;
  • the processor 202 determines a first distance between the left-eye key point and the right-eye key point.
  • the images corresponding to the demarcated left eye area and right eye area are determined as eye area images.
  • the processor 202 performs edge detection processing on the images corresponding to the left eye region and the right eye region;
  • the images corresponding to the demarcated left eye area and right eye area are determined as eye area images
  • the processor 202 determines the first coordinate information of the left eye key point and the second coordinate information of the right eye key point;
  • a first distance between the left-eye keypoint and the right-eye keypoint is determined according to the first coordinate information and the second coordinate information.
  • the key points of the mouth include a key point of the left corner of the mouth and a key point of the right corner of the mouth;
  • the processor 202 determines the reference point of the mouth according to the key point of the left corner of the mouth and the key point of the right corner of the mouth; and,
  • An image corresponding to the demarcated mouth region is determined as a mouth region image.
  • the processor 202 calculates the average gray value of the image corresponding to the predetermined mouth region.
  • the processor 202 determines the third coordinate information of the key point of the left corner of the mouth and the fourth coordinate information of the key point of the right corner of the mouth;
  • the reference point of the mouth is determined according to the third coordinate information and the fourth coordinate information.
  • the processor 202 performs normalization processing on the image of the partial region of the target face to obtain an image of a preset size
  • a grayscale vector is determined according to the grayscale histogram, and the grayscale vector is determined as the grayscale feature of the target face local area image.
  • the processor 202 inputs the grayscale feature of the eye region image into a first living body detection model for detection processing, and obtains a first detection result;
  • both the first detection result and the second detection result indicate the existence of a living body, it is determined that the detection result of the living body of the face of the object to be detected is the detection pass.
  • the processor 202 performs a merge process on the grayscale feature of the eye region image and the grayscale feature of the mouth region to obtain a combined grayscale feature;
  • the combined grayscale feature is input into a third living body detection model for detection processing to obtain a face living body detection result of the object to be detected.
  • the processor 202 acquires infrared images of multiple samples, where the samples include positive samples and negative samples, the positive samples being living objects and the negative samples being prosthetic objects; and,
  • a training process is performed based on the grayscale feature of the partial region image of the face to be trained to obtain the living body detection model.
  • The face liveness detection apparatus provided by the embodiments of the present application obtains the target face partial region image by extracting the face partial region from the infrared image of the object to be detected, and performs liveness detection processing based on the grayscale features of that image and the pre-trained liveness detection model to determine whether the object to be detected is a living human face.
  • one or more embodiments of the present application also provide a face liveness detection system, the system includes: a data processor and an image sensor supporting infrared image acquisition, wherein:
  • the image sensor is used to collect the infrared image of the object to be detected, and send the collected infrared image to the data processor;
  • the data processor is configured to receive the infrared image collected by the image sensor, perform face partial region extraction on the infrared image to obtain a target face partial region image, determine the grayscale feature of the target face partial region image, and perform liveness detection processing according to the grayscale feature and the pre-trained liveness detection model to obtain the face liveness detection result of the object to be detected.
  • FIG. 14 is a schematic structural diagram of an electronic device according to an embodiment of the description.
  • At the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include other hardware required by the business.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, forming a face liveness detection device on a logical level.
  • In addition to a software implementation, this application does not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be hardware or a logic device.
  • The bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one bidirectional arrow is shown in FIG. 14, but it does not mean that there is only one bus or one type of bus.
  • Memory is used to store programs.
  • the program may include program code, and the program code includes computer operation instructions.
  • the memory which may include read-only memory and random access memory, provides instructions and data to the processor.
  • the memory may include high-speed random-access memory (Random-Access Memory, RAM), and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
  • a processor configured to execute the program stored in the memory, and specifically execute:
  • the living body detection process is performed according to the grayscale feature and the pre-trained living body detection model, and the face living body detection result of the object to be detected is obtained.
  • the above-mentioned method performed by the apparatus for detecting a face living body disclosed in the embodiment shown in FIG. 13 of the present application may be applied to a processor, or implemented by a processor.
  • a processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • The embodiments of the present application also provide a computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to execute the face liveness detection method provided by any of the embodiments corresponding to FIG. 1 and FIG. 4 to FIG. 12.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include forms of non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer readable media, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media; information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.

Abstract

Embodiments of the present application provide a facial liveness detection method and apparatus, and a device. The method comprises: acquiring an infrared image of an object to be detected; extracting a facial local region from the acquired infrared image to obtain a target facial local region image; determining a grayscale feature of the target facial local region image; and performing liveness detection according to the determined grayscale feature and a pre-trained liveness detection model to obtain a facial liveness detection result of said object. According to the embodiments of the present application, liveness detection based on local facial features is achieved, thereby avoiding the reduction in liveness detection accuracy caused by partial occlusion of the face.

Description

Face liveness detection method, apparatus, and device

This application claims priority to the Chinese patent application No. 202011349369.5, filed with the China Patent Office on November 26, 2020 and entitled "Facial Liveness Detection Method, Apparatus and Equipment", the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the technical field of biometrics, and in particular, to a facial liveness detection method, apparatus, and device.

Background

Liveness detection can be used to prevent criminals from maliciously using forged biometric features of others for identity authentication, for example by using stolen photos, videos recorded online, or fabricated prosthetic masks. In the current liveness detection process, detection is usually based on the overall features of the face. With such schemes, when the detected user has not completely removed a mask, scarf, or the like, or the face is partially occluded for other reasons, detection accuracy decreases, so liveness detection based on local facial features is particularly important.
Summary of the Invention

The purpose of the embodiments of the present application is to provide a facial liveness detection method, apparatus, and device, so as to realize liveness detection based on a local region of a human face.

To solve the above technical problems, the embodiments of the present application are implemented as follows:

In a first aspect, an embodiment of the present application provides a face liveness detection method, including:

acquiring an infrared image of an object to be detected;

performing face partial region extraction on the infrared image to obtain a target face partial region image;

determining a grayscale feature of the target face partial region image;

performing liveness detection processing according to the grayscale feature and a pre-trained liveness detection model to obtain a face liveness detection result of the object to be detected.

In a second aspect, an embodiment of the present application provides a face liveness detection apparatus, including:

an image sensor for acquiring an infrared image of the object to be detected;

a processor configured to perform face partial region extraction on the infrared image to obtain a target face partial region image; determine a grayscale feature of the target face partial region image; and perform detection processing according to the grayscale feature and a pre-trained liveness detection model to obtain a face liveness detection result of the object to be detected.

In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the bus; the memory is used to store a computer program; and the processor is used to execute the program stored in the memory to implement the steps of the above face liveness detection method.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above face liveness detection method.

In a fifth aspect, an embodiment of the present application provides a face liveness detection system, the system including a data processor and an image sensor supporting infrared image acquisition, where:

the image sensor is used to collect an infrared image of the object to be detected and send the collected infrared image to the data processor;

the data processor is used to receive the infrared image collected by the image sensor, perform face partial region extraction on the infrared image to obtain a target face partial region image, determine a grayscale feature of the target face partial region image, and perform liveness detection processing according to the grayscale feature and the pre-trained liveness detection model to obtain a face liveness detection result of the object to be detected.

In the embodiments of the present application, face partial region extraction is performed on an infrared image of the object to be detected to obtain a target face partial region image, and liveness detection processing is performed based on the grayscale features of the target face partial region image and a pre-trained liveness detection model to determine whether the object to be detected is a living human face. Thus, accurate and effective liveness detection is realized based on local facial features, which effectively solves problems such as reduced detection accuracy caused by partial occlusion of the face.
Brief Description of the Drawings

In order to more clearly illustrate the embodiments of the present application and the technical solutions of the prior art, the drawings required by the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those of ordinary skill in the art can obtain other embodiments from these drawings without creative effort.

FIG. 1 is a schematic flowchart of a first face liveness detection method provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of the eye regions of a living body and prostheses in infrared images provided by an embodiment of the present application;

FIG. 3 is a schematic diagram of the mouth regions of a living body and a prosthesis in infrared images provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of a second face liveness detection method provided by an embodiment of the present application;

FIG. 5 is a schematic flowchart of a third face liveness detection method provided by an embodiment of the present application;

FIG. 6 is a schematic flowchart of a fourth face liveness detection method provided by an embodiment of the present application;

FIG. 7 is a schematic flowchart of a fifth face liveness detection method provided by an embodiment of the present application;

FIG. 8 is a schematic flowchart of a sixth face liveness detection method provided by an embodiment of the present application;

FIG. 9 is a schematic flowchart of a seventh face liveness detection method provided by an embodiment of the present application;

FIG. 10 is a schematic flowchart of an eighth face liveness detection method provided by an embodiment of the present application;

FIG. 11 is a schematic flowchart of a ninth face liveness detection method provided by an embodiment of the present application;

FIG. 12 is a schematic flowchart of a tenth face liveness detection method provided by an embodiment of the present application;

FIG. 13 is a schematic diagram of the module composition of a face liveness detection apparatus provided by an embodiment of the present application;

FIG. 14 is a schematic diagram of the composition of an electronic device provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
FIG. 1 is a schematic flowchart of a first face liveness detection method provided by an embodiment of the present application. Referring to FIG. 1, the method may specifically include the following steps:
Step 102: acquire an infrared image of an object to be detected.
The face liveness detection method provided by the embodiments of the present application can be applied to a face liveness detection apparatus that includes an image sensor, such as a 2-megapixel image sensor. The apparatus can be used in access control systems, attendance systems, security systems, and other systems that require liveness detection. When it is necessary to determine whether an object to be detected is a living body, the apparatus collects an infrared image of the object.
It should be noted that a face in an infrared image is little affected by external illumination, its features are stable, and the information of local regions such as the eyes and mouth is distinct. Therefore, the embodiments of the present application perform face liveness detection on infrared images. In practical applications, the acquisition band of the infrared image may be in the infrared band between 850 nm and 1200 nm.
Step 104: perform face local region extraction on the acquired infrared image to obtain a target face local region image.
The target face local region image may be an eye region image, a nose region image, a mouth region image, a forehead region image, or the like.
To better understand the differences between the face local regions in the infrared images of a living body and of a prosthesis, the embodiments of the present application take a living body, a paper-photo prosthesis, and a silicone prosthesis as examples. Referring to FIG. 2, (a) is the eye region in an infrared image of a living body, (b) is the eye region in an infrared image of a paper-photo prosthesis, and (c) is the eye region in an infrared image of a silicone prosthesis. As can be seen from FIG. 2, for a living body, the pupil, iris, and sclera of the eye can be clearly distinguished. For the paper-photo prosthesis, the eye has only black and white parts, the pupil and iris cannot be distinguished, and the grayscale distribution of the eye differs considerably from that of a living eye. For the silicone prosthesis, although the pupil, iris, and sclera can be distinguished, the grayscale distribution of the eye differs noticeably from that of a living eye. There are thus evident differences between the eye regions of a living body and of a prosthesis in infrared images.
In practical applications, the applicant found that evident differences exist not only in the eye region but also in the mouth region. Taking a living body and a silicone prosthesis as examples, referring to FIG. 3, (e) is the mouth region in an infrared image of a living body and (f) is the mouth region in an infrared image of a silicone prosthesis. As can be seen from FIG. 3, for a living body the grayscale of the mouth and that of the surrounding face are clearly separable: the mouth is darker and the surrounding face is brighter. The silicone prosthesis is the opposite: the mouth is much brighter than the surrounding face. Although this is related to the silicone material and the reflective properties of its coating, repeated experiments on silicone prostheses of different materials and reflective properties have shown that this grayscale difference is sufficient for liveness detection.
Based on this, in the embodiments of the present application, eye region images and a mouth region image are extracted from the acquired infrared image, and the obtained eye region images and mouth region image are determined as the target face local region image.
Step 106: determine the grayscale feature of the target face local region image.
In an embodiment of the present application, a grayscale histogram of the target face local region image may be drawn, and the grayscale feature of the target face local region image is determined based on the drawn histogram. It should be understood that "drawing" the grayscale histogram here refers to computing the grayscale histogram statistics of the target face local region image; that is, in practical applications the histogram does not actually need to be rendered, and only its statistics need to be collected. When the grayscale feature is subsequently determined, it can be determined from these histogram statistics.
In another embodiment of the present application, a preset feature extraction algorithm may be used to extract the grayscale feature of the target face local region image, where the feature extraction algorithm may be an SIFT (Scale-Invariant Feature Transform) algorithm, an LBP (Local Binary Pattern) algorithm, or the like.
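As an illustration of the LBP alternative mentioned above, the following is a minimal sketch using scikit-image; the number of sampling points, the radius, and the normalization are illustrative assumptions rather than values specified by the present application.

```python
# Minimal LBP grayscale-feature sketch (parameters are illustrative assumptions).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_gray_feature(region, points=8, radius=1):
    """Return a normalized LBP histogram for a grayscale face region (uint8)."""
    codes = local_binary_pattern(region, points, radius, method="uniform")
    # The "uniform" variant yields points + 2 distinct code values.
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist.astype(np.float32) / max(hist.sum(), 1)
```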
Step 108: perform liveness detection processing according to the determined grayscale feature and a pre-trained liveness detection model to obtain the face liveness detection result of the object to be detected.
Specifically, the determined grayscale feature is input into the pre-trained liveness detection model for liveness detection processing, and the face liveness detection result of the object to be detected is obtained.
In the embodiments of the present application, face local region extraction is performed on the infrared image of the object to be detected to obtain a target face local region image, and liveness detection processing is performed based on the grayscale feature of the target face local region image and the pre-trained liveness detection model to determine whether the object to be detected is a live face. Accurate and effective liveness detection is thereby achieved based on local features of the face, effectively resolving problems such as the reduced accuracy of detection results caused by partial occlusion of the face.
To accurately extract the target face local region image and improve the accuracy of liveness detection, in one or more embodiments of the present application, face key point detection is performed first, and the face local region images are extracted based on the detected key points. Specifically, as shown in FIG. 4, step 104 may include the following steps 104-2 to 104-8:
Step 104-2: perform key point detection on the infrared image in a preset manner to obtain eye key points and mouth key points.
The eye key points include a left-eye key point and a right-eye key point, and the mouth key points include a left mouth-corner key point and a right mouth-corner key point. The detection manner and the specific detection process of the key points can be set as needed in practical applications, for example using an ASM (Active Shape Model) algorithm or a CPR (Cascaded Pose Regression) algorithm. Since both the ASM algorithm and the CPR algorithm are technical means well known to those skilled in the art, they are not described in detail here.
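As a concrete stand-in for the ASM/CPR detectors named above, the sketch below obtains the four key points from dlib's 68-point landmark model; the detector choice, the model file path, and the single-face assumption are all illustrative and not prescribed by the present application.

```python
# Key point detection sketch with dlib's 68-point model (an illustrative
# stand-in for ASM/CPR; the model path is hypothetical).
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_keypoints(ir_gray):
    """Return (left_eye, right_eye, left_mouth_corner, right_mouth_corner)."""
    rect = detector(ir_gray)[0]  # assume exactly one face in the image
    pts = predictor(ir_gray, rect)
    xy = np.array([[pts.part(i).x, pts.part(i).y] for i in range(68)])
    left_eye = xy[36:42].mean(axis=0)   # iBUG indices 36-41: left eye
    right_eye = xy[42:48].mean(axis=0)  # iBUG indices 42-47: right eye
    return left_eye, right_eye, xy[48], xy[54]  # 48/54: mouth corners
```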
Step 104-4: extract the eye region images according to the obtained eye key points.
Specifically, as shown in FIG. 5, step 104-4 may include the following steps 104-42 to 104-44:
Step 104-42: determine a first distance between the left-eye key point and the right-eye key point, and, taking the left-eye key point and the right-eye key point respectively as centers, demarcate a left-eye region and a right-eye region according to the first distance and a first preset ratio.
Specifically, first coordinate information of the left-eye key point and second coordinate information of the right-eye key point may be determined; the first distance between the left-eye key point and the right-eye key point is determined according to the first coordinate information and the second coordinate information; the length and width of the rectangular regions to be demarcated are calculated according to the first distance and the first preset ratio; a rectangular left-eye region is demarcated centered on the left-eye key point according to the calculated length and width; and a rectangular right-eye region is demarcated centered on the right-eye key point according to the calculated length and width.
The coordinate information of each key point may be the pixel coordinates of the key point in the image, or the coordinates of the key point in a preset image coordinate system.
The first distance can be regarded as the interpupillary distance between the left eye and the right eye. The first preset ratio includes a first preset width ratio and a first preset length ratio, both of which can be set as needed in practical applications. For example, denoting the interpupillary distance as IPD, the first preset width ratio may be 0.16 to 0.18 times the IPD, and the first preset length ratio may be 0.4 to 0.45 times the IPD. The first preset length ratio can be understood as the ratio of the length of the rectangular region containing the eye to the distance between the left and right eyes, and the first preset width ratio as the ratio of the width of that rectangular region to the distance between the left and right eyes.
Step 104-44: determine the images corresponding to the demarcated left-eye region and right-eye region as the eye region images.
Specifically, the image corresponding to the demarcated left-eye region may be determined as a left-eye region image, the image corresponding to the demarcated right-eye region as a right-eye region image, and the left-eye region image and the right-eye region image together as the eye region images.
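A minimal sketch of steps 104-42 to 104-44 is given below; the concrete ratios (length 0.42 x IPD, width 0.17 x IPD) are picked from within the ranges above and are assumptions, not prescribed values.

```python
import numpy as np

def crop_eye_regions(ir_gray, left_eye, right_eye,
                     len_ratio=0.42, wid_ratio=0.17):
    """Demarcate rectangular left/right eye regions around the eye key points."""
    ipd = float(np.linalg.norm(np.asarray(right_eye) - np.asarray(left_eye)))
    half_w = ipd * len_ratio / 2  # "length" taken as the horizontal extent
    half_h = ipd * wid_ratio / 2  # "width" taken as the vertical extent
    crops = []
    for cx, cy in (left_eye, right_eye):
        x0, x1 = int(cx - half_w), int(cx + half_w)
        y0, y1 = int(cy - half_h), int(cy + half_h)
        crops.append(ir_gray[max(y0, 0):y1, max(x0, 0):x1])
    return crops  # [left_eye_region_image, right_eye_region_image]
```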
Considering that it is difficult to distinguish the pupil, iris, and sclera of the eye in the infrared image of a paper-photo prosthesis, and given the particular structure of the eye, for a living body two circles can be fitted based on the grayscale features, as shown by the dashed circles in (a) of FIG. 2. Based on this, in the process of obtaining the eye region images, liveness detection can be performed on the object to be detected through circle fitting. Specifically, as shown in FIG. 6, step 104-44 may include the following steps 104-442 to 104-446:
Step 104-442: perform edge detection on the images corresponding to the left-eye region and the right-eye region, and perform circle fitting based on the result of the edge detection; if the circle fitting fails, perform step 104-444; if the circle fitting succeeds, perform step 104-446.
Edge detection may be performed on the images corresponding to the left-eye region and the right-eye region based on an edge detection operator such as the Laplacian operator or a Gaussian-based operator, the detected edges being taken as the result of the edge detection, on which circle fitting is then performed. Since edge detection is prior art, it is not described in detail in the present application.
Performing circle fitting based on the detected edges can be understood as performing curve fitting on the detected edges and judging whether they can be fitted to circles. If the edges can be successfully fitted to circles, the circle fitting is considered successful; if it is difficult to fit the edges to circles, the circle fitting is considered to have failed.
Step 104-444: determine that no living body is present, the face liveness detection result of the object to be detected being that the detection fails.
Specifically, if the circle fitting fails, it indicates that the pupil, iris, and sclera of the eye are difficult to distinguish in the infrared image, and hence that the infrared image may be that of a prosthesis. It can therefore be determined that no living body is present, and accordingly the liveness detection result is determined to be: detection failed.
Step 104-446: determine the images corresponding to the demarcated left-eye region and right-eye region as the eye region images.
Specifically, if the circle fitting succeeds, it indicates that the pupil, iris, and sclera of the eye can be distinguished in the infrared image, and subsequent liveness detection can proceed. In this case, extracting the eye region images based on the detected eye key points ensures the extraction accuracy of the eye region images.
By performing circle fitting during the extraction of the eye region images, prostheses such as paper photos can be identified in time. When the circle fitting fails, it can be directly judged that no living body is present, subsequent processing need not be executed, and the detection efficiency is improved.
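The embodiment leaves the concrete fitting method open; one common realization of steps 104-442 to 104-446 is a Hough circle transform, sketched below with illustrative thresholds.

```python
import cv2

def eye_circles_found(eye_img):
    """Return True if circular structures (pupil/iris) can be fitted."""
    blurred = cv2.GaussianBlur(eye_img, (5, 5), 0)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.5,
        minDist=max(eye_img.shape[1] // 4, 1),
        param1=100, param2=20,  # Canny and accumulator thresholds (assumed)
        minRadius=2, maxRadius=max(eye_img.shape[0] // 2, 3))
    return circles is not None

# If fitting fails for either eye, no living body is present:
# if not (eye_circles_found(left_img) and eye_circles_found(right_img)):
#     result = "detection failed"
```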
Step 104-6: extract the mouth region image according to the obtained mouth key points.
Specifically, as shown in FIG. 7, step 104-6 may include the following steps 104-62 to 104-66:
Step 104-62: determine a base point of the mouth according to the left mouth-corner key point and the right mouth-corner key point.
Specifically, third coordinate information of the left mouth-corner key point and fourth coordinate information of the right mouth-corner key point may be determined, and the base point of the mouth is determined according to the third and fourth coordinate information. For example, the average of the third coordinate information and the fourth coordinate information is calculated, and the point corresponding to the calculated average is determined as the base point of the mouth.
In an embodiment of the present application, if the object to be detected is tilted when the infrared image is acquired, an adjustment coefficient for correcting the tilt may be set; after the average is obtained, the average is multiplied by the set adjustment coefficient to obtain the corrected base point of the mouth of the object to be detected.
Step 104-64: determine a second distance between the left mouth-corner key point and the right mouth-corner key point, and demarcate a mouth region centered on the base point according to the second distance and a second preset ratio.
Specifically, the second distance between the left mouth-corner key point and the right mouth-corner key point is calculated according to the third coordinate information of the left mouth-corner key point and the fourth coordinate information of the right mouth-corner key point; the length and width of the rectangular region to be demarcated are calculated according to the second distance and the second preset ratio; and a rectangular mouth region is demarcated centered on the base point of the mouth according to the calculated length and width. The second preset ratio may include a width ratio: the second distance is taken as the length of the rectangular region to be demarcated and, denoting this distance as d, the second preset ratio may be 0.32 to 0.36 times d. The second preset ratio can be understood as the ratio of the width of the rectangular region containing the mouth to the distance between the left and right mouth-corner key points.
It should be noted that both the first preset ratio and the second preset ratio can be set as needed in practical applications. Moreover, when the detected key points are not merely one left-eye key point, one right-eye key point, one left mouth-corner key point, and one right mouth-corner key point, but many key points, the eye regions and the mouth region can also be demarcated based on the detected key points. For example, taking the left-eye region as an example, when multiple left-eye key points exist, a preset graphic region containing these left-eye key points can be taken as the left-eye region, and the preset graphic region may be a circular region, a rectangular region, an elliptical region, or the like.
Step 104-66: determine the image corresponding to the demarcated mouth region as the mouth region image.
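A sketch of steps 104-62 to 104-66, paired with the eye-region helper above; the 0.34 width ratio is chosen from within the range given, and the tilt-adjustment coefficient is omitted for brevity.

```python
import numpy as np

def crop_mouth_region(ir_gray, left_corner, right_corner, wid_ratio=0.34):
    """Demarcate a rectangular mouth region from the two mouth-corner points."""
    lc = np.asarray(left_corner, dtype=float)
    rc = np.asarray(right_corner, dtype=float)
    base = (lc + rc) / 2.0                     # base point: mean of the corners
    d = float(np.linalg.norm(rc - lc))         # second distance
    half_w, half_h = d / 2, d * wid_ratio / 2  # length = d, width = 0.34 * d
    x0, x1 = int(base[0] - half_w), int(base[0] + half_w)
    y0, y1 = int(base[1] - half_h), int(base[1] + half_h)
    return ir_gray[max(y0, 0):y1, max(x0, 0):x1]
```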
Considering that the grayscale distribution of the mouth region of a prosthesis such as silicone differs markedly from that of the mouth region of a living body, in the process of obtaining the mouth region image, whether the object to be detected is a living body can be judged based on the grayscale feature. Specifically, as shown in FIG. 8, in an embodiment of the present application, step 104-66 may include the following steps 104-662 to 104-666:
Step 104-662: compute the grayscale mean of the image corresponding to the demarcated mouth region, and determine whether the computed grayscale mean meets a preset condition; if not, perform step 104-664; if so, perform step 104-666.
Specifically, the number of pixels of the image corresponding to the demarcated mouth region and the grayscale value of each pixel are determined, and the grayscale mean is calculated from the determined number and grayscale values. It is then determined whether the calculated grayscale mean is greater than a preset grayscale value; if so, the computed grayscale mean is determined not to meet the preset condition; if not, it is determined to meet the preset condition.
Step 104-664: determine that no living body is present, the face liveness detection result of the object to be detected being that the detection fails.
Specifically, if the computed grayscale mean does not meet the preset condition, it indicates that the mouth region may be that of a prosthesis such as silicone. It can therefore be determined that no living body is present, and accordingly the liveness detection result is determined to be: detection failed.
Step 104-666: determine the image corresponding to the demarcated mouth region as the mouth region image.
Specifically, if the computed grayscale mean meets the preset condition, subsequent liveness detection can proceed. In this case, detecting the mouth key points and extracting the mouth region image based on them ensures the accuracy of the extracted mouth region image. By calculating the grayscale mean of the mouth region and judging whether it meets the preset condition, prostheses such as silicone can be detected in time; in that case subsequent detection processing need not be executed, and the detection efficiency is improved.
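The preset grayscale value is application-specific; the threshold in the sketch below is a placeholder assumption.

```python
import numpy as np

MOUTH_GRAY_THRESHOLD = 140  # placeholder; the preset value is not specified here

def mouth_gray_check(mouth_img):
    """Step 104-662: a mouth brighter than the threshold suggests a prosthesis."""
    return float(np.mean(mouth_img)) <= MOUTH_GRAY_THRESHOLD

# if not mouth_gray_check(mouth_img): result = "detection failed"
```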
Step 104-8: determine the extracted eye region images and mouth region image as the target face local region image.
It should be noted that the execution order of steps 104-4 and 104-6 is interchangeable, and the two steps may also be executed simultaneously.
Considering that different imaging devices have different imaging distances (for example, some imaging devices are suitable for imaging within a range of 30 cm to 100 cm, while others are suitable for a range of 30 cm to 200 cm or farther), and that the size of the face in the resulting infrared image often varies with the imaging distance, in order to avoid detection differences caused by differences in face image size, in one or more embodiments of the present application the extracted target face local region image is normalized before subsequent processing. Specifically, as shown in FIG. 9, step 106 includes the following steps 106-2 to 106-6:
Step 106-2: normalize the target face local region image to obtain an image of a preset size.
Specifically, the left-eye region image, the right-eye region image, and the mouth region image are each normalized to obtain images of a preset size. The size of the normalized image can be set as needed in practical applications.
Step 106-4: draw the grayscale histogram of the image of the preset size.
Specifically, for each image of the preset size, the grayscale value of each of its pixels can be determined and the number of pixels corresponding to each grayscale value counted; with the grayscale value as the abscissa and the number of pixels corresponding to each grayscale value as the ordinate, the grayscale histogram is drawn from the determined grayscale values and the counted numbers.
It should be noted that, when drawing the grayscale histogram, an interval of grayscale values may also be specified; for example, only the numbers of pixels corresponding to grayscale values within [20, 220] need be counted and the grayscale histogram drawn over that interval.
It should be understood that "drawing" the grayscale histogram of the image of the preset size refers to computing its grayscale histogram statistics; that is, in practical applications the histogram does not actually need to be rendered, and only its statistics need to be collected. When the grayscale feature is subsequently determined, the grayscale feature of the target face local region image can be determined from these histogram statistics.
Step 106-6: determine a grayscale vector according to the grayscale histogram, and determine the grayscale vector as the grayscale feature of the target face local region image.
Specifically, one one-dimensional grayscale vector is determined from each of the grayscale histograms of the left-eye region image, the right-eye region image, and the mouth region image, yielding three one-dimensional grayscale vectors. Different entries of such a one-dimensional vector correspond to different grayscale values, and each entry represents the number of pixels with the corresponding grayscale value. The three one-dimensional grayscale vectors are determined as the grayscale feature of the target face local region image.
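A sketch of steps 106-2 to 106-6; the 64 x 64 normalized size is an assumption, and the [20, 220] interval follows the example given above.

```python
import cv2
import numpy as np

def gray_vector(region, size=(64, 64), lo=20, hi=220):
    """Normalize a region (step 106-2), then build its histogram vector (106-6)."""
    norm = cv2.resize(region, size, interpolation=cv2.INTER_LINEAR)
    # One bin per grayscale value in [lo, hi]; each entry is a pixel count.
    hist, _ = np.histogram(norm, bins=hi - lo + 1, range=(lo, hi + 1))
    return hist.astype(np.float32)

# Three one-dimensional grayscale vectors, one per local region:
# feats = [gray_vector(r) for r in (left_eye_img, right_eye_img, mouth_img)]
```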
To improve the detection accuracy, after the grayscale feature of the target face local region image is obtained, liveness detection processing can be performed according to the grayscale feature based on the pre-trained liveness detection model.
In an embodiment of the present application, the liveness detection model includes a first liveness detection model and a second liveness detection model, where the first liveness detection model is used to perform liveness detection on the eye regions, and the second liveness detection model is used to perform liveness detection on the mouth region. Correspondingly, as shown in FIG. 10, step 108 may include the following steps 108-2 to 108-6:
Step 108-2: input the grayscale feature of the eye region images into the first liveness detection model for detection processing to obtain a first detection result.
Specifically, the grayscale vector of the left-eye region image and the grayscale vector of the right-eye region image are input into the first liveness detection model for detection processing to obtain the first detection result.
Step 108-4: input the grayscale feature of the mouth region image into the second liveness detection model for detection processing to obtain a second detection result.
Specifically, the grayscale vector of the mouth region image is input into the second liveness detection model for detection processing to obtain the second detection result.
Step 108-6: if both the first detection result and the second detection result indicate the presence of a living body, determine that the face liveness detection result of the object to be detected is that the detection passes.
When any one or more of the first detection result and the second detection result indicate that no living body is present, it is determined that the face liveness detection result of the object to be detected is that the detection fails.
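Assuming classifiers with a scikit-learn style predict interface (for example the SVMs trained in the embodiment below) and the label convention 1 = living body, steps 108-2 to 108-6 reduce to:

```python
import numpy as np

def detect_two_models(eye_model, mouth_model, left_vec, right_vec, mouth_vec):
    """First/second detection results combined; both must indicate a living body."""
    eye_feat = np.concatenate([left_vec, right_vec]).reshape(1, -1)
    first = eye_model.predict(eye_feat)[0]                     # first result
    second = mouth_model.predict(mouth_vec.reshape(1, -1))[0]  # second result
    return "pass" if first == 1 and second == 1 else "fail"
```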
In one or more embodiments of the present application, the liveness detection model may include a third liveness detection model, which is used to perform liveness detection on the eye regions and the mouth region simultaneously. Correspondingly, as shown in FIG. 11, step 108 may include the following steps 108-8 and 108-10:
Step 108-8: merge the grayscale feature of the eye region images and the grayscale feature of the mouth region to obtain a merged grayscale feature.
The merging manner can be set as needed in practical applications; for example, the grayscale vectors of the eye region images and the grayscale vector of the mouth region image are concatenated to obtain a merged grayscale vector, and the merged grayscale vector is determined as the merged grayscale feature.
Step 108-10: input the merged grayscale feature into the third liveness detection model for detection processing to obtain the face liveness detection result of the object to be detected.
Specifically, the merged grayscale vector is input into the third liveness detection model for detection processing to obtain the face liveness detection result of the object to be detected.
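Under the same assumptions, the single-model variant of steps 108-8 and 108-10 is:

```python
import numpy as np

def detect_merged_model(third_model, left_vec, right_vec, mouth_vec):
    """Concatenate the three grayscale vectors and classify once."""
    merged = np.concatenate([left_vec, right_vec, mouth_vec]).reshape(1, -1)
    return "pass" if third_model.predict(merged)[0] == 1 else "fail"
```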
It should be noted that, for scenarios where the security requirements are not especially high, liveness detection may also be performed only through the first liveness detection model, or only through the second liveness detection model. Correspondingly, after the infrared image of the object to be detected is acquired, only the eye region images or only the mouth region image may be extracted. The number of models and the manner of their use can both be set as needed in practical applications.
To achieve highly accurate liveness detection based on face local regions, in one or more embodiments of the present application, as shown in FIG. 12, the following steps 100-2 to 100-8 may be further included before step 102:
Step 100-2: acquire infrared images of multiple samples, where the samples include positive samples and negative samples, the positive samples being living objects and the negative samples being prosthesis objects.
The infrared images of the multiple samples may be collected by an infrared image collection device, or may be obtained from a network. The manner of acquiring the infrared images of the multiple samples is not specifically limited in the present application.
Step 100-4: perform face local region extraction on the infrared image of each sample to obtain face local region images to be trained.
The extraction process of the face local region images is the same as the aforementioned extraction process of the target face local region image; reference may be made to the related description above, and the repeated parts are not described here again.
Step 100-6: determine the grayscale features of the face local region images to be trained.
The manner of determining the grayscale features is the same as the aforementioned manner of determining the grayscale feature of the target face local region image; reference may be made to the related description above, and the repeated parts are not described here again.
Step 100-8: perform training processing based on the grayscale features of the face local region images to be trained according to a preset training manner to obtain the liveness detection model.
Specifically, according to the living-body or prosthesis attribute of each sample, the grayscale features of the face local region images to be trained are labeled to mark whether the corresponding sample is a living body or a prosthesis. The labeled grayscale features are divided into a training set and a test set; based on the training set, an SVM (Support Vector Machine) algorithm is used to train the liveness detection model, yielding an initial liveness detection model. The initial liveness detection model is tested on the test set to obtain a test result. If the test result is determined to meet a preset condition, the initial liveness detection model is determined as the final liveness detection model; if the test result is determined not to meet the preset condition, the training of the liveness detection model is performed again based on the training set until the test result meets the preset condition, yielding the final liveness detection model. The preset condition may be, for example, that the accuracy of the test result is greater than a preset accuracy. Since model training is a technical means well known to those skilled in the art, it is not described in detail in the present application.
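A minimal scikit-learn sketch of step 100-8; the RBF kernel, the 80/20 split, the candidate C values, and the accuracy bar are assumptions rather than values fixed by the present application.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_liveness_model(features, labels, min_accuracy=0.99):
    """Train SVM candidates; keep the first whose test accuracy meets the bar."""
    X_train, X_test, y_train, y_test = train_test_split(
        np.asarray(features), np.asarray(labels),
        test_size=0.2, stratify=labels, random_state=0)
    for c in (0.1, 1.0, 10.0, 100.0):  # retrain with an adjusted C if needed
        model = SVC(kernel="rbf", C=c).fit(X_train, y_train)
        if model.score(X_test, y_test) >= min_accuracy:  # preset condition
            return model
    raise RuntimeError("no candidate met the preset accuracy condition")
```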
It should be noted that, when the liveness detection model includes the first liveness detection model and the second liveness detection model, the first liveness detection model is trained based on the grayscale features of the eye region images to be trained, and the second liveness detection model is trained based on the grayscale features of the mouth region images to be trained. The training manner of the liveness detection model is also not limited to the above SVM training; training may also be performed based on a neural network, for example. The training manner of the liveness detection model can be set as needed in practical applications.
In the embodiments of the present application, face local region extraction is performed on the infrared image of the object to be detected to obtain a target face local region image, and liveness detection processing is performed based on the grayscale feature of the target face local region image and the pre-trained liveness detection model to determine whether the object to be detected is a live face. Accurate and effective liveness detection is thereby achieved based on local features of the face, effectively resolving problems such as the reduced accuracy of detection results caused by partial occlusion of the face.
Based on the same technical concept, one or more embodiments of the present application further provide a face liveness detection apparatus. FIG. 13 is a schematic diagram of the module composition of a face liveness detection apparatus provided by one or more embodiments of the present application. As shown in FIG. 13, the apparatus includes:
an image sensor 201, configured to acquire an infrared image of an object to be detected; and
a processor 202, configured to perform face local region extraction on the infrared image to obtain a target face local region image; determine the grayscale feature of the target face local region image; and perform detection processing according to the grayscale feature and a pre-trained liveness detection model to obtain the face liveness detection result of the object to be detected.
In the face liveness detection apparatus provided by the embodiments of the present application, face local region extraction is performed on the infrared image of the object to be detected to obtain the target face local region image, and liveness detection processing is performed based on the grayscale feature of the target face local region image and the pre-trained liveness detection model to determine whether the object to be detected is a live face. Accurate and effective liveness detection is thereby achieved based on local features of the face, effectively resolving problems such as the reduced accuracy of detection results caused by partial occlusion of the face.
In an embodiment of the present application, the processor 202 extracts the eye region images according to the eye key points; and
extracts the mouth region image according to the mouth key points; and
determines the eye region images and the mouth region image as the target face local region image.
In an embodiment of the present application, the eye key points include a left-eye key point and a right-eye key point.
Correspondingly, the processor 202 determines a first distance between the left-eye key point and the right-eye key point; and
demarcates, centered respectively on the left-eye key point and the right-eye key point, a left-eye region and a right-eye region according to the first distance and a first preset ratio; and
determines the images corresponding to the demarcated left-eye region and right-eye region as the eye region images.
In an embodiment of the present application, the processor 202 performs edge detection on the images corresponding to the left-eye region and the right-eye region; and
performs circle fitting based on the result of the edge detection;
if the circle fitting succeeds, determines the images corresponding to the demarcated left-eye region and right-eye region as the eye region images; and
if the circle fitting fails, determines that no living body is present, the face liveness detection result of the object to be detected being that the detection fails.
In an embodiment of the present application, the processor 202 determines first coordinate information of the left-eye key point and second coordinate information of the right-eye key point; and
determines the first distance between the left-eye key point and the right-eye key point according to the first coordinate information and the second coordinate information.
In an embodiment of the present application, the mouth key points include a left mouth-corner key point and a right mouth-corner key point.
Correspondingly, the processor 202 determines a base point of the mouth according to the left mouth-corner key point and the right mouth-corner key point; and
determines a second distance between the left mouth-corner key point and the right mouth-corner key point;
demarcates, centered on the base point, a mouth region according to the second distance and a second preset ratio; and
determines the image corresponding to the demarcated mouth region as the mouth region image.
In an embodiment of the present application, the processor 202 computes the grayscale mean of the image corresponding to the demarcated mouth region; and
determines whether the grayscale mean meets a preset condition;
if so, determines the image corresponding to the demarcated mouth region as the mouth region image; and
if not, determines that no living body is present, the face liveness detection result of the object to be detected being that the detection fails.
In an embodiment of the present application, the processor 202 determines third coordinate information of the left mouth-corner key point and fourth coordinate information of the right mouth-corner key point; and
determines the base point of the mouth according to the third coordinate information and the fourth coordinate information.
In an embodiment of the present application, the processor 202 normalizes the target face local region image to obtain an image of a preset size;
draws the grayscale histogram of the image of the preset size; and
determines a grayscale vector according to the grayscale histogram, and determines the grayscale vector as the grayscale feature of the target face local region image.
In an embodiment of the present application, the processor 202 inputs the grayscale feature of the eye region images into the first liveness detection model for detection processing to obtain a first detection result;
inputs the grayscale feature of the mouth region image into the second liveness detection model for detection processing to obtain a second detection result; and
if both the first detection result and the second detection result indicate the presence of a living body, determines that the face liveness detection result of the object to be detected is that the detection passes.
In an embodiment of the present application, the processor 202 merges the grayscale feature of the eye region images and the grayscale feature of the mouth region to obtain a merged grayscale feature; and
inputs the merged grayscale feature into the third liveness detection model for detection processing to obtain the face liveness detection result of the object to be detected.
In an embodiment of the present application, the processor 202 acquires infrared images of multiple samples, where the samples include positive samples and negative samples, the positive samples being living objects and the negative samples being prosthesis objects; and
performs face local region extraction on the infrared image of each sample to obtain face local region images to be trained;
determines the grayscale features of the face local region images to be trained; and
performs training processing based on the grayscale features of the face local region images to be trained according to a preset training manner to obtain the liveness detection model.
In the face liveness detection apparatus provided by the embodiments of the present application, face local region extraction is performed on the infrared image of the object to be detected to obtain the target face local region image, and liveness detection processing is performed based on the grayscale feature of the target face local region image and the pre-trained liveness detection model to determine whether the object to be detected is a live face. Accurate and effective liveness detection is thereby achieved based on local features of the face, effectively resolving problems such as the reduced accuracy of detection results caused by partial occlusion of the face.
In addition, since the above apparatus embodiments are substantially similar to the method embodiments, they are described relatively simply; for related parts, reference may be made to the descriptions of the method embodiments. It should also be noted that, among the components of the apparatus of the present application, the components are logically divided according to the functions to be implemented; however, the present application is not limited thereto, and the components may be re-divided or combined as needed.
Based on the same technical concept, one or more embodiments of the present application further provide a face liveness detection system, the system including a data processor and an image sensor supporting infrared image acquisition, where:
the image sensor is configured to collect the infrared image of the object to be detected and send the collected infrared image to the data processor; and
the data processor is configured to receive the infrared image collected by the image sensor, perform face local region extraction on the infrared image to obtain the target face local region image, determine the grayscale feature of the target face local region image, and perform liveness detection processing according to the grayscale feature and the pre-trained liveness detection model to obtain the face liveness detection result of the object to be detected.
FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present specification. Referring to FIG. 14, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and runs it, forming the face liveness detection apparatus at the logical level. Of course, in addition to software implementations, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution body of the following processing flow is not limited to logical units and may also be hardware or logic devices.
The network interface, the processor, and the memory may be interconnected through a bus system. The bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is shown in FIG. 14, but this does not mean that there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include a read-only memory and a random access memory, and provides instructions and data to the processor. The memory may include a high-speed random-access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
The processor is configured to execute the program stored in the memory, and specifically to:
acquire an infrared image of an object to be detected;
perform face local region extraction on the infrared image to obtain a target face local region image;
determine the grayscale feature of the target face local region image; and
perform liveness detection processing according to the grayscale feature and a pre-trained liveness detection model to obtain the face liveness detection result of the object to be detected.
上述如本申请图13所示实施例揭示的人脸活体检测装置执行的方法可以应用于处理器中,或者由处理器实现。处理器可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。The above-mentioned method performed by the apparatus for detecting a face living body disclosed in the embodiment shown in FIG. 13 of the present application may be applied to a processor, or implemented by a processor. A processor may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software. The above-mentioned processor can be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it can also be a digital signal processor (Digital Signal Processor, DSP), dedicated integrated Circuit (Application Specific Integrated Circuit, ASIC), Field Programmable Gate Array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The methods, steps, and logic block diagrams disclosed in the embodiments of this application can be implemented or executed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art. The storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
Based on the same technical concept, the embodiments of this application further provide a computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the face liveness detection method provided by any of the embodiments corresponding to FIG. 1 and FIG. 4 to FIG. 12.
The embodiments in this application are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the system, apparatus, electronic device, computer-readable storage medium, and computer program product embodiments are basically similar to the method embodiments, so they are described briefly; for relevant details, refer to the description of the method embodiments.
Specific embodiments of this application are described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Accordingly, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent storage, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible to a computing device. As defined herein, computer-readable media exclude transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or device that includes the element.
The above are merely embodiments of this application and are not intended to limit it. Various modifications and variations of this application are possible for those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall fall within the scope of its claims.

Claims (16)

  1. A face liveness detection method, comprising:
    obtaining an infrared image of an object to be detected;
    performing face local region extraction on the infrared image to obtain a target face local region image;
    determining a grayscale feature of the target face local region image;
    performing liveness detection according to the grayscale feature and a pre-trained liveness detection model to obtain a face liveness detection result for the object to be detected.
  2. The method according to claim 1, wherein performing face local region extraction on the infrared image to obtain a target face local region image comprises:
    performing keypoint detection on the infrared image in a preset manner to obtain eye keypoints and mouth keypoints;
    extracting an eye region image according to the eye keypoints;
    extracting a mouth region image according to the mouth keypoints;
    determining the eye region image and the mouth region image as target face local region images.
  3. The method according to claim 2, wherein the eye keypoints include a left-eye keypoint and a right-eye keypoint;
    extracting an eye region image according to the eye keypoints comprises:
    determining a first distance between the left-eye keypoint and the right-eye keypoint;
    delimiting a left-eye region and a right-eye region centered on the left-eye keypoint and the right-eye keypoint, respectively, according to the first distance and a first preset ratio;
    determining the images corresponding to the delimited left-eye region and right-eye region as eye region images.
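As an illustration of claims 3 and 5, the following sketch computes the first distance from the coordinates of the two eye keypoints and crops a square around each keypoint. The Euclidean metric, the square shape, and the `ratio` value are assumptions; the claims leave the distance metric and the first preset ratio open.

```python
import numpy as np

def extract_eye_regions(ir_image, left_eye, right_eye, ratio=0.4):
    """Crop left/right eye regions centered on the eye keypoints."""
    p_l = np.asarray(left_eye, dtype=float)    # (x, y) of left-eye keypoint
    p_r = np.asarray(right_eye, dtype=float)   # (x, y) of right-eye keypoint
    d = np.linalg.norm(p_l - p_r)              # first distance (claim 5)
    half = max(int(d * ratio / 2), 1)          # half side from first preset ratio
    h, w = ir_image.shape[:2]
    crops = []
    for cx, cy in (p_l, p_r):                  # centered on each keypoint
        x0, y0 = max(int(cx) - half, 0), max(int(cy) - half, 0)
        x1, y1 = min(int(cx) + half, w), min(int(cy) + half, h)
        crops.append(ir_image[y0:y1, x0:x1])
    return crops                               # [left-eye image, right-eye image]
```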
  4. The method according to claim 3, wherein determining the images corresponding to the delimited left-eye region and right-eye region as eye region images comprises:
    performing edge detection on the images corresponding to the left-eye region and the right-eye region;
    performing circle fitting based on the result of the edge detection;
    if the circle fitting succeeds, determining the images corresponding to the delimited left-eye region and right-eye region as eye region images;
    if the circle fitting fails, determining that no living body is present, the face liveness detection result for the object to be detected being that the detection fails.
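A sketch of the check in claim 4 follows. The claim only requires edge detection followed by circle fitting; OpenCV's Hough gradient transform is used here as one common choice and is an assumption, as are the threshold values.

```python
import cv2

def eye_has_circular_pupil(eye_crop) -> bool:
    """Return True if a roughly circular pupil/iris can be fitted."""
    gray = eye_crop if eye_crop.ndim == 2 else cv2.cvtColor(eye_crop, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # HOUGH_GRADIENT performs Canny edge detection internally (param1 is
    # the upper Canny threshold) and then votes for circle candidates,
    # covering both steps of the claim in one call.
    circles = cv2.HoughCircles(
        blur, cv2.HOUGH_GRADIENT, dp=1.5,
        minDist=max(gray.shape[0], 1),          # expect at most one pupil
        param1=120, param2=20,
        minRadius=2, maxRadius=max(gray.shape[0] // 2, 3),
    )
    return circles is not None                  # no circle -> detection fails
```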
  5. The method according to claim 3, wherein determining the first distance between the left-eye keypoint and the right-eye keypoint comprises:
    determining first coordinate information of the left-eye keypoint and second coordinate information of the right-eye keypoint;
    determining the first distance between the left-eye keypoint and the right-eye keypoint according to the first coordinate information and the second coordinate information.
  6. The method according to claim 2, wherein the mouth keypoints include a left mouth-corner keypoint and a right mouth-corner keypoint;
    extracting a mouth region image according to the mouth keypoints comprises:
    determining a reference point of the mouth according to the left mouth-corner keypoint and the right mouth-corner keypoint;
    determining a second distance between the left mouth-corner keypoint and the right mouth-corner keypoint;
    delimiting a mouth region centered on the reference point according to the second distance and a second preset ratio;
    determining the image corresponding to the delimited mouth region as the mouth region image.
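A sketch of claims 6 and 8: the reference point is taken here as the midpoint of the two mouth-corner keypoints, which is one natural reading of "determined from both corners" but still an assumption, as are the ratio values.

```python
import numpy as np

def extract_mouth_region(ir_image, left_corner, right_corner, ratio=0.8):
    """Crop the mouth region around a reference point between the corners."""
    p_l = np.asarray(left_corner, dtype=float)
    p_r = np.asarray(right_corner, dtype=float)
    center = (p_l + p_r) / 2.0             # reference point (claim 8, assumed midpoint)
    d = np.linalg.norm(p_l - p_r)          # second distance
    half_w = max(int(d * ratio / 2), 1)    # second preset ratio (assumed)
    half_h = max(int(d * ratio / 3), 1)    # assumed mouth-box aspect
    h, w = ir_image.shape[:2]
    cx, cy = int(center[0]), int(center[1])
    x0, y0 = max(cx - half_w, 0), max(cy - half_h, 0)
    x1, y1 = min(cx + half_w, w), min(cy + half_h, h)
    return ir_image[y0:y1, x0:x1]
```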
  7. The method according to claim 6, wherein determining the image corresponding to the delimited mouth region as the mouth region image comprises:
    computing the grayscale mean of the image corresponding to the delimited mouth region;
    determining whether the grayscale mean meets a preset condition;
    if so, determining the image corresponding to the delimited mouth region as the mouth region image;
    if not, determining that no living body is present, the face liveness detection result for the object to be detected being that the detection fails.
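The precheck in claim 7 reduces to a mean test; the band below is an assumed form of the "preset condition", which the claim leaves open.

```python
import numpy as np

def mouth_crop_passes_precheck(mouth_crop, lo=30, hi=220) -> bool:
    """Mean-gray precheck: crops that are nearly all-dark or all-bright
    (e.g. occluded or over-exposed) fail the liveness detection."""
    return lo <= float(np.mean(mouth_crop)) <= hi
```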
  8. The method according to claim 6, wherein determining the reference point of the mouth according to the left mouth-corner keypoint and the right mouth-corner keypoint comprises:
    determining third coordinate information of the left mouth-corner keypoint and fourth coordinate information of the right mouth-corner keypoint;
    determining the reference point of the mouth according to the third coordinate information and the fourth coordinate information.
  9. The method according to claim 1, wherein determining a grayscale feature of the target face local region image comprises:
    normalizing the target face local region image to obtain an image of a preset size;
    drawing a grayscale histogram of the image of the preset size;
    determining a grayscale vector according to the grayscale histogram, and determining the grayscale vector as the grayscale feature of the target face local region image.
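A sketch of claim 9's feature extraction; the 64×64 preset size, the 256-bin histogram, and the L1 normalization are assumptions.

```python
import cv2
import numpy as np

def grayscale_histogram_feature(region, size=(64, 64), bins=256):
    """Normalize the crop to a preset size and use its grayscale
    histogram, flattened to a vector, as the grayscale feature."""
    gray = region if region.ndim == 2 else cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)                         # normalization step
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
    return hist / hist.sum()                              # grayscale vector
```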
  10. The method according to claim 2, wherein performing liveness detection according to the grayscale feature and a pre-trained liveness detection model to obtain a face liveness detection result for the object to be detected comprises:
    inputting the grayscale feature of the eye region image into a first liveness detection model for detection to obtain a first detection result;
    inputting the grayscale feature of the mouth region image into a second liveness detection model for detection to obtain a second detection result;
    if both the first detection result and the second detection result indicate the presence of a living body, determining that the face liveness detection result for the object to be detected is that the detection passes.
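Claim 10 is decision-level fusion: each region gets its own model and both verdicts must agree. The sklearn-style `predict` interface and the 1 = living-body label are assumptions.

```python
def liveness_by_decision_fusion(eye_feat, mouth_feat,
                                eye_model, mouth_model) -> bool:
    """Pass only if both region-specific models report a living body."""
    eye_live = eye_model.predict([eye_feat])[0] == 1        # first detection result
    mouth_live = mouth_model.predict([mouth_feat])[0] == 1  # second detection result
    return eye_live and mouth_live
```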
  11. The method according to claim 2, wherein performing liveness detection according to the grayscale feature and a pre-trained liveness detection model to obtain a face liveness detection result for the object to be detected comprises:
    merging the grayscale feature of the eye region image and the grayscale feature of the mouth region image to obtain a merged grayscale feature;
    inputting the merged grayscale feature into a third liveness detection model for detection to obtain the face liveness detection result for the object to be detected.
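Claim 11 is the feature-level alternative; concatenation as the merge operation is an assumption, since the claim only says the features are "merged".

```python
import numpy as np

def liveness_by_feature_fusion(eye_feat, mouth_feat, fused_model) -> bool:
    """Merge the two grayscale features and classify once with the
    third liveness detection model."""
    merged = np.concatenate([eye_feat, mouth_feat])
    return fused_model.predict([merged])[0] == 1
```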
  12. The method according to claim 1, further comprising:
    acquiring infrared images of a plurality of samples, the samples including positive samples and negative samples, the positive samples being living objects and the negative samples being prosthetic objects;
    performing face local region extraction on the infrared image of each sample to obtain face local region images to be trained;
    determining grayscale features of the face local region images to be trained;
    performing training based on the grayscale features of the face local region images to be trained according to a preset training method to obtain the liveness detection model.
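A training sketch for claim 12. The claim fixes only the data (living positives, prosthetic negatives) and the grayscale features; an RBF-kernel SVM is used below as one plausible "preset training method", consistent with this publication's G06N 20/10 (kernel methods, e.g. SVM) classification, but it remains an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def train_liveness_model(features, labels):
    """features: grayscale feature vectors of face local region images;
    labels: 1 for living-body samples, 0 for prosthetic samples."""
    model = SVC(kernel="rbf")                 # assumed preset training method
    model.fit(np.asarray(features), np.asarray(labels))
    return model
```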
  13. A face liveness detection apparatus, comprising:
    an image sensor, configured to acquire an infrared image of an object to be detected; and
    a processor, configured to perform face local region extraction on the infrared image to obtain a target face local region image; determine a grayscale feature of the target face local region image; and perform detection according to the grayscale feature and a pre-trained liveness detection model to obtain a face liveness detection result for the object to be detected.
  14. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the bus; the memory is configured to store a computer program; and the processor is configured to execute the program stored in the memory to implement the steps of the method according to any one of claims 1 to 12.
  15. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 12.
  16. A face liveness detection system, comprising a data processor and an image sensor supporting infrared image acquisition, wherein:
    the image sensor is configured to acquire an infrared image of an object to be detected and send the acquired infrared image to the data processor; and
    the data processor is configured to receive the infrared image acquired by the image sensor; perform face local region extraction on the infrared image to obtain a target face local region image; determine a grayscale feature of the target face local region image; and perform liveness detection according to the grayscale feature and a pre-trained liveness detection model to obtain a face liveness detection result for the object to be detected.
PCT/CN2021/132720 2020-11-26 2021-11-24 Facial liveness detection method and apparatus, and device WO2022111512A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011349369.5 2020-11-26
CN202011349369.5A CN112329720A (en) 2020-11-26 2020-11-26 Face living body detection method, device and equipment

Publications (1)

Publication Number Publication Date
WO2022111512A1 true WO2022111512A1 (en) 2022-06-02

Family

ID=74308270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/132720 WO2022111512A1 (en) 2020-11-26 2021-11-24 Facial liveness detection method and apparatus, and device

Country Status (2)

Country Link
CN (1) CN112329720A (en)
WO (1) WO2022111512A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329720A (en) * 2020-11-26 2021-02-05 杭州海康威视数字技术股份有限公司 Face living body detection method, device and equipment
CN112883918B (en) * 2021-03-22 2024-03-19 深圳市百富智能新技术有限公司 Face detection method, face detection device, terminal equipment and computer readable storage medium
CN113421317B (en) * 2021-06-10 2023-04-18 浙江大华技术股份有限公司 Method and system for generating image and electronic equipment
CN113657154A (en) * 2021-07-08 2021-11-16 浙江大华技术股份有限公司 Living body detection method, living body detection device, electronic device, and storage medium
CN114399813B (en) * 2021-12-21 2023-09-26 马上消费金融股份有限公司 Face shielding detection method, model training method, device and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3576016A1 (en) * 2018-04-12 2019-12-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Face recognition method and apparatus, and mobile terminal and storage medium
CN108764121A (en) * 2018-05-24 2018-11-06 释码融和(上海)信息科技有限公司 Method, computing device and readable storage medium storing program for executing for detecting live subject
CN111222380A (en) * 2018-11-27 2020-06-02 杭州海康威视数字技术股份有限公司 Living body detection method and device and recognition model training method thereof
CN111582238A (en) * 2020-05-28 2020-08-25 上海依图网络科技有限公司 Living body detection method and device applied to face shielding scene
CN112329720A (en) * 2020-11-26 2021-02-05 杭州海康威视数字技术股份有限公司 Face living body detection method, device and equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883472A (en) * 2023-09-08 2023-10-13 山东德亿鑫信息科技有限公司 Face nursing system based on face three-dimensional image registration
CN116883472B (en) * 2023-09-08 2023-11-14 山东德亿鑫信息科技有限公司 Face nursing system based on face three-dimensional image registration
CN117268559A (en) * 2023-10-25 2023-12-22 武汉星巡智能科技有限公司 Multi-mode infant abnormal body temperature detection method, device, equipment and medium
CN117268559B (en) * 2023-10-25 2024-05-07 武汉星巡智能科技有限公司 Multi-mode infant abnormal body temperature detection method, device, equipment and medium

Also Published As

Publication number Publication date
CN112329720A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
WO2022111512A1 (en) Facial liveness detection method and apparatus, and device
EP3321850B1 (en) Method and apparatus with iris region extraction
TW202006602A (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
WO2019232866A1 (en) Human eye model training method, human eye recognition method, apparatus, device and medium
Radman et al. Automated segmentation of iris images acquired in an unconstrained environment using HOG-SVM and GrowCut
WO2019232862A1 (en) Mouth model training method and apparatus, mouth recognition method and apparatus, device, and medium
US9842247B2 (en) Eye location method and device
US9881204B2 (en) Method for determining authenticity of a three-dimensional object
TW202006595A (en) Face recognition method and terminal device
WO2016145940A1 (en) Face authentication method and device
Wang et al. Toward accurate localization and high recognition performance for noisy iris images
TW201807635A (en) User identity verification method, device and system
CN105279492B (en) The method and apparatus of iris recognition
CN106663157A (en) User authentication method, device for executing same, and recording medium for storing same
TW200910223A (en) Image processing apparatus and image processing method
TW201928741A (en) Biometric authentication, identification and detection method and device for mobile terminal and equipment
CN111222380B (en) Living body detection method and device and recognition model training method thereof
CN109389018B (en) Face angle recognition method, device and equipment
WO2023185234A1 (en) Image processing method and apparatus, and electronic device and storage medium
JPWO2013122009A1 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
CN110516661B (en) Beautiful pupil detection method and device applied to iris recognition
Ahmadi et al. Iris recognition system based on canny and LoG edge detection methods
US20170309040A1 (en) Method and device for positioning human eyes
WO2020244076A1 (en) Face recognition method and apparatus, and electronic device and storage medium
TWI425429B (en) Image texture extraction method, image identification method and image identification apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21897013
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21897013
    Country of ref document: EP
    Kind code of ref document: A1