WO2021042375A1 - Face spoofing detection method, chip, and electronic device - Google Patents

Face spoofing detection method, chip, and electronic device Download PDF

Info

Publication number
WO2021042375A1
WO2021042375A1 (PCT/CN2019/104730)
Authority
WO
WIPO (PCT)
Prior art keywords
images
face
human face
frames
group
Prior art date
Application number
PCT/CN2019/104730
Other languages
French (fr)
Chinese (zh)
Inventor
吕萌 (Lyu Meng)
潘雷雷 (Pan Leilei)
Original Assignee
深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司 (Shenzhen Goodix Technology Co., Ltd.)
Priority to PCT/CN2019/104730 priority Critical patent/WO2021042375A1/en
Priority to CN201980001922.5A priority patent/CN112997185A/en
Publication of WO2021042375A1 publication Critical patent/WO2021042375A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • This application relates to the field of image recognition technology, and in particular to a face liveness detection method, chip, and electronic device.
  • Liveness detection is an important function in face recognition projects and products. Its main purpose is to distinguish whether the detected object is the face of a real person or a face photo, mask, or 3D-printed face model of that person.
  • Current face liveness detection is usually performed on captured face photos, or requires the detected subject to actively make response actions to cooperate with the detection.
  • The purpose of some embodiments of the present application is to provide a face liveness detection method, chip, and electronic device that can perform face liveness detection accurately and conveniently.
  • An embodiment of the present application provides a face liveness detection method, including: acquiring at least two frames of three-dimensional images of a human face; determining depth change information of the face surface from the three-dimensional images; and determining, according to the depth change information, whether the face is a real human face.
  • An embodiment of the present application also provides a chip configured to execute the above face liveness detection method.
  • An embodiment of the present application also provides an electronic device, including the above-mentioned chip.
  • Compared with the prior art, the embodiments of the present application determine the depth change information of the face surface from three-dimensional images of the face and judge, according to that information, whether the face is a real face.
  • On the one hand, the depth information of the face surface recorded in a three-dimensional image is not affected by illumination or posture. On the other hand, even when the detected subject shows no expression or movement, subtle changes in facial muscles and tissue still cause the depth of the face surface to change, so the subject does not need to actively perform actions to cooperate with the detection. The technical solution of the present application can therefore perform face liveness detection accurately and conveniently, and the user experience is better.
  • Moreover, the solution of this embodiment makes its judgment from the acquired three-dimensional images; compared with prior-art approaches that require comparison against a preset image template, it depends less on pre-stored information.
  • Determining the depth change information of the face surface from the three-dimensional images includes: selecting at least one group of images from the three-dimensional images, where each group includes two frames of the three-dimensional images; and determining, from the two frames in each group, the depth change information corresponding to that group.
  • In this embodiment, two frames of three-dimensional images form a group, and the depth change information corresponding to each group is determined.
  • The number of frames of the three-dimensional images is denoted n, where n is an integer greater than or equal to 3. Selecting at least one group of images is then specifically: selecting any two of the n frames as a group, so that the number of selected image groups is C(n, 2) = n(n - 1)/2. In this embodiment, C(n, 2) groups of images are determined from the n frames. In this way, as many image groups as possible are obtained from the currently acquired 3D images, so that the depth of each frame can be fully compared, which improves the accuracy of the detection result.
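The grouping rule above can be sketched as follows (the frame identifiers and function name are illustrative, not from the patent): enumerating all unordered pairs of n frames yields exactly C(n, 2) groups.

```python
from itertools import combinations

def make_image_groups(frames):
    """Pair every two distinct frames: C(n, 2) = n * (n - 1) / 2 groups."""
    return list(combinations(frames, 2))

# Three frames yield C(3, 2) = 3 groups.
groups = make_image_groups(["photo1", "photo2", "photo3"])
```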
  • The depth change information corresponding to each group of images includes one or a combination of the following: a feature value obtained from the depth changes of the corresponding pixels on the two frames of three-dimensional images in the group, and the depth change trend of the corresponding pixels on the two frames.
  • This embodiment provides the specific types of depth change information.
  • The depth change information corresponding to each group of images further includes the depth change trend of the corresponding pixels on the two frames of three-dimensional images. Before determining that the face is a real face, the method further includes: judging whether there exists a group of images whose depth change trend matches a preset depth change trend characterizing face changes caused by artificially applied force; if no such group exists, the face is determined to be a real face; if one exists, the face is determined to be a non-real face.
  • This embodiment provides a way to recognize face changes caused by artificially applied force, which avoids, as far as possible, misjudgments caused by artificially forcing the surface depth of a non-real face to change.
  • After acquiring the at least two frames of three-dimensional images, the method further includes: performing face alignment on the three-dimensional images. Determining the depth change information of the face surface is then specifically: determining the depth change information from the face-aligned three-dimensional images.
  • Performing face alignment on the three-dimensional images improves the accuracy of the determined depth change information of the face surface, and thus the accuracy of face liveness detection.
  • Fig. 1 is a flowchart of the face liveness detection method in the first embodiment of the present application;
  • Fig. 2 is a flowchart of the step of determining the depth change information of the face surface from each three-dimensional image in the first embodiment of the present application;
  • Fig. 3 is a flowchart of the step of determining whether a face is a real face according to the depth change information in the first embodiment of the present application;
  • Fig. 4 is a flowchart of the face liveness detection method in the second embodiment of the present application;
  • Fig. 5 is a flowchart of one example of the face liveness detection method in the third embodiment of the present application;
  • Fig. 6 is a flowchart of another example of the face liveness detection method in the third embodiment of the present application;
  • Fig. 7 is a block diagram of the electronic device in the fifth embodiment of the present application.
  • The first embodiment of the present application relates to a face liveness detection method, which can be applied to any scenario that requires face recognition for identity verification, such as access control systems, payment systems, and mobile phone unlocking systems.
  • FIG. 1 shows a flowchart of a method for detecting a human face according to a first embodiment of the application, and the details are as follows.
  • Step 101 Acquire at least two frames of three-dimensional images of a human face.
  • Step 102 Determine the depth change information of the human face surface according to each three-dimensional image.
  • Step 103 Determine whether the human face is a real human face according to the depth change information.
  • The at least two frames of three-dimensional images can be acquired by three-dimensional imaging equipment such as a dual camera, structured light, or a TOF (time-of-flight) sensor; the specific form of the three-dimensional image is not limited in this embodiment, and it may be, for example, a point cloud, a depth map, or a mesh.
  • The muscles and tissues of different parts of the human face change at different moments. These changes are very obvious when a person shows expressions such as joy, anger, or sadness; even when the person is in a quiet state, the muscles and tissues of different parts of the face still undergo subtle changes.
  • Each pixel in a three-dimensional image has a three-dimensional coordinate (x, y, z), where z represents the depth value of the pixel. Even a slight change somewhere on the face causes the depth values of the corresponding pixels in the multi-frame three-dimensional images to change accordingly, whereas for a non-real face such as a photo, mask, or face model, the surface depth does not change once it is formed. Therefore, whether the face is a real face can be determined from whether the depth of the face surface changes.
  • the real human face in this embodiment refers to a living human face.
  • step 102 includes the following sub-steps.
  • Sub-step 1021: at least one group of images is selected from the three-dimensional images, where each group includes two frames of three-dimensional images.
  • Sub-step 1022: the depth change information corresponding to each group of images is determined from the two frames of three-dimensional images in that group.
  • When only two frames are acquired, the group selected in sub-step 1021 consists of those two frames.
  • When the number of frames n of the acquired three-dimensional images is greater than or equal to 3, any two of the n frames can be taken as a group, and the number of selected image groups can be C(n, 2); that is, if two frames are selected from the n frames without repetition as one combination (i.e., one group of images), the total number of combinations is C(n, 2) = n(n - 1)/2. For example, suppose three frames photo1, photo2, and photo3 are obtained; when any two frames form a group, the number of groups is C(3, 2) = 3, namely (photo1, photo2), (photo2, photo3), and (photo1, photo3).
  • Alternatively, two adjacent frames of three-dimensional images can be used as a group; the three frames above can then be divided into two groups, (photo1, photo2) and (photo2, photo3). It is also possible to select just two of the three frames as a single group.
  • the depth change information corresponding to each group of images is calculated according to the two frames of three-dimensional images in each group of images.
  • The depth change information corresponding to each group of images includes a feature value obtained from the depth changes of the corresponding pixels on the two frames of three-dimensional images in the group; the feature value may be, for example, the mean or variance of the depth changes of the corresponding pixels on the two frames.
  • The three-dimensional coordinates of each pixel in each frame are represented by x, y, and z, where z is the depth value. Corresponding pixels in two three-dimensional images are two pixels whose x and y values are the same; the depth change of corresponding pixels on two frames is the difference between the z values of the two pixels with the same x and y.
  • For example, if the pixel P1(x1, y1, z1^(1)) on photo1 and the pixel P1'(x1, y1, z1^(2)) on photo2 are corresponding pixels, the depth change between P1 and P1' is z1^(1) - z1^(2).
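Assuming the three-dimensional images have already been rasterized into aligned depth maps (an assumption made here for illustration; the patent also allows point clouds and meshes), the per-pixel depth change reduces to an element-wise difference:

```python
import numpy as np

def depth_change(depth1, depth2):
    """Per-pixel depth change between two aligned depth maps: the pixel at
    (i, j) has the same (x, y) in both maps, so the change is simply the
    difference of the two z values."""
    return depth1.astype(float) - depth2.astype(float)

photo1 = np.array([[1.0, 2.0], [3.0, 4.0]])
photo2 = np.array([[1.0, 2.5], [2.0, 4.0]])
diff = depth_change(photo1, photo2)  # diff[0, 1] == -0.5
```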
  • Specifically, the depth change values of the corresponding pixels in the two frames of three-dimensional images are calculated, and the mean or variance of these values is then taken as the feature value. However, it is not limited to this: in other examples, multiple reference points can be selected in advance, for example pixels in the nose, eye, and mouth areas, and the feature value is calculated from the depth changes of these reference points.
  • Each group of images determined in sub-step 1021 thus corresponds to one depth-change feature value.
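A minimal sketch of the feature-value computation for one image group (the choice of mean absolute change, and the optional reference-point list, are illustrative assumptions consistent with the examples above):

```python
import numpy as np

def depth_change_feature(depth1, depth2, ref_points=None):
    """Feature value for one image group: the mean absolute depth change
    over all pixels, or, if ref_points such as nose/eye/mouth pixel
    coordinates are given, over those reference points only."""
    diff = np.abs(depth1.astype(float) - depth2.astype(float))
    if ref_points is not None:
        rows, cols = zip(*ref_points)
        diff = diff[list(rows), list(cols)]
    return float(diff.mean())
```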
  • step 103 includes the following sub-steps.
  • Sub-step 1031: judge whether there is a group of images whose feature value is greater than a preset threshold; if so, go to sub-step 1032; if not, go to sub-step 1033.
  • In sub-step 1032, it is determined that the face is a real face; in sub-step 1033, it is determined that the face is a non-real face.
  • The preset threshold can be obtained from advance detection of real faces: the depth-change feature value of a real face in a quiet state is measured, and the threshold is set according to the detected value, for example slightly smaller than it.
  • During detection, the depth-change feature value of each group of images is compared with the preset threshold in turn. If the feature value of the group currently being compared is greater than the threshold, there is at least one group whose feature value exceeds the threshold, and the face is determined to be a real face; if the feature values of all groups are less than or equal to the threshold, the face is determined to be a non-real face.
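The decision rule of sub-steps 1031-1033 then reduces to a single comparison over the per-group feature values (the function name and threshold are placeholders, not from the patent):

```python
def is_real_face(feature_values, threshold):
    """Live if at least one group's depth-change feature value exceeds
    the preset threshold; otherwise non-real (a static photo or mask
    yields near-zero depth change in every group)."""
    return any(v > threshold for v in feature_values)
```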
  • step 103 can be implemented by, for example, a feature extraction algorithm, a machine learning algorithm, or a deep learning algorithm, which is not limited in this embodiment.
  • This embodiment determines the depth change information of the face surface from three-dimensional images of the face and judges, according to that information, whether the face is a real face.
  • On the one hand, the depth information of the face surface recorded in a three-dimensional image is not affected by illumination or posture. On the other hand, even when the detected subject shows no expression or movement, subtle changes in facial muscles and tissue still cause the depth of the face surface to change, so the subject does not need to actively perform actions to cooperate with the detection. The technical solution of the present application can therefore perform face liveness detection accurately and conveniently, and the user experience is better.
  • Moreover, the solution of this embodiment makes its judgment from the acquired three-dimensional images; compared with prior-art approaches that require comparison against a preset image template, it depends less on pre-stored information.
  • the second embodiment of the present application relates to a method for detecting a living body of a human face, and the specific process is shown in FIG. 4.
  • Step 201 Obtain at least two frames of three-dimensional images of a human face. This step is similar to step 101 in the first embodiment, and will not be repeated here.
  • Step 202 Determine the depth change information of the human face surface according to each three-dimensional image. This step is similar to step 102 in the first embodiment, and will not be repeated here.
  • Step 203 Determine whether the face is a real face according to the depth change information; Step 203 includes the following sub-steps:
  • Sub-step 2031 judging whether there is a group of image feature values greater than a preset threshold; if yes, go to sub-step 2032; if not, go to sub-step 2033. This step is similar to the sub-step 1031 in the first embodiment, and will not be repeated here.
  • In sub-step 2032, it is judged whether the depth change trend of the group of images matches a preset depth change trend characterizing face changes caused by artificially applied force; if so, go to sub-step 2033; if not, go to sub-step 2034.
  • sub-step 2033 it is determined that the human face is an unreal human face. This step is similar to the sub-step 1033 in the first embodiment, and will not be repeated here.
  • sub-step 2034 it is determined that the human face is a real human face. This step is similar to the sub-step 1032 in the first embodiment, and will not be repeated here.
  • In order to pass off a non-real face as a real one, a malicious person may artificially apply force to the non-real face to deform it, such as by bending, squeezing, or poking it; here, non-real faces include, for example, face photos, masks, and face models.
  • The depth change of the face surface caused by artificially applied force has obvious characteristics. For example, when a non-real face is poked, the depth change on its surface spreads outward around the center of the poked position, with the largest change at the center and smaller changes farther from it; when a non-real face is squeezed, the depth change may take the form of wavy lines.
  • Therefore, in this embodiment, the depth change information corresponding to each group of images also includes the depth change trend of each pixel on the two three-dimensional images of the group; that is, the depth change information includes both the feature value of the depth changes of the pixels on the two frames and the depth change trend of those pixels.
  • In other examples, the depth change information may also include only the depth change trend of the corresponding pixels on the two frames of three-dimensional images.
  • Designers can simulate various man-made force actions on non-real faces in advance, and set the depth change trend that characterizes the changes in the face caused by man-made force based on the actual detected depth changes.
  • When the artificial force action is poking, that is, when a non-real face is poked, the corresponding depth change trend is a spread from the center to the surroundings, with the largest depth change at the center and smaller changes farther from it; when the artificial force action is squeezing, that is, when the non-real face is squeezed, the corresponding depth change trend takes a wave-shaped form.
  • For different types of non-real faces, the possible depth change trends differ. For example, when poking a face photo versus poking a face model, the diffusion range or the speed at which the depth change spreads from the center may differ, which can be determined from actual detection data. The trends can therefore also be classified according to the detection data, so that the method not only identifies non-real faces but may also identify which kind of non-real face is present.
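One crude way to test for the "poke" trend described above — the largest depth change at a single center, falling off with distance — is to compare mean change magnitudes over concentric rings. This is an illustrative stand-in; the patent does not specify the matching algorithm:

```python
import numpy as np

def looks_like_poke(diff, n_rings=3):
    """True if the depth-change magnitudes fall off monotonically with
    distance from the location of the largest change, i.e. the change
    spreads outward from a poke-like center."""
    mag = np.abs(diff)
    cy, cx = np.unravel_index(np.argmax(mag), mag.shape)
    ys, xs = np.indices(mag.shape)
    r = np.hypot(ys - cy, xs - cx)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    ring_means = [mag[(r >= lo) & (r < hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])]
    return all(a >= b for a, b in zip(ring_means, ring_means[1:]))
```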
  • This embodiment thus provides a method for identifying face changes caused by artificially applied force, which avoids, as far as possible, misjudgments caused by artificially forcing the surface depth of a non-real face to change.
  • FIG. 5 is a flowchart of an example in this embodiment, and the details are as follows.
  • Step 301 Obtain at least two frames of three-dimensional images of a human face. This step is similar to step 101 in the first embodiment, and will not be repeated here.
  • Step 302 Perform face alignment on each three-dimensional image.
  • Step 303 Determine the depth change information of the face surface according to the three-dimensional images after the face alignment. This step is similar to step 102 in the first embodiment, and will not be repeated here.
  • Step 304 Determine whether the human face is a real human face according to the depth change information. This step is similar to step 103 in the first embodiment, and will not be repeated here.
  • Compared with the first embodiment, step 302 is newly added; that is, after the three-dimensional images of the face are obtained, face alignment is first performed on them.
  • the detected object may have a posture change, so the angle of the face presented by each three-dimensional image may be different.
  • Specifically, the facial features (e.g., the eyes, nose, and mouth) can be used as references to align the three-dimensional images, so that the angle of the face presented in each image is corrected to the same angle.
  • For example, an iterative closest point (ICP) algorithm can be used to perform the face alignment.
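ICP alternates nearest-neighbour matching with a closed-form rigid-transform solve (the Kabsch/Procrustes step). The sketch below shows only that solve step, under the assumption that point correspondences are already known:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t such that dst ~= src @ R.T + t,
    for two (N, 3) point sets with known correspondences."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

A full face-alignment pipeline would wrap this in the ICP loop, re-estimating correspondences with a nearest-neighbour search on each iteration.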
  • face alignment is performed on each three-dimensional image, and the depth change information of the face surface is determined according to each three-dimensional image after face alignment, which can improve the accuracy of the determined depth change information of the face surface, thereby improving Accuracy of live face detection.
  • Fig. 6 shows a flowchart of another example in this embodiment, which is specifically as follows.
  • Step 301 Obtain at least two frames of three-dimensional images of a human face.
  • Step 301-1 Perform image preprocessing on each three-dimensional image.
  • Step 302 Perform face alignment on each three-dimensional image after image preprocessing.
  • Step 303 Determine the depth change information of the face surface according to the three-dimensional images after the face alignment.
  • Step 304 Determine whether the human face is a real human face according to the depth change information.
  • Compared with the previous example, this example adds step 301-1; that is, after the three-dimensional images of the face are acquired, image preprocessing is first performed on each of them, and face alignment is then performed on the preprocessed three-dimensional images.
  • The image preprocessing includes, for example, deburring, hole filling, and smoothing filtering.
  • Performing image preprocessing on the three-dimensional images can improve image quality, thereby improving the accuracy of face recognition.
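A toy version of the hole-filling and smoothing steps on a depth map (the 3x3 neighbourhood and the mean filter are illustrative choices, not mandated by the text):

```python
import numpy as np

def preprocess_depth(depth, invalid=0.0):
    """Fill invalid (hole) pixels with the mean of their valid 3x3
    neighbours, then apply a 3x3 mean smoothing filter; border pixels
    use whatever neighbours exist."""
    d = depth.astype(float).copy()
    h, w = d.shape
    for i in range(h):                      # hole filling
        for j in range(w):
            if d[i, j] == invalid:
                patch = d[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
                valid = patch[patch != invalid]
                if valid.size:
                    d[i, j] = valid.mean()
    out = np.empty_like(d)                  # 3x3 mean smoothing
    for i in range(h):
        for j in range(w):
            out[i, j] = d[max(0, i - 1):i + 2, max(0, j - 1):j + 2].mean()
    return out
```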
  • the fourth embodiment of the present application relates to a chip, which is used to execute the above-mentioned face living detection method.
  • the fifth embodiment of the present application relates to an electronic device.
  • the electronic device includes the aforementioned chip 10; the electronic device may also include a three-dimensional imaging device 20 and a memory 30 connected to the chip.
  • the chip 10 obtains a three-dimensional image of a human face through a three-dimensional imaging device 20; the memory 30 is used to store instructions that can be executed by the chip, and when the instructions are executed by the chip, the chip can perform the above-mentioned method for detecting live human faces.
  • the memory 30 can also be used to store the acquired three-dimensional image and various data generated by the chip 10 executing the above-mentioned face living detection method.
  • the electronic device may be, for example, a face recognition device in an access control system, a payment system, or a mobile phone unlocking system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

A face spoofing detection method, a chip, and an electronic device. The face spoofing detection method comprises: obtaining at least two three-dimensional image frames of a human face (101); determining depth change information of the surface of the human face according to each three-dimensional image (102); and determining, according to the depth change information, whether the human face is a real human face (103). The method can implement face spoofing detection accurately and conveniently.

Description

人脸活体检测方法、芯片及电子设备Human face live detection method, chip and electronic equipment 技术领域Technical field
本申请涉及图像识别技术领域,特别涉及一种人脸活体检测方法、芯片及电子设备。This application relates to the field of image recognition technology, and in particular to a method, chip, and electronic device for detecting a living body of a human face.
背景技术Background technique
活体检测是人脸识别相关工程或产品里的一项重要功能,主要为了区分被检测对象是真实人的人脸,还是被检测对象的人脸照片、面具或者三维打印的人脸模型。Living body detection is an important function in face recognition related projects or products, mainly to distinguish whether the detected object is the face of a real person, or the face photo, mask, or 3D printed face model of the detected object.
目前的人脸活体检测通常采用基于拍摄的人脸照片进行活体检测、通过被检测对象主动做出响应动作来配合进行活体检测等。Current face live detection usually uses live body detection based on captured face photos, and cooperates with live body detection through active response actions of the detected object.
发明内容Summary of the invention
本申请部分实施例的目的在于提供一种人脸活体检测方法、芯片及电子设备,可以准确、便捷地实现人脸活体检测。The purpose of some embodiments of the present application is to provide a method, chip, and electronic device for detecting a human face, which can accurately and conveniently realize the detection of a human face.
本申请实施例提供了一种人脸活体检测方法,包括:获取人脸的至少两帧三维图像;根据各所述三维图像确定所述人脸表面的深度变化信息;根据所述深度变化信息确定所述人脸是否为真实人脸。An embodiment of the present application provides a method for detecting a living body of a human face, including: acquiring at least two frames of three-dimensional images of a human face; determining the depth change information of the human face surface according to each of the three-dimensional images; determining according to the depth change information Whether the human face is a real human face.
本申请实施例还提供了一种芯片,用于执行上述人脸活体检测方法。The embodiment of the present application also provides a chip, which is used to execute the above-mentioned method for detecting a living body of a human face.
本申请实施例还提供了一种电子设备,包括上述芯片。An embodiment of the present application also provides an electronic device, including the above-mentioned chip.
本申请实施例现对于现有技术而言,根据人脸的三维图像确定人脸表面的深度变化信息,并根据深度变化信息判断是否为真实人脸。一方面,人脸的三维图像记录的人脸表面的深度信息不受光照和姿态影响;另一方面,即使被检测对象在无表情动作的情况下,人脸肌肉和组织的细微变化也会引起人脸表面的深度变化,故无需被检测者主动做出动作以配合检测;因此本申请的技术方案可以准确、便捷地进行人脸活体检测,用户体验较好。并且,本实施的方案中基于采集的三维图像来判断,相对于现有技术中需要结合预设的图像模板进行比对的判断方式而言,对预存信息的依赖性较低。For the prior art, the embodiments of the present application determine the depth change information of the human face surface according to the three-dimensional image of the human face, and determine whether it is a real human face according to the depth change information. On the one hand, the depth information of the face surface recorded by the three-dimensional image of the face is not affected by light and posture; on the other hand, even if the detected object is in the case of expressionless movements, subtle changes in facial muscles and tissues will also cause The depth of the face surface changes, so there is no need for the subject to take active actions to cooperate with the detection; therefore, the technical solution of the present application can accurately and conveniently perform live face detection, and the user experience is better. Moreover, the solution of this embodiment is based on the acquired three-dimensional image to determine, compared with the prior art that requires comparison with a preset image template, the dependence on pre-stored information is lower.
例如,所述根据各所述三维图像确定所述人脸表面的深度变化信息,包括:从各所述三维图像中选定至少一组图像;其中,每组图像包括两帧所述三维图像;根据所述每组图像中的两帧所述三维图像,确定所述每组图像对应的深度变化信息。本实施例中,以两帧三维图像为一组,确定出每组图像对应的深度变化信息。For example, the determining the depth change information of the human face surface according to each of the three-dimensional images includes: selecting at least one group of images from each of the three-dimensional images; wherein, each group of images includes two frames of the three-dimensional images. Image; according to the two frames of the three-dimensional images in each group of images, determine the depth change information corresponding to each group of images. In this embodiment, two frames of three-dimensional images are used as a group, and the depth change information corresponding to each group of images is determined.
例如,所述三维图像的帧数记作n,且n为大于或等于3的整数;所述从各所述三维图像中选定至少一组图像,具体为:从n帧所述三维图像中选定任意两幅所述三维图像作为一组图像,且选定的图像组数是
Figure PCTCN2019104730-appb-000001
本实施例中,根据n帧三维图像确定出
Figure PCTCN2019104730-appb-000002
组图像,采用这种方式可以在当前获取的三维图像的情况下得到尽可能多的图像组数,以充分地对各帧三维图像进行深度对比,从而可以提高检测结果的准确性。
For example, the number of frames of the three-dimensional image is denoted as n, and n is an integer greater than or equal to 3; the selection of at least one group of images from each of the three-dimensional images is specifically: from n frames of the three-dimensional images Select any two of the three-dimensional images as a group of images, and the number of selected image groups is
Figure PCTCN2019104730-appb-000001
In this embodiment, it is determined according to n frames of three-dimensional images
Figure PCTCN2019104730-appb-000002
Group images. In this way, as many image groups as possible can be obtained in the case of currently acquired 3D images, so as to fully compare the depth of each frame of 3D images, thereby improving the accuracy of the detection results.
例如,所述每组图像对应的深度变化信息包括以下情况的其中之一或组合:根据每组图像中两帧所述三维图像上对应的各像素点的深度变化得到的特 征值、两帧所述三维图像上对应的各像素点的深度变化趋势。本实施例提供了深度变化信息的具体类型。For example, the depth change information corresponding to each group of images includes one or a combination of the following situations: feature values obtained according to the depth changes of the corresponding pixels on the two frames of the three-dimensional images in each group of images, two Frame the depth change trend of each pixel point corresponding to the three-dimensional image. This embodiment provides specific types of depth change information.
例如,所述每组图像对应的深度变化信息包括根据每组图像中两帧所述三维图像上对应的各像素点的深度变化得到的特征值;所述根据所述深度变化信息确定所述人脸是否为真实人脸,包括:判断是否存在一组图像的特征值大于预设阈值;若存在,确定所述人脸为真实人脸;若不存在,确定所述人脸为非真实人脸。本实施例提供了根据深度变化信息判断是否为真实人脸的一种具体方式。For example, the depth change information corresponding to each group of images includes feature values obtained according to the depth changes of the corresponding pixels on the two frames of the three-dimensional images in each group of images; the determining the depth change information according to the depth change information Declaring whether a human face is a real human face includes: judging whether there is a set of image feature values greater than a preset threshold; if it exists, determining that the human face is a real human face; if it does not exist, determining that the human face is non-existent Real face. This embodiment provides a specific way of judging whether it is a real human face according to the depth change information.
For example, the depth change information corresponding to each group of images further includes the depth change trend of the corresponding pixels in the two frames of three-dimensional images. Before determining that the face is a real human face, the method further includes: judging whether there is a group of images whose depth change trend matches a preset depth change trend that characterizes face changes caused by externally applied force; if there is not, determining that the face is a real human face; if there is, determining that the face is not a real human face. This embodiment provides a way to recognize face changes caused by applied force, which avoids, as far as possible, misjudgment caused by such force changing the surface depth of a non-real face.
For example, after acquiring the at least two frames of three-dimensional images of the human face, the method further includes: performing face alignment on each of the three-dimensional images. Determining the depth change information of the face surface according to the three-dimensional images is specifically: determining the depth change information of the face surface according to the face-aligned three-dimensional images. In this embodiment, performing face alignment on the three-dimensional images improves the accuracy of the determined depth change information of the face surface, thereby improving the accuracy of face liveness detection.
Description of the Drawings
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements. Unless otherwise stated, the figures in the drawings are not drawn to scale.
Fig. 1 is a flowchart of the face liveness detection method according to the first embodiment of the present application;
Fig. 2 is a flowchart of the step of determining the depth change information of the face surface according to the three-dimensional images in the first embodiment of the present application;
Fig. 3 is a flowchart of the step of determining whether the face is a real human face according to the depth change information in the first embodiment of the present application;
Fig. 4 is a flowchart of the face liveness detection method according to the second embodiment of the present application;
Fig. 5 is a flowchart of one example of the face liveness detection method according to the third embodiment of the present application;
Fig. 6 is a flowchart of another example of the face liveness detection method according to the third embodiment of the present application;
Fig. 7 is a block diagram of the electronic device according to the fifth embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it. The division into embodiments below is for convenience of description and should not limit the specific implementation of the present invention; the embodiments may be combined with and refer to each other where they do not contradict.
The inventor found that the prior art has at least the following problems: face photos are easily affected by ambient light and by the posture of the detected subject at the time of capture, which readily leads to misjudgment; and requiring the detected subject to actively perform response actions needs the subject's cooperation, which is cumbersome and gives a poor user experience. On this basis, the inventor proposed the technical solution of this application.
The first embodiment of the present application relates to a face liveness detection method, which can be applied to any scenario that uses face recognition for identity verification, such as access control systems, payment systems, and mobile phone unlocking systems.
Fig. 1 shows the flowchart of the face liveness detection method of the first embodiment of the application, as follows.
Step 101: Acquire at least two frames of three-dimensional images of a human face.
Step 102: Determine the depth change information of the face surface according to the three-dimensional images.
Step 103: Determine whether the face is a real human face according to the depth change information.
In step 101, at least two frames of three-dimensional images can be acquired by a three-dimensional imaging device such as a dual camera, structured light, or a TOF (time-of-flight) sensor. This embodiment places no restriction on the specific form of the three-dimensional image; it may be, for example, a point cloud, a depth map, or a mesh.
The muscles and tissues of different parts of a human face change constantly over time; the changes are very obvious when a person shows expressions such as joy, anger, or sadness, and even in a resting state the muscles and tissues of different parts of the face undergo subtle changes. Each pixel in a three-dimensional image has a three-dimensional coordinate (x, y, z), where z is the depth value of the pixel. Even a subtle change somewhere on the face causes the depth values of the pixels at that place to change accordingly across the multiple frames of three-dimensional images. In contrast, once a non-real face such as a face photo, mask, or face model has been made, its surface depth does not change. Therefore, whether the face is a real human face can be determined based on whether the depth of the face surface changes. The real human face in this embodiment refers to a live human face.
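As an illustration of this per-pixel depth comparison, the sketch below (with hypothetical array and variable names) treats each frame as a depth map and subtracts the z values of corresponding pixels; a rigid fake face yields zero change everywhere:

```python
import numpy as np

def depth_changes(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Depth change of corresponding pixels: for each (x, y) position,
    the difference of the z (depth) values between the two frames."""
    if frame_a.shape != frame_b.shape:
        raise ValueError("frames must have the same resolution")
    return frame_a - frame_b

# A rigid fake face: identical depth in both frames, so zero change everywhere.
fake_1 = np.full((4, 4), 50.0)
fake_2 = np.full((4, 4), 50.0)
print(np.abs(depth_changes(fake_1, fake_2)).max())  # 0.0
```

A live face would instead produce small nonzero differences scattered over the map, which the later steps summarize into a feature value.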
In one example, as shown in Fig. 2, step 102 includes the following sub-steps.
Sub-step 1021: Select at least one group of images from the three-dimensional images, where each group of images includes two frames of the three-dimensional images.
Sub-step 1022: Determine the depth change information corresponding to each group of images according to the two frames of three-dimensional images in the group.
Specifically, when the number of acquired frames is two, the group of images selected in sub-step 1021 includes those two frames. When the number of acquired frames n is greater than or equal to 3, any two of the frames can be taken as a group, so the number of selected image groups can be C(n, 2) = n(n-1)/2; that is, selecting 2 of the n frames without repetition as one combination (one group of images) yields a total of C(n, 2) combinations.
For example, suppose three frames photo1, photo2, and photo3 are acquired. Taking any two frames as a group, the number of possible groups is C(3, 2) = 3, namely photo1 and photo2, photo2 and photo3, and photo1 and photo3. However, this embodiment does not restrict how the grouping is done. In other examples, two adjacent frames can be taken as a group, which in this case gives two groups: photo1 and photo2, and photo2 and photo3; or only two of the three frames may be selected as a single group.
In this embodiment, C(n, 2) = n(n-1)/2 groups of images are determined from the n frames of three-dimensional images. In this way, as many image groups as possible are obtained from the currently acquired three-dimensional images, so that the depth of each frame can be fully compared, thereby improving the accuracy of the detection result.
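The pair selection described above can be sketched with Python's `itertools.combinations`, which enumerates exactly the C(n, 2) unordered pairs (the frame names are illustrative):

```python
from itertools import combinations

def image_groups(frames):
    """All unordered pairs of frames: C(n, 2) = n*(n-1)/2 groups."""
    return list(combinations(frames, 2))

groups = image_groups(["photo1", "photo2", "photo3"])
print(len(groups))  # 3, i.e. C(3, 2) = 3*2/2
print(groups)       # the same three groups as in the text, in lexicographic order
```

For n = 5 frames this yields 10 groups, matching n(n-1)/2.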
After the groups of images are determined, the depth change information corresponding to each group is calculated from the two frames of three-dimensional images in the group. The depth change information corresponding to each group includes a feature value obtained according to the depth changes of the corresponding pixels in the two frames; the feature value may be, for example, the average of the depth changes of the corresponding pixels, or the minimum variance of those depth changes. Specifically, the three-dimensional coordinates of each pixel in each frame are denoted x, y, z, where z is the depth value of the pixel. Corresponding pixels in two frames are two pixels with the same x and y values; the depth change of corresponding pixels is the difference between the z values of the two pixels with the same x and y values.
For example, if pixel P1 (x1, y1, z1-1) in photo1 and pixel P1' (x1, y1, z1-2) in photo2 are corresponding pixels, the depth change between P1 and P1' is z1-1 - z1-2, where z1-1 and z1-2 denote the depth value of the pixel at (x1, y1) in photo1 and photo2, respectively.
In this embodiment, the depth change value of every pair of corresponding pixels in the two frames is computed, and then their average or minimum variance is calculated. This is not limiting; in other examples, several reference points can be selected in advance, for example the pixels of the nose, eye, and mouth regions, and the feature value of the depth changes of those corresponding reference points is calculated.
Each group of images determined in sub-step 1021 thus corresponds to one depth change feature value.
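The feature value described above, for instance the mean absolute per-pixel depth change of one group, can be sketched as follows (the function name and sample values are illustrative; the variance-based variant would substitute a variance over the same differences):

```python
import numpy as np

def group_feature_value(frame_a, frame_b) -> float:
    """Mean absolute depth change over corresponding pixels of one group."""
    return float(np.mean(np.abs(np.asarray(frame_a) - np.asarray(frame_b))))

# Two frames of a (tiny, synthetic) live face: depth drifts slightly per pixel.
live_1 = np.array([[50.0, 51.0], [52.0, 50.0]])
live_2 = np.array([[50.2, 51.4], [51.8, 50.1]])
print(round(group_feature_value(live_1, live_2), 3))  # 0.225
```

For a rigid surface the same function returns 0, which the thresholding step below then classifies as a non-real face.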
In one example, as shown in Fig. 3, step 103 includes the following sub-steps.
Sub-step 1031: Judge whether there is a group of images whose feature value is greater than a preset threshold; if so, go to sub-step 1032; if not, go to sub-step 1033.
Sub-step 1032: Determine that the face is a real human face.
Sub-step 1033: Determine that the face is not a real human face.
The preset threshold can be obtained from prior measurements on real human faces, that is, by measuring the feature value of the depth change of a real face surface in a resting state and setting the threshold according to the measured value; for example, the preset threshold can be set slightly smaller than the measured feature value.
If there are multiple groups of images, in sub-step 1031 the depth change feature value of each group can be compared with the preset threshold in turn. If the feature value of the group currently being compared is greater than the preset threshold, at least one group of images has a feature value greater than the threshold, and the face is determined to be a real human face. If the feature values of all groups are less than or equal to the preset threshold, the face is determined not to be a real human face.
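The decision of sub-steps 1031 to 1033 reduces to a short comparison; a minimal sketch, with an illustrative threshold value that is not taken from the application:

```python
def is_real_face(feature_values, threshold) -> bool:
    """Real iff at least one group's depth-change feature value exceeds
    the preset threshold (sub-steps 1031 to 1033)."""
    return any(v > threshold for v in feature_values)

print(is_real_face([0.02, 0.31, 0.05], threshold=0.15))  # True: one group moved
print(is_real_face([0.00, 0.01, 0.00], threshold=0.15))  # False: rigid surface
```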
Step 103 can be implemented by, for example, a feature extraction algorithm, a machine learning algorithm, or a deep learning algorithm; this embodiment places no limitation on the choice.
Compared with the prior art, this embodiment determines the depth change information of the face surface from three-dimensional images of the face and judges whether the face is real according to that information. On the one hand, the depth information of the face surface recorded in a three-dimensional image is not affected by illumination or posture; on the other hand, even when the detected subject shows no expression, subtle changes of facial muscles and tissues still cause the depth of the face surface to change, so the subject does not need to actively perform actions to cooperate with the detection. The technical solution of the present application can therefore perform face liveness detection accurately and conveniently, with a better user experience. Moreover, the solution of this embodiment judges on the basis of the captured three-dimensional images; compared with prior-art approaches that require comparison against a preset image template, it depends less on pre-stored information.
The second embodiment of the present application relates to a face liveness detection method; the specific flow is shown in Fig. 4.
Step 201: Acquire at least two frames of three-dimensional images of a human face. This step is similar to step 101 in the first embodiment and is not repeated here.
Step 202: Determine the depth change information of the face surface according to the three-dimensional images. This step is similar to step 102 in the first embodiment and is not repeated here.
Step 203: Determine whether the face is a real human face according to the depth change information. Step 203 includes the following sub-steps.
Sub-step 2031: Judge whether there is a group of images whose feature value is greater than a preset threshold; if so, go to sub-step 2032; if not, go to sub-step 2033. This sub-step is similar to sub-step 1031 in the first embodiment and is not repeated here.
Sub-step 2032: Judge whether there is a group of images whose depth change trend matches a preset depth change trend that characterizes face changes caused by externally applied force; if so, go to sub-step 2033; if not, go to sub-step 2034.
Sub-step 2033: Determine that the face is not a real human face. This sub-step is similar to sub-step 1033 in the first embodiment and is not repeated here.
Sub-step 2034: Determine that the face is a real human face. This sub-step is similar to sub-step 1032 in the first embodiment and is not repeated here.
To pass off a non-real face as a real one, an attacker may apply force to deform the non-real face, for example bending, squeezing, or poking it; non-real faces include, for example, face photos, masks, and face models. However, the depth changes of the face surface caused by such applied force have distinct characteristics. For example, when a non-real face is poked, the resulting depth change spreads outward from the poked position, the depth change is largest at the center, and the farther from the center, the smaller the change; when a non-real face is squeezed, the depth change may take the form of wavy lines.
To prevent, as far as possible, misjudgment caused by an attacker applying force to a non-real face, the depth change information corresponding to each group of images further includes the depth change trend of the corresponding pixels in the two frames of three-dimensional images; that is, the depth change information contains both the feature value of the depth changes of the corresponding pixels and their depth change trend. It should be noted that in other examples, the depth change information may also contain only the depth change trend.
Designers can simulate various applied-force actions on non-real faces in advance, and set the depth change trends that characterize face changes caused by applied force according to the actually measured depth changes. As in the example above, when the action is a poke on a non-real face, the corresponding depth change trend spreads outward from the center position, with the largest change at the center and smaller changes farther away; when the action is a squeeze, the corresponding depth change trend is wave-shaped. In addition, the same action applied to different non-real faces may produce different depth change trends; for example, for a poked face photo versus a poked face model, the spread range or the rate at which the depth change decays from the center may differ, and can be determined from actual measurement data. The measurements can therefore also be used for classification: not only can a non-real face be recognized, but it may also be possible to identify which kind of non-real face it is.
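One crude way to flag the poke-like trend described above, with the largest change at a center position and smaller changes farther out, is to compare the mean change on progressively larger rings around the peak. This is a simplistic illustration, not the matching method of the application:

```python
import numpy as np

def looks_like_poke(change_map: np.ndarray) -> bool:
    """Heuristic: the depth change peaks at one point and the mean change
    decreases monotonically with (rounded) distance from that point."""
    change = np.abs(change_map)
    cy, cx = np.unravel_index(np.argmax(change), change.shape)
    ys, xs = np.indices(change.shape)
    dist = np.round(np.hypot(ys - cy, xs - cx)).astype(int)
    ring_means = [change[dist == r].mean() for r in range(dist.max() + 1)]
    return all(a >= b for a, b in zip(ring_means, ring_means[1:]))

# Synthetic poke: change decays with distance from the center pixel.
ys, xs = np.indices((9, 9))
poke = 5.0 / (1.0 + np.hypot(ys - 4, xs - 4))
print(looks_like_poke(poke))  # True
```

A change map that grows toward the edges instead of decaying from its peak fails the monotonicity check and is not flagged as a poke.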
This embodiment provides a way to recognize face changes caused by applied force, which avoids, as far as possible, misjudgment caused by such force changing the surface depth of a non-real face.
The third embodiment of the present application relates to a face liveness detection method. Fig. 5 is a flowchart of one example of this embodiment, as follows.
Step 301: Acquire at least two frames of three-dimensional images of a human face. This step is similar to step 101 in the first embodiment and is not repeated here.
Step 302: Perform face alignment on each of the three-dimensional images.
Step 303: Determine the depth change information of the face surface according to the face-aligned three-dimensional images. This step is similar to step 102 in the first embodiment and is not repeated here.
Step 304: Determine whether the face is a real human face according to the depth change information. This step is similar to step 103 in the first embodiment and is not repeated here.
This example adds step 302; that is, after the three-dimensional images of the face are acquired, face alignment is performed on them first. During acquisition, the posture of the detected subject may change, so the face angle presented in each three-dimensional image may differ. In this embodiment, the facial features can be aligned so that the face angles presented in the three-dimensional images are corrected to the same angle; specifically, the iterative closest point (ICP) algorithm can be used for face alignment, for example. In this example, performing face alignment on the three-dimensional images and determining the depth change information of the face surface from the aligned images improves the accuracy of the determined depth change information, thereby improving the accuracy of face liveness detection.
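Face alignment can use the iterative closest point algorithm mentioned above. Its core rigid-fitting step, given already matched point pairs, is the SVD-based (Kabsch) solution sketched below; this is a single simplified step under the assumption of known correspondences, not a full ICP loop:

```python
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src points
    onto dst, assuming row-wise corresponding 3-D points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# A face point set rotated 30 degrees about z and shifted is mapped back.
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.3, 0.4, 1.2]])
rotated = pts @ Rz.T + np.array([5.0, -2.0, 0.5])
R, t = rigid_fit(rotated, pts)
print(np.allclose(rotated @ R.T + t, pts))  # True
```

A full ICP loop would alternate this fit with a nearest-neighbour correspondence search until the alignment error converges.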
Fig. 6 is a flowchart of another example in this embodiment, as follows.
Step 301: Acquire at least two frames of three-dimensional images of a human face.
Step 301-1: Perform image preprocessing on each of the three-dimensional images.
Step 302: Perform face alignment on the preprocessed three-dimensional images.
Step 303: Determine the depth change information of the face surface according to the face-aligned three-dimensional images.
Step 304: Determine whether the face is a real human face according to the depth change information.
Compared with the example in Fig. 5, this example adds step 301-1; that is, after the three-dimensional images of the face are acquired, image preprocessing is performed on each of them first, and face alignment is then performed on the preprocessed images. The image preprocessing includes deburring, hole filling, smoothing filtering, and so on. In this example, preprocessing the three-dimensional images improves image quality and thus the accuracy of face recognition.
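The hole filling mentioned for step 301-1 can be sketched as follows, assuming, purely for illustration, that missing depth pixels are encoded as zeros:

```python
import numpy as np

def fill_holes(depth: np.ndarray) -> np.ndarray:
    """Replace zero-valued (missing) depth pixels with the mean of
    their valid 3x3 neighbours; returns a new array."""
    out = depth.astype(float)                 # astype makes a copy
    for y, x in zip(*np.where(out == 0)):
        patch = out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        valid = patch[patch > 0]
        if valid.size:
            out[y, x] = valid.mean()
    return out

depth = np.array([[50.0, 50.0, 50.0],
                  [50.0,  0.0, 50.0],
                  [50.0, 50.0, 50.0]])
print(fill_holes(depth)[1, 1])  # 50.0
```

Smoothing could follow with, for example, a small mean or median filter over the filled map; larger holes would need several passes or a more robust interpolation.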
The fourth embodiment of the present application relates to a chip configured to execute the above face liveness detection method.
The fifth embodiment of the present application relates to an electronic device. As shown in Fig. 7, the electronic device includes the above chip 10; it may further include a three-dimensional imaging device 20 and a memory 30 connected to the chip. The chip 10 acquires three-dimensional images of the face through the three-dimensional imaging device 20. The memory 30 stores instructions executable by the chip; when the instructions are executed by the chip, the chip can perform the above face liveness detection method. The memory 30 can also store the acquired three-dimensional images and the various data generated while the chip 10 executes the method. The electronic device may be, for example, a face recognition device in an access control system, a payment system, or a mobile phone unlocking system.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for realizing the present application, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present application.

Claims (12)

  1. A face liveness detection method, characterized in that it comprises:
    acquiring at least two frames of three-dimensional images of a human face;
    determining depth change information of the face surface according to each of the three-dimensional images; and
    determining whether the face is a real human face according to the depth change information.
  2. The method according to claim 1, characterized in that determining the depth change information of the face surface according to each of the three-dimensional images comprises:
    selecting at least one group of images from the three-dimensional images, wherein each group of images comprises two frames of the three-dimensional images; and
    determining the depth change information corresponding to each group of images according to the two frames of the three-dimensional images in the group.
  3. The method according to claim 2, characterized in that the number of frames of the three-dimensional images is denoted as n, and n is an integer greater than or equal to 3; and selecting at least one group of images from the three-dimensional images is specifically: selecting any two of the n frames of the three-dimensional images as a group of images, wherein the number of selected image groups is C(n, 2) = n(n-1)/2.
  4. The method according to claim 2, characterized in that the depth change information corresponding to each group of images comprises one or a combination of the following: a feature value obtained according to the depth changes of the corresponding pixels in the two frames of the three-dimensional images in the group, and the depth change trend of the corresponding pixels in the two frames of the three-dimensional images in the group.
  5. The method according to claim 4, characterized in that the depth change information corresponding to each group of images comprises the feature value obtained according to the depth changes of the corresponding pixels in the two frames of the three-dimensional images in the group; and determining whether the face is a real human face according to the depth change information comprises:
    judging whether there is a group of images whose feature value is greater than a preset threshold; if there is, determining that the face is a real human face; if there is not, determining that the face is not a real human face.
  6. The method according to claim 5, characterized in that the depth change information corresponding to each group of images further comprises the depth change trend of the corresponding pixels in the two frames of the three-dimensional images in the group; and before determining that the face is a real human face, the method further comprises:
    judging whether there is a group of images whose depth change trend matches a preset depth change trend that characterizes face changes caused by externally applied force; if there is not, determining that the face is a real human face; if there is, determining that the face is not a real human face.
  7. The method according to claim 4, characterized in that the feature value comprises one or a combination of the following: the average of the depth changes of the corresponding pixels in the two frames of the three-dimensional images, and the minimum variance of the depth changes of the corresponding pixels in the two frames of the three-dimensional images.
  8. The method according to claim 1, wherein after the acquiring at least two frames of three-dimensional images of the human face, the method further comprises: performing face alignment on each of the three-dimensional images;
    and the determining the depth change information of the face surface according to each of the three-dimensional images specifically comprises: determining the depth change information of the face surface according to the face-aligned three-dimensional images.
  9. The method according to claim 8, wherein after the acquiring at least two frames of three-dimensional images of the human face, the method further comprises: performing image preprocessing on each of the three-dimensional images;
    and the performing face alignment on each of the three-dimensional images specifically comprises: performing face alignment on the preprocessed three-dimensional images.
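Claims 8 and 9 fix a processing order: preprocess each captured frame first, then face-align the preprocessed frames, and only then derive depth change information. A minimal sketch of that order, with placeholder preprocess/align steps that stand in for whatever concrete operations an implementation would use:

```python
# Hedged sketch of the claims-8/9 pipeline ordering. The preprocess and
# align bodies are placeholders (the patent does not specify them here);
# only the order -- preprocess, then align, then depth comparison -- is
# taken from the claims.
def preprocess(frame):
    # e.g. denoising / hole filling; here: clamp invalid negative depths to 0
    return [max(d, 0.0) for d in frame]

def align(frames):
    # e.g. warp each frame onto a common face pose; identity placeholder here
    return frames

def liveness_pipeline(frames, feature_fn):
    cleaned = [preprocess(f) for f in frames]   # claim 9: preprocessing first
    aligned = align(cleaned)                    # claim 8: then face alignment
    # depth change is computed only between aligned consecutive frames
    return [feature_fn(a, b) for a, b in zip(aligned, aligned[1:])]

deltas = liveness_pipeline(
    [[1.0, -2.0], [1.5, 0.5]],
    lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) / len(a),
)
print(deltas)  # [0.5]
```

Aligning before differencing matters: without a common pose, head motion between frames would dominate the per-pixel depth deltas and mask the small surface changes the method relies on.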
  10. The method according to claim 1, wherein each of the three-dimensional images is a point cloud, a depth map, or a mesh.
  11. A chip, configured to perform the face liveness detection method according to any one of claims 1 to 10.
  12. An electronic device, comprising the chip according to claim 11.
PCT/CN2019/104730 2019-09-06 2019-09-06 Face spoofing detection method, chip, and electronic device WO2021042375A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/104730 WO2021042375A1 (en) 2019-09-06 2019-09-06 Face spoofing detection method, chip, and electronic device
CN201980001922.5A CN112997185A (en) 2019-09-06 2019-09-06 Face living body detection method, chip and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/104730 WO2021042375A1 (en) 2019-09-06 2019-09-06 Face spoofing detection method, chip, and electronic device

Publications (1)

Publication Number Publication Date
WO2021042375A1 2021-03-11

Family

ID=74852955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104730 WO2021042375A1 (en) 2019-09-06 2019-09-06 Face spoofing detection method, chip, and electronic device

Country Status (2)

Country Link
CN (1) CN112997185A (en)
WO (1) WO2021042375A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679118A (en) * 2012-09-07 2014-03-26 汉王科技股份有限公司 Human face in-vivo detection method and system
CN105574518A (en) * 2016-01-25 2016-05-11 北京天诚盛业科技有限公司 Method and device for human face living detection
US20160277397A1 (en) * 2015-03-16 2016-09-22 Ricoh Company, Ltd. Information processing apparatus, information processing method, and information processing system
CN106598226A (en) * 2016-11-16 2017-04-26 天津大学 UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning
CN107368769A (en) * 2016-05-11 2017-11-21 北京市商汤科技开发有限公司 Human face in-vivo detection method, device and electronic equipment
CN108124486A (en) * 2017-12-28 2018-06-05 深圳前海达闼云端智能科技有限公司 Face living body detection method based on cloud, electronic device and program product
CN108875509A (en) * 2017-11-23 2018-11-23 北京旷视科技有限公司 Biopsy method, device and system and storage medium
CN109492585A (en) * 2018-11-09 2019-03-19 联想(北京)有限公司 A kind of biopsy method and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740781B (en) * 2016-01-25 2020-05-19 北京眼神智能科技有限公司 Three-dimensional human face living body detection method and device
CN108616688A (en) * 2018-04-12 2018-10-02 Oppo广东移动通信有限公司 Image processing method, device and mobile terminal, storage medium
CN109508706B (en) * 2019-01-04 2020-05-05 江苏正赫通信息科技有限公司 Silence living body detection method based on micro-expression recognition and non-sensory face recognition
CN110199296A (en) * 2019-04-25 2019-09-03 深圳市汇顶科技股份有限公司 Face identification method, processing chip and electronic equipment

Also Published As

Publication number Publication date
CN112997185A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
KR102319177B1 (en) Method and apparatus, equipment, and storage medium for determining object pose in an image
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
WO2020000908A1 (en) Method and device for face liveness detection
CN109952594B (en) Image processing method, device, terminal and storage medium
CN106372629B (en) Living body detection method and device
EP1677250B9 (en) Image collation system and image collation method
JP2019504386A (en) Facial image processing method and apparatus, and storage medium
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
CN111639522B (en) Living body detection method, living body detection device, computer equipment and storage medium
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
CN105243371A (en) Human face beauty degree detection method and system and shooting terminal
KR101759188B1 (en) the automatic 3D modeliing method using 2D facial image
CN109937434B (en) Image processing method, device, terminal and storage medium
WO2020215283A1 (en) Facial recognition method, processing chip and electronic device
US20220172331A1 (en) Image inpainting with geometric and photometric transformations
US9613404B2 (en) Image processing method, image processing apparatus and electronic device
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN110287862B (en) Anti-candid detection method based on deep learning
CN111222433B (en) Automatic face auditing method, system, equipment and readable storage medium
JP2017123087A (en) Program, device and method for calculating normal vector of planar object reflected in continuous photographic images
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
JPWO2008041518A1 (en) Image processing apparatus, image processing apparatus control method, and image processing apparatus control program
JP7268725B2 (en) Image processing device, image processing method, and image processing program
CN113128428B (en) Depth map prediction-based in vivo detection method and related equipment
CN111182207B (en) Image shooting method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 19944591; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 19944591; Country of ref document: EP; Kind code of ref document: A1