CN109766785B - Living body detection method and device for human face

Living body detection method and device for human face

Info

Publication number
CN109766785B
Authority
CN
China
Prior art keywords
face
detected
determining
change degree
area
Prior art date
Legal status
Active
Application number
CN201811572285.0A
Other languages
Chinese (zh)
Other versions
CN109766785A (en)
Inventor
侯晓楠 (Hou Xiaonan)
Current Assignee
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date
Filing date
Publication date
Application filed by China Unionpay Co Ltd
Priority to CN201811572285.0A
Publication of CN109766785A
Application granted
Publication of CN109766785B
Legal status: Active


Landscapes

  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application discloses a living body detection method and device for a human face. The method comprises the following steps: obtaining feature vectors corresponding to a face to be detected at different moments and position information corresponding to preset key points at those moments; determining the face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the position information corresponding to the different moments; and determining that the face to be detected passes living body detection after determining that the face change degree is greater than a preset threshold value. In this way, whether the face to be detected is a living body can be determined by judging whether the positions of the preset key points in the face to be detected change at different moments. Because a fake face model is static, the living body detection method provided by the embodiments of the application can effectively identify fake face models, thereby improving the security of face recognition and, in turn, the reliability of a face recognition system.

Description

Living body detection method and device for human face
Technical Field
The application relates to the technical field of face recognition, in particular to a living body detection method and device for a face.
Background
At present, biometric identification technology is widely applied in the security field and is one of the main means of authenticating a user's identity. Biometric identification, and face recognition in particular, has been adopted across many domains, such as financial payment and access control. Given the convenience, ease of use, user friendliness and contactless operation of face recognition technology, it has advanced rapidly in recent years.
However, conventional face recognition technology generally only processes the image captured by the camera and does not consider whether the captured subject is a real person. As a result, fake face models such as photo faces and mask faces can pass a face recognition system, which easily compromises the security of face recognition.
Based on this, a living body detection method for a human face is needed to solve the problem that face recognition technology in the prior art cannot recognize fake face models, which affects the security of face recognition.
Disclosure of Invention
The embodiments of the application provide a living body detection method and device for a human face, to solve the technical problem that face recognition technology in the prior art cannot recognize fake face models, which affects the security of face recognition.
The embodiment of the application provides a living body detection method of a human face, which comprises the following steps:
acquiring feature vectors corresponding to faces to be detected at different moments;
acquiring position information corresponding to preset key points in the face to be detected at different moments, wherein the position information is the position of the preset key points in the face to be detected; the preset key points are areas capable of representing facial expressions;
determining the face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the position information corresponding to the different moments;
and if the face change degree of the face to be detected is greater than a preset threshold value, determining that the face to be detected passes living body detection.
In this way, whether the face to be detected is a living body can be determined by judging whether the positions of the preset key points in the face to be detected change at different moments. Because a fake face model is static, the living body detection method provided by the embodiments of the application can effectively identify fake face models, thereby improving the security of face recognition and, in turn, the reliability of a face recognition system.
In one possible implementation manner, determining the face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the position information corresponding to the different moments includes:
determining feature similarity according to the feature vectors corresponding to the different moments;
determining the position change degree according to the position information corresponding to the different moments;
and determining the face change degree of the face to be detected according to the feature similarity and the position change degree.
In one possible implementation manner, obtaining feature vectors corresponding to faces to be detected at different moments includes:
acquiring feature vectors corresponding to each segmented region of the face to be detected at different moments; each segmentation area is determined according to the facial feature position of the face;
according to the feature vectors corresponding to the different moments, determining the feature similarity comprises the following steps:
determining the feature similarity of each divided region according to the feature vectors corresponding to each divided region at different moments;
according to the feature similarity and the position change degree, determining the face change degree of the face to be detected comprises the following steps:
and determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree.
By segmenting the face to be detected, the expression sensitivity of each segmented region can be comprehensively considered, so that the accuracy of living body detection is improved.
In a possible implementation manner, determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree includes:
for any preset key point, determining the segmented region to which the preset key point belongs;
determining the face change degree of the segmented region according to the feature similarity of the segmented region and the position change degree of the preset key point;
and determining the face change degree of the face to be detected according to the face change degree of each divided area.
In one possible implementation, the segmentation area includes a mouth area, a nose area, a cheek area, an eyebrow area, an eye area, and a forehead area.
In a possible implementation manner, obtaining position information corresponding to preset key points in the face to be detected at the different moments includes:
acquiring position information corresponding to preset key points in the face to be detected at the different moments by adopting time-of-flight (TOF) technology;
or
acquiring position information corresponding to the preset key points in the face to be detected at the different moments by adopting a 3D face reconstruction technology.
Acquiring the relevant face data with TOF technology means the user's face data can be collected without the user noticing; it demands little cooperation from the user and provides a better user experience.
In one possible implementation manner, after determining that the face to be detected passes living body detection, the method further includes:
determining a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector (that is, the feature vectors obtained at the first moment and the second moment);
and determining, according to the feature vector corresponding to the face to be detected and the pre-stored feature vector corresponding to at least one detected face, that the face to be detected passes identity authentication if a similar face of the face to be detected exists among the at least one detected face.
The embodiment of the application provides a living body detection device of a human face, which comprises:
the acquisition unit is used for acquiring the feature vectors corresponding to the face to be detected at different moments; acquiring position information corresponding to preset key points in the face to be detected at different moments, wherein the position information is the position of the preset key points in the face to be detected; the preset key points are areas capable of representing facial expressions;
the processing unit is used for determining the face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the position information corresponding to the different moments; and if the face change degree of the face to be detected is greater than a preset threshold value, determining that the face to be detected passes living body detection.
In a possible implementation manner, the processing unit is specifically configured to:
determining feature similarity according to the feature vectors corresponding to the different moments; determining the position change degree according to the position information corresponding to the different moments; and determining the face change degree of the face to be detected according to the feature similarity and the position change degree.
In a possible implementation manner, the acquiring unit is specifically configured to:
acquiring feature vectors corresponding to each segmented region of the face to be detected at different moments; each segmentation area is determined according to the facial feature position of the face;
the processing unit is specifically configured to:
determining the feature similarity of each divided region according to the feature vectors corresponding to each divided region at different moments;
and determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree.
In a possible implementation manner, the processing unit is specifically configured to:
for any preset key point, determining the segmented region to which the preset key point belongs; determining the face change degree of the segmented region according to the feature similarity of the segmented region and the position change degree of the preset key point; and determining the face change degree of the face to be detected according to the face change degree of each divided region.
In one possible implementation, the segmentation area includes a mouth area, a nose area, a cheek area, an eyebrow area, an eye area, and a forehead area.
In a possible implementation manner, the acquiring unit is specifically configured to:
acquiring position information corresponding to preset key points in the face to be detected at the different moments by adopting time-of-flight (TOF) technology;
or
acquiring position information corresponding to the preset key points in the face to be detected at the different moments by adopting a 3D face reconstruction technology.
In a possible implementation manner, after determining that the face to be detected passes living body detection, the processing unit is further configured to:
determining a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; and determining, according to the feature vector corresponding to the face to be detected and the pre-stored feature vector corresponding to at least one detected face, that the face to be detected passes identity authentication if a similar face of the face to be detected exists among the at least one detected face.
The embodiment of the application also provides a device having the function of implementing the above living body detection method for a human face. The functions may be implemented by hardware executing corresponding software. In one possible design, the apparatus includes a processor, a transceiver and a memory. The memory is used to store computer-executable instructions, the transceiver is used for communication between the device and other communication entities, and the processor is connected with the memory through a bus. When the device runs, the processor executes the computer-executable instructions stored in the memory, so that the device performs the living body detection method for a human face.
Embodiments of the present application also provide a computer storage medium storing a software program which, when read and executed by one or more processors, implements the living body detection method for a human face described in the various possible implementations above.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the living body detection method for a human face described in the various possible implementations above.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments will be briefly described below.
FIG. 1 is a schematic flow chart corresponding to a living body detection method for a human face according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the divided regions of a face according to an embodiment of the present application;
FIG. 3a is a schematic diagram of the preset key points corresponding to an eye;
FIG. 3b is a schematic diagram of the preset key points corresponding to the mouth;
FIG. 4 is a schematic diagram of the relationship between the preset key points and the divided regions;
FIG. 5 is a schematic overall flow chart of the living body detection of a face according to an embodiment of the application;
FIG. 6 is a schematic diagram of an identity authentication process using the living body detection technique according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a living body detection apparatus for a human face according to an embodiment of the present application.
Detailed Description
The application will be described in detail below with reference to the drawings, and the specific operation method in the method embodiment can also be applied to the device embodiment.
Fig. 1 schematically illustrates a flow chart corresponding to a method for detecting a living body of a face, which is provided by an embodiment of the present application, as shown in fig. 1, and includes the following steps:
step 101, obtaining feature vectors corresponding to faces to be detected at different moments.
Step 102, obtaining position information corresponding to preset key points in the face to be detected at different moments.
Step 103, determining the face change degree of the face to be detected according to the feature vectors corresponding to different moments and the position information corresponding to different moments;
step 104, if the face change degree of the face to be detected is greater than the preset threshold, determining that the face to be detected passes through living body detection.
In this way, whether the face to be detected is a living body can be determined by judging whether the positions of the preset key points in the face to be detected change at different moments. Because a fake face model is static, the living body detection method provided by the embodiments of the application can effectively identify fake face models, thereby improving the security of face recognition and, in turn, the reliability of a face recognition system.
Specifically, in step 101 and step 102, the different times may refer to two different times, or may refer to three different times, or may refer to N different times (N is an integer greater than 1). For convenience of description, the following description will take two different moments as examples, that is, in step 101, feature vectors corresponding to a face to be detected at a first moment and a second moment are obtained respectively; in step 102, position information corresponding to a preset key point in a face to be detected at a first time and a second time is obtained, wherein the first time and the second time are two different times.
Based on the explanation of the different moments, in step 101, the features corresponding to the face data of the face to be detected at any moment can be extracted through a preset neural network model, and the feature vector corresponding to the face to be detected at that moment can be obtained from the extracted features.
Further, the preset neural network model may be any of several types of neural network model, for example a 2D deep neural network model or a 3D deep neural network model, which is not specifically limited.
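As one illustration of this step (an assumption for demonstration only, since the application leaves the model unspecified), the sketch below uses a pretrained torchvision ResNet-18 with its classification head removed as a stand-in 2D deep neural network that maps a face crop to a feature vector:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# A stand-in for the "preset neural network model": a pretrained 2D CNN
# whose classification head is replaced so it outputs a feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # output becomes a 512-d embedding
backbone.eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature_vector(face_image):
    """Feature vector for one face crop at one moment (PIL image input)."""
    with torch.no_grad():
        x = preprocess(face_image).unsqueeze(0)  # add batch dimension
        return backbone(x).squeeze(0)            # shape: (512,)
```

Calling extract_feature_vector on the frames captured at the first and second moments yields the per-moment feature vectors used in the following steps.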
Considering that different areas of the face to be detected differ in their sensitivity to facial expressions (for example, areas such as the eyes and mouth are relatively more sensitive, while areas such as the cheeks and forehead are relatively less sensitive), the face to be detected can be divided into a plurality of divided regions, improving the accuracy of living body detection. Each divided region can be determined according to the positions of the facial features of the face. Fig. 2 is a schematic diagram of the divided regions of a face according to an embodiment of the present application. As shown in fig. 2, the face may be divided into a plurality of divided regions, for example a mouth region, a nose region, a cheek region, an eyebrow region, an eye region and a forehead region, which is not particularly limited.
Based on the divided regions of the face shown in fig. 2, in the embodiment of the present application the feature vector corresponding to each divided region of the face to be detected at any moment may also be obtained. Specifically, the features corresponding to the face data of a given divided region of the face to be detected at any moment can be extracted through the preset neural network model, and the feature vector corresponding to that divided region at that moment can be obtained from the extracted features.
In step 102, the preset key points refer to areas capable of representing facial expression. For example, when a person smiles the eyes curve, so the eyes may correspond to a plurality of preset key points: as shown in fig. 3a, an example of the preset key points corresponding to an eye, the inner eye corner, the outer eye corner, the center of the upper eyelid, the center of the lower eyelid and the center of the eyeball may all serve as preset key points for the eye. For another example, when a person cries the mouth may close tightly, so the mouth may correspond to a plurality of preset key points: as shown in fig. 3b, an example of the preset key points corresponding to the mouth, the mouth corners, the center of the upper lip, the center of the lower lip and the incisors may all serve as preset key points for the mouth.
In the embodiment of the present application, there are various ways of obtaining the position information corresponding to the preset key points in the face to be detected at any moment. One possible implementation is to adopt Time of Flight (TOF) technology. Specifically, TOF obtains the distance to a target by continuously sending light pulses to the target, receiving the light returned from the object with a sensor, and measuring the round-trip flight time of the emitted pulses. In the embodiment of the application, TOF technology can be applied in the camera to acquire the position information (such as coordinate data) corresponding to the preset key points in the face to be detected at any moment. Acquiring the relevant face data with TOF means the user's face data can be collected without the user noticing; it demands little cooperation from the user and provides a better user experience.
In another possible implementation manner, a 3D face reconstruction technology may be adopted to obtain the position information corresponding to the preset key points in the face to be detected at any moment. Specifically, for a collected face image (such as each frame of face image in a surveillance video), a cascaded regression (Cascaded Regression, CR) method can be used to build a strong regressor composed of a cascade of weak regressors; combined with a deep learning algorithm, this achieves end-to-end reconstruction: a face image is input, and a 3D model of the face is directly output. The position information (such as coordinate data) corresponding to the preset key points in the face to be detected at any moment can then be determined from the 3D model of the face.
In other possible implementation manners, other methods may also be adopted to obtain the position information corresponding to the preset key points in the face to be detected at any moment; for example, the position information may be entered manually by the user to be authenticated, which is not particularly limited.
In the embodiment of the application, on the basis of the divided regions of the face shown in fig. 2, after the position information corresponding to the preset key points at any moment is obtained, the divided region to which each preset key point belongs can further be determined. For example, as shown in fig. 4, assume the face to be detected has 30 preset key points, numbered 1 to 30 in fig. 4, and 6 divided regions outlined by dotted lines in fig. 4 (forehead region, eyebrow region, eye region, nose region, mouth region and cheek region); the divided region to which each preset key point belongs can then be determined by combining the position information of each preset key point with the positions of the divided regions.
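The assignment of key points to divided regions can be sketched as follows, under the simplifying assumption that each divided region is available as an axis-aligned bounding box; the function and data layout are hypothetical, not taken from the application:

```python
def assign_keypoints_to_regions(keypoints, region_boxes):
    """keypoints: {id: (x, y)}; region_boxes: {name: (x0, y0, x1, y1)}.

    Returns {region_name: [keypoint ids]}, i.e. the divided region to
    which each preset key point belongs.
    """
    assignment = {name: [] for name in region_boxes}
    for kp_id, (x, y) in keypoints.items():
        for name, (x0, y0, x1, y1) in region_boxes.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                assignment[name].append(kp_id)
                break  # each key point belongs to exactly one region
    return assignment
```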
In step 103, the face change degree of the face to be detected may be determined according to the feature vectors corresponding to the different moments and the position information corresponding to the preset key points at the different moments. There are various specific ways of doing so. In one possible implementation, the feature similarity is determined according to the feature vectors corresponding to the face to be detected at the different moments, the position change degree is determined according to the position information corresponding to the preset key points at the different moments, and the face change degree of the face to be detected is determined according to the feature similarity and the position change degree.
That is, the face change degree of the face to be detected may be determined according to formula (1):

Δ = λ₁·S − λ₂·D    (1)

where Δ is the face change degree of the face to be detected, S is the feature similarity, D is the position change degree, λ₁ is the weight corresponding to the feature similarity, and λ₂ is the weight corresponding to the position change degree.
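A direct reading of formula (1) in code, with cosine similarity standing in for the feature similarity S and the mean key-point displacement standing in for the position change degree D (both concrete choices, like the default weights, are assumptions; the application does not fix them):

```python
import numpy as np

def face_change_degree(feat_t1, feat_t2, pos_t1, pos_t2, lam1=1.0, lam2=1.0):
    """Formula (1): delta = lam1 * S - lam2 * D (weights are illustrative).

    feat_t1/feat_t2: feature vectors at the two moments.
    pos_t1/pos_t2:   (n, 2) arrays of preset key-point coordinates.
    """
    S = float(np.dot(feat_t1, feat_t2) /
              (np.linalg.norm(feat_t1) * np.linalg.norm(feat_t2)))
    D = float(np.linalg.norm(pos_t1 - pos_t2, axis=1).mean())
    return lam1 * S - lam2 * D
```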
Consider that, in the embodiment of the application, the face to be detected may be divided into regions so as to obtain the feature vectors corresponding to each divided region of the face at different moments. On this basis, if the feature vectors corresponding to the divided regions of the face to be detected at the different moments are obtained, the feature similarity of each divided region can be determined according to the feature vectors corresponding to that region at the different moments; the position change degree of each divided region can be determined according to the position change degree of the preset key points included in that region; furthermore, the face change degree of each divided region can be determined according to its feature similarity and position change degree, and the face change degree of the face to be detected can then be determined.
Still further, the position change degree of each divided region may be determined according to formula (2):

Dᵢ = (1/n)·Σⱼ dᵢⱼ    (2)

where Dᵢ is the position change degree of the i-th divided region, 1 ≤ i ≤ M, M is the number of divided regions in the face to be detected, and M is an integer greater than 1; dᵢⱼ is the Euclidean distance between the positions of the j-th preset key point in the i-th divided region at the different moments, 1 ≤ j ≤ n, n is the number of preset key points in the i-th divided region, and n is an integer greater than 1.
It should be noted that formula (2) is merely an example; those skilled in the art may also calculate the position change degree of each divided region in other ways, such as in vector form, which is not specifically limited.
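In code, the reconstructed formula (2) amounts to averaging the per-key-point Euclidean displacements of a region (a minimal sketch under that reading):

```python
import numpy as np

def region_position_change(pos_t1, pos_t2):
    """Position change degree D_i of one divided region, per formula (2).

    pos_t1, pos_t2: (n, 2) arrays of the region's preset key-point
    coordinates at the two moments.
    """
    d_ij = np.linalg.norm(pos_t1 - pos_t2, axis=1)  # distances for j = 1..n
    return float(d_ij.mean())                       # D_i
```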
Further, the face change degree of each divided region can be determined from the feature similarity of that divided region and the position change degree of that divided region, according to formula (3):

Δᵢ = λ₁·Sᵢ − λ₂·Dᵢ    (3)

where Δᵢ is the face change degree of the i-th divided region, 1 ≤ i ≤ M, M is the number of divided regions in the face to be detected, and M is an integer greater than 1; Sᵢ is the feature similarity of the i-th divided region; Dᵢ is the position change degree of the i-th divided region; λ₁ is the weight corresponding to the feature similarity; and λ₂ is the weight corresponding to the position change degree.
Furthermore, the face change degree of the face to be detected may be determined according to formula (4):

Δ = Σᵢ ωᵢ·Δᵢ    (4)

where Δ is the face change degree of the face to be detected; Δᵢ is the face change degree of the i-th divided region, 1 ≤ i ≤ M, M is the number of divided regions in the face to be detected, and M is an integer greater than 1; and ωᵢ is the weight corresponding to the i-th divided region.
It should be noted that formula (4) is only an example, and those skilled in the art may also determine the face change degree of the face to be detected in other ways once the face change degree of each divided region has been obtained. For example, formula (5) shows another way of determining the face change degree of the face to be detected:

Δ = Σᵢ (Thrᵢ − Δᵢ)₊    (5)

where Δ is the face change degree of the face to be detected, and (Thrᵢ − Δᵢ)₊ = 1 if Δᵢ ≤ Thrᵢ, and (Thrᵢ − Δᵢ)₊ = 0 otherwise.

In formula (5), a different threshold Thrᵢ is set for each divided region according to the sensitivity of that region to expression changes: the higher the sensitivity, the more noticeably the region changes and the lower its similarity, so the smaller the threshold that is set. In other words, if the similarity Δᵢ of a divided region satisfies Δᵢ ≤ Thrᵢ, the expression of that divided region has changed; otherwise, the expression of that divided region is unchanged.
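Under this reading, the variant of formula (5) reduces to counting the divided regions judged to have changed, as in the sketch below; the face then passes living body detection when this count reaches half of the M divided regions, per the decision rule described later:

```python
def changed_region_count(region_deltas, region_thresholds):
    """Formula (5): Delta is the number of divided regions whose
    similarity delta_i fell to or below its per-region threshold Thr_i,
    i.e. the regions whose expression is judged to have changed."""
    return sum(1 for delta_i, thr_i in zip(region_deltas, region_thresholds)
               if delta_i <= thr_i)
```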
In other possible implementation manners, the feature vectors corresponding to the face to be detected at different times and the position information corresponding to the preset key points at different times may be input into a pre-trained similarity model, so as to determine the face change degree of the face to be detected, which is not particularly limited.
In step 104, whether the face to be detected passes living body detection can be determined by judging whether the face change degree of the face to be detected is greater than a preset threshold value: if it is, it can be determined that the face to be detected passes living body detection; otherwise, it can be determined that the face to be detected does not pass living body detection. The preset threshold may be set by a person skilled in the art according to experience and the actual situation, and is not specifically limited.
For example, taking the face change degree of the face to be detected calculated with formula (4): if the face change degree Δ is greater than the preset threshold, it is determined that the face to be detected has changed between the different moments and the capture of facial expression change is considered successful, so the person to be detected is determined to pass living body detection; otherwise, it is determined that the face to be detected has not changed between the different moments and the capture of facial expression change is considered to have failed, so the person to be detected is determined not to pass living body detection.
For another example, taking the face change degree of the face to be detected calculated with formula (5): if Δ ≥ M/2, that is, more than half of the divided regions in the face to be detected changed between the different moments, the capture of facial expression change is considered successful, and the person to be detected is determined to pass living body detection; otherwise, if the number of divided regions whose expression changed between the different moments does not exceed half, the capture of facial expression change is considered to have failed, and the person to be detected is determined not to pass living body detection.
To describe the above living body detection method more clearly, the overall flow of the living body detection of a face according to the embodiment of the present application is described with reference to fig. 5; the details shown in fig. 5 are not repeated here.
In the embodiment of the present application, after step 104 is executed, face recognition may also be performed to determine whether the face to be detected passes identity authentication. Specifically, face recognition may determine the feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; then, according to the feature vector corresponding to the face to be detected and the pre-stored feature vectors corresponding to at least one detected face, if a similar face of the face to be detected exists among the at least one detected face, it may be determined that the face to be detected passes identity authentication.
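As one concrete (assumed) realization of this comparison step, the sketch below matches the probe's feature vector against the pre-stored feature vectors of detected faces by cosine similarity; the enrolled-feature layout and the similarity threshold are illustrative:

```python
import numpy as np

def authenticate(probe_feature, enrolled_features, sim_threshold=0.8):
    """Compare the probe's feature vector against pre-stored feature
    vectors of enrolled faces using cosine similarity (one common
    choice; the threshold is an illustrative assumption)."""
    probe = probe_feature / np.linalg.norm(probe_feature)
    for name, feat in enrolled_features.items():
        ref = feat / np.linalg.norm(feat)
        if float(probe @ ref) >= sim_threshold:
            return name  # a similar enrolled face exists: authentication passes
    return None          # no similar face found: authentication fails
```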
In other possible implementation manners, the face to be detected may also be recognized using an existing deep neural network model, which is not particularly limited.
Taking identity authentication as an example application of the living body detection technology, the overall identity authentication process using the living body detection technology according to the embodiment of the present application is described with reference to fig. 6; the details shown in fig. 6 are not repeated here.
Based on the same inventive concept, fig. 7 schematically illustrates a structural diagram of a living body detection apparatus for a human face according to an embodiment of the present application. As shown in fig. 7, the apparatus includes an acquisition unit 201 and a processing unit 202, wherein:
an obtaining unit 201, configured to obtain feature vectors corresponding to faces to be detected at different times; acquiring position information corresponding to preset key points in the face to be detected at different moments, wherein the position information is the position of the preset key points in the face to be detected; the preset key points are areas capable of representing facial expressions;
a processing unit 202, configured to determine the face change degree of the face to be detected according to the feature vectors corresponding to the different moments and the position information corresponding to the different moments; and if the face change degree of the face to be detected is greater than a preset threshold value, determine that the face to be detected passes living body detection.
In one possible implementation, the processing unit 202 is specifically configured to:
determining feature similarity according to the feature vectors corresponding to the different moments; determining the position change degree according to the position information corresponding to the different moments; and determining the face change degree of the face to be detected according to the feature similarity and the position change degree.
In one possible implementation manner, the obtaining unit 201 is specifically configured to:
acquiring feature vectors corresponding to each segmented region of the face to be detected at different moments; each segmentation area is determined according to the facial feature position of the face;
the processing unit 202 is specifically configured to:
determining the feature similarity of each divided region according to the feature vectors corresponding to each divided region at different moments;
and determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree.
In one possible implementation, the processing unit 202 is specifically configured to:
for any preset key point, determining the segmented region to which the preset key point belongs; determining the face change degree of the segmented region according to the feature similarity of the segmented region and the position change degree of the preset key point; and determining the face change degree of the face to be detected according to the face change degree of each divided region.
In one possible implementation, the segmentation area includes a mouth area, a nose area, a cheek area, an eyebrow area, an eye area, and a forehead area.
In one possible implementation manner, the obtaining unit 201 is specifically configured to:
acquiring position information corresponding to preset key points in the face to be detected at the different moments by adopting time-of-flight (TOF) technology;
or
acquiring position information corresponding to the preset key points in the face to be detected at the different moments by adopting a 3D face reconstruction technology.
In a possible implementation manner, after determining that the face to be detected passes living body detection, the processing unit 202 is further configured to:
determining a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; and determining, according to the feature vector corresponding to the face to be detected and the pre-stored feature vector corresponding to at least one detected face, that the face to be detected passes identity authentication if a similar face of the face to be detected exists among the at least one detected face.
The embodiment of the application also provides a device having the function of implementing the above living body detection method for a human face. The functions may be implemented by hardware executing corresponding software. In one possible design, the apparatus includes a processor, a transceiver and a memory. The memory is used to store computer-executable instructions, the transceiver is used for communication between the device and other communication entities, and the processor is connected with the memory through a bus. When the device runs, the processor executes the computer-executable instructions stored in the memory, so that the device performs the living body detection method for a human face.
Embodiments of the present application also provide a computer storage medium storing a software program which, when read and executed by one or more processors, implements the living body detection method for a human face described in the various possible implementations above.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the living body detection method for a human face described in the various possible implementations above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (16)

1. A method for in-vivo detection of a human face, the method comprising:
acquiring feature vectors corresponding to each divided region of a face to be detected at different moments, wherein each divided region is determined according to the positions of the facial features of the face;
acquiring position information corresponding to preset key points in each divided region of the face to be detected at the different moments, wherein the position information is the position of the preset key points in each divided region of the face to be detected, and the preset key points are areas capable of representing facial expressions;
determining the face change degree corresponding to each divided region according to the feature vectors corresponding to the divided region at the different moments and the position information corresponding to the different moments;
determining that the face to be detected passes living body detection according to the corresponding relation between the face change degree corresponding to each divided region and a preset threshold value;
wherein determining that the face to be detected passes living body detection according to the corresponding relation between the face change degree corresponding to each divided region and a preset threshold value comprises:
determining the face change degree of the face to be detected according to the face change degree corresponding to each divided region;
and determining that the face to be detected passes living body detection according to the corresponding relation between the face change degree of the face to be detected and a preset threshold value;
wherein determining the face change degree of the face to be detected according to the face change degree corresponding to each divided region comprises:
determining the face change degree of the face to be detected according to the face change degree corresponding to each divided region and the weight corresponding to each divided region;
or,
determining the face change degree of the face to be detected according to the face change degree corresponding to each divided region and a preset threshold corresponding to each divided region.
2. The method according to claim 1, wherein determining the face change degree corresponding to each divided region according to the feature vectors corresponding to the divided region at the different moments and the position information corresponding to the different moments comprises:
for each of the divided regions, the following operations are performed:
determining feature similarity according to the feature vectors corresponding to the different moments;
determining the position change degree according to the position information corresponding to the different moments;
and determining the face change degree of the face to be detected according to the feature similarity and the position change degree.
3. The method of claim 2, wherein determining feature similarities from the feature vectors corresponding to the different moments comprises:
determining the feature similarity of each divided region according to the feature vectors corresponding to each divided region at different moments;
according to the feature similarity and the position change degree, determining the face change degree of the face to be detected comprises the following steps:
and determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree.
4. A method according to claim 3, wherein the degree of face change for each segmented region is determined by:
for any preset key point, determining the segmented region to which the preset key point belongs;
and determining the face change degree of the segmented region according to the feature similarity of the segmented region and the position change degree of the preset key point.
5. A method according to claim 3, wherein the segmentation area comprises a mouth area, a nose area, a cheek area, an eyebrow area, an eye area, and a forehead area.
6. The method according to claim 1, wherein obtaining position information corresponding to a preset key point in each segmented region of the face to be detected at the different time instants includes:
for each of the divided regions, the following operations are performed:
acquiring position information corresponding to preset key points in the face to be detected at different moments by adopting a time-of-flight TOF technology;
or
acquiring position information corresponding to the preset key points in the face to be detected at the different moments by adopting a 3D face reconstruction technology.
7. The method according to any one of claims 1 to 6, further comprising, after determining that the face to be detected passes living body detection:
determining a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector;
and determining, according to the feature vector corresponding to the face to be detected and the pre-stored feature vector corresponding to at least one detected face, that the face to be detected passes identity authentication if a similar face of the face to be detected exists among the at least one detected face.
8. A living body detection apparatus for a human face, the apparatus comprising:
an acquisition unit, configured to acquire feature vectors corresponding to each divided region of the face to be detected at different moments, wherein each divided region is determined according to the positions of the facial features of the face, and to acquire position information corresponding to preset key points in each divided region of the face to be detected at the different moments, wherein the position information is the position of the preset key points in each divided region of the face to be detected, and the preset key points are areas capable of representing facial expressions;
a processing unit, configured to determine the face change degree of each divided region of the face to be detected according to the feature vectors corresponding to the divided regions at the different moments and the position information corresponding to the different moments, and to determine that the face to be detected passes living body detection according to the corresponding relation between the face change degree of each divided region and a preset threshold value;
wherein the processing unit is specifically configured to determine the face change degree of the face to be detected according to the face change degree corresponding to each divided region, and to determine that the face to be detected passes living body detection according to the corresponding relation between the face change degree of the face to be detected and a preset threshold value;
and the processing unit is specifically configured to determine the face change degree of the face to be detected according to the face change degree corresponding to each divided region and the weight corresponding to each divided region, or according to the face change degree corresponding to each divided region and a preset threshold corresponding to each divided region.
9. The apparatus according to claim 8, wherein the processing unit is specifically configured to:
determining feature similarity according to the feature vectors corresponding to the different moments; determining the position change degree according to the position information corresponding to the different moments; and determining the face change degree of the face to be detected according to the feature similarity and the position change degree.
10. The apparatus according to claim 9, wherein the acquisition unit is specifically configured to:
determining the feature similarity of each divided region according to the feature vectors corresponding to each divided region at different moments;
and determining the face change degree of the face to be detected according to the feature similarity of each segmented region and the position change degree.
11. The apparatus according to claim 10, wherein the processing unit is specifically configured to:
for any preset key point, determining the segmented region to which the preset key point belongs; and determining the face change degree of the segmented region according to the feature similarity of the segmented region and the position change degree of the preset key point.
12. The apparatus of claim 10, wherein the segmented region comprises a mouth region, a nose region, a cheek region, an eyebrow region, an eye region, and a forehead region.
13. The apparatus according to claim 8, wherein the acquisition unit is specifically configured to:
acquiring position information corresponding to preset key points in the face to be detected at different moments by adopting a time-of-flight TOF technology;
or
acquiring position information corresponding to the preset key points in the face to be detected at the different moments by adopting a 3D face reconstruction technology.
14. The apparatus according to any one of claims 8 to 13, wherein, after determining that the face to be detected passes living body detection, the processing unit is further configured to:
determine a feature vector corresponding to the face to be detected according to the first feature vector and the second feature vector; and determine, according to the feature vector corresponding to the face to be detected and the pre-stored feature vector corresponding to at least one detected face, that the face to be detected passes identity authentication if a similar face of the face to be detected exists among the at least one detected face.
15. A computer readable storage medium storing instructions which, when run on a computer, cause the computer to carry out the method of any one of claims 1 to 7.
16. A computer device, comprising:
a memory for storing program instructions;
a processor for invoking program instructions stored in said memory to perform the method of any of claims 1-7 in accordance with the obtained program.
CN201811572285.0A 2018-12-21 2018-12-21 Living body detection method and device for human face Active CN109766785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811572285.0A CN109766785B (en) 2018-12-21 2018-12-21 Living body detection method and device for human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811572285.0A CN109766785B (en) 2018-12-21 2018-12-21 Living body detection method and device for human face

Publications (2)

Publication Number Publication Date
CN109766785A (en) 2019-05-17
CN109766785B (en) 2023-09-01

Family

ID=66450831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811572285.0A Active CN109766785B (en) 2018-12-21 2018-12-21 Living body detection method and device for human face

Country Status (1)

Country Link
CN (1) CN109766785B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132996A (en) * 2019-06-05 2020-12-25 Tcl集团股份有限公司 Door lock control method, mobile terminal, door control terminal and storage medium
CN112395902A (en) * 2019-08-12 2021-02-23 北京旷视科技有限公司 Face living body detection method, image classification method, device, equipment and medium
CN110458098B (en) * 2019-08-12 2023-06-16 上海天诚比集科技有限公司 Face comparison method for face angle measurement
CN110728215A (en) * 2019-09-26 2020-01-24 杭州艾芯智能科技有限公司 Face living body detection method and device based on infrared image
CN111274879B (en) * 2020-01-10 2023-04-25 北京百度网讯科技有限公司 Method and device for detecting reliability of living body detection model
CN111783644B (en) * 2020-06-30 2023-07-14 百度在线网络技术(北京)有限公司 Detection method, detection device, detection equipment and computer storage medium
CN112819986A (en) * 2021-02-03 2021-05-18 广东共德信息科技有限公司 Attendance system and method
CN112927383B (en) * 2021-02-03 2022-12-02 广东共德信息科技有限公司 Cross-regional labor worker face recognition system and method based on building industry
CN112927382B (en) * 2021-02-03 2023-01-10 广东共德信息科技有限公司 Face recognition attendance system and method based on GIS service

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104348778A (en) * 2013-07-25 2015-02-11 信帧电子技术(北京)有限公司 Remote identity authentication system, terminal and method carrying out initial face identification at handset terminal
CN104361326A (en) * 2014-11-18 2015-02-18 新开普电子股份有限公司 Method for distinguishing living human face
CN105389554A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Face-identification-based living body determination method and equipment
CN105447432A (en) * 2014-08-27 2016-03-30 北京千搜科技有限公司 Face anti-fake method based on local motion pattern
CN106850648A (en) * 2015-02-13 2017-06-13 腾讯科技(深圳)有限公司 Auth method, client and service platform
CN107220590A (en) * 2017-04-24 2017-09-29 广东数相智能科技有限公司 A kind of anti-cheating network research method based on In vivo detection, apparatus and system
CN107330914A (en) * 2017-06-02 2017-11-07 广州视源电子科技股份有限公司 Face position method for testing motion and device and vivo identification method and system
CN107346422A (en) * 2017-06-30 2017-11-14 成都大学 A kind of living body faces recognition methods based on blink detection
CN107886070A (en) * 2017-11-10 2018-04-06 北京小米移动软件有限公司 Verification method, device and the equipment of facial image
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Biopsy method, computer installation and computer-readable recording medium
CN108805047A (en) * 2018-05-25 2018-11-13 北京旷视科技有限公司 A kind of biopsy method, device, electronic equipment and computer-readable medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451510B (en) * 2016-05-30 2023-07-21 北京旷视科技有限公司 Living body detection method and living body detection system

Also Published As

Publication number Publication date
CN109766785A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN109766785B (en) Living body detection method and device for human face
EP3379458B1 (en) Facial verification method and apparatus
KR102359558B1 (en) Face verifying method and apparatus
KR102483642B1 (en) Method and apparatus for liveness test
CN105612533B (en) Living body detection method, living body detection system, and computer program product
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
US8515124B2 (en) Method and apparatus for determining fake image
EP2993614A1 (en) Method and apparatus for facial recognition
KR102655949B1 (en) Face verifying method and apparatus based on 3d image
US20160162673A1 (en) Technologies for learning body part geometry for use in biometric authentication
US20070122009A1 (en) Face recognition method and apparatus
CN105518710B (en) Video detecting method, video detection system and computer program product
CN111144293A (en) Human face identity authentication system with interactive living body detection and method thereof
CN109886080A (en) Human face in-vivo detection method, device, electronic equipment and readable storage medium storing program for executing
US12014578B2 (en) Authentication device, authentication method, and recording medium
US11682236B2 (en) Iris authentication device, iris authentication method and recording medium
US10360441B2 (en) Image processing method and apparatus
WO2017092573A1 (en) In-vivo detection method, apparatus and system based on eyeball tracking
WO2015181729A1 (en) Method of determining liveness for eye biometric authentication
US11048926B2 (en) Adaptive hand tracking and gesture recognition using face-shoulder feature coordinate transforms
CN110688878A (en) Living body identification detection method, living body identification detection device, living body identification detection medium, and electronic device
Robin et al. A novel approach to detect & track iris for a different and adverse dataset
KR102674496B1 (en) Method and Apparatus for Detecting Fake Faces Using Remote Photoplethysmography Signals
US20240071135A1 (en) Image processing device, image processing method, and program
Hakobyan et al. Human Identification Using Virtual 3D Imaging to Control Border Crossing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant