CN112926355A - Method and device for detecting living body based on human face - Google Patents

Method and device for detecting living body based on human face

Info

Publication number
CN112926355A
Authority
CN
China
Prior art keywords
face
image information
face image
living body
information
Prior art date
Legal status
Pending
Application number
CN201911234675.1A
Other languages
Chinese (zh)
Inventor
李禹源
胡文泽
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201911234675.1A
Publication of CN112926355A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the field of computer technology, and provides a method for performing living body detection based on a human face, which comprises the following steps: acquiring a face image information sequence corresponding to a living body detection instruction, wherein the face image information sequence comprises at least two pieces of face image information; extracting face feature angle information corresponding to each piece of face image information; and when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body. In this method, the user's action is not detected directly; instead, the living body detection result is obtained by judging the face feature angle information corresponding to each piece of face image information in the face image information sequence. Compared with action recognition, angle recognition is less difficult and faster to process, so performing living body recognition with the face feature angle information can avoid the low living body detection efficiency caused when an action cannot be detected during action recognition, thereby improving the user experience.

Description

Method and device for detecting living body based on human face
Technical Field
The application belongs to the technical field of computers, and particularly relates to a method and a device for living body detection based on human faces.
Background
Face recognition is a biometric technology for identity recognition based on facial feature information of a person. At present, face recognition systems are widely used in various fields such as public security, electronic commerce, finance and social services. During face recognition, malicious actors may attempt illegal criminal activities by using stolen photos or videos, or even counterfeit face molds; in order to prevent such actors from forging other people's identities for illegal criminal activities, face recognition needs to use living body detection to resist illegal attacks.
In the prior art, interactive living body detection can determine whether a living body is present by having the user cooperatively complete actions such as opening the mouth or blinking. In current interactive living body detection schemes, the user's action is usually detected directly by performing action recognition on a face video or a face image sequence through a neural network. Because action recognition is difficult, the detection time may be too long, and the action may fail to be detected during recognition, resulting in low living body detection efficiency.
Disclosure of Invention
The embodiment of the application provides a method and a device for detecting a living body based on a human face, which can solve the problems of overlong living body detection time and low living body detection efficiency in the prior art.
In a first aspect, an embodiment of the present application provides a method for performing living body detection based on a human face, including:
acquiring a face image information sequence corresponding to a living body detection instruction, wherein the face image information sequence comprises at least two pieces of face image information;
extracting face feature angle information corresponding to each piece of face image information;
and when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, judging that the detection result is a living body.
Further, the living body detection instruction comprises a head-turning instruction;
the extracting of the face feature angle information corresponding to each piece of the face image information includes:
recognizing the face image information based on a preset face pose model to obtain pose angle information corresponding to each piece of face image information;
when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body, including:
determining a first head action corresponding to the face image information sequence according to the pose angle information;
and if the first head action is consistent with a second head action corresponding to the living body detection instruction, judging that the detection result is the living body.
Further, the living body detection instruction comprises a blinking instruction;
the extracting of the face feature angle information corresponding to each piece of the face image information includes:
extracting face eye key points corresponding to each piece of face image information based on a first key point extraction model;
determining eye included angle information of each piece of face image information based on the position information of the key points of the eyes of the face;
when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body, including:
and if the eye included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset eye included angle threshold value and the eye included angle information of at least one piece of face image information is larger than or equal to the preset eye included angle threshold value, judging that the detection result is a living body.
Further, the extracting, based on the first keypoint extraction model, the face eye keypoints corresponding to each piece of the face image information includes:
and when the target face in the face image information is detected to be the front face, extracting the face eye key points corresponding to each piece of face image information based on a first key point extraction model.
Further, the living body detection instruction comprises a mouth opening instruction;
the extracting of the face feature angle information corresponding to each piece of the face image information includes:
extracting key points of the mouth of the face corresponding to each piece of face image information based on a second key point extraction model;
determining the mouth included angle information of each piece of face image information based on the position information of the face mouth key points;
when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body, including:
and if the mouth included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset mouth included angle threshold value, and the mouth included angle information of at least one piece of face image information is larger than or equal to the preset mouth included angle threshold value, judging that the detection result is the living body.
Further, the extracting, based on the second keypoint extraction model, the keypoint of the mouth of the face corresponding to each piece of the face image information includes:
and when the target face in the face image information is detected to be the front face, extracting the key points of the mouth of the face corresponding to each piece of the face image information based on a second key point extraction model.
Further, the acquiring of the face image information sequence corresponding to the living body detection instruction includes:
when a living body detection instruction is triggered, acquiring video information to be detected;
and extracting target video frames from the video information to be detected based on a preset extraction strategy, and extracting face image information in each target video frame to obtain a face image information sequence.
In a second aspect, an embodiment of the present application provides an apparatus for performing living body detection based on a human face, including:
the system comprises an acquisition unit, a judgment unit and a display unit, wherein the acquisition unit is used for acquiring a face image information sequence corresponding to a living body detection instruction, and the face image information sequence comprises at least two pieces of face image information;
the extraction unit is used for extracting the face feature angle information corresponding to each piece of face image information;
and the judging unit is used for judging that the detection result is the living body when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction.
Further, the living body detection instruction comprises a head-turning instruction;
the extraction unit is specifically configured to:
recognizing the face image information based on a preset face pose model to obtain pose angle information corresponding to each piece of face image information;
the determination unit is specifically configured to:
determining a first head action corresponding to the face image information sequence according to the pose angle information;
and if the first head action is consistent with a second head action corresponding to the living body detection instruction, judging that the detection result is the living body.
Further, the living body detection instruction comprises a blinking instruction;
the extraction unit includes:
the first processing unit is used for extracting the face eye key points corresponding to each piece of face image information based on a first key point extraction model;
the first determining unit is used for determining the eye included angle information of each piece of face image information based on the position information of the key points of the face eyes;
the determination unit is specifically configured to:
and if the eye included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset eye included angle threshold value and the eye included angle information of at least one piece of face image information is larger than or equal to the preset eye included angle threshold value, judging that the detection result is a living body.
Further, the first processing unit is specifically configured to:
and when the target face in the face image information is detected to be the front face, extracting the face eye key points corresponding to each piece of face image information based on a first key point extraction model.
Further, the living body detection instruction comprises a mouth opening instruction;
the extraction unit includes:
the second processing unit is used for extracting key points of the mouth of the human face corresponding to each piece of human face image information based on a second key point extraction model;
a second determining unit, configured to determine the mouth included angle information of each piece of face image information based on the position information of the face mouth key points;
the determination unit is specifically configured to:
and if the mouth included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset mouth included angle threshold value, and the mouth included angle information of at least one piece of face image information is larger than or equal to the preset mouth included angle threshold value, judging that the detection result is the living body.
Further, the second processing unit is specifically configured to:
and when the target face in the face image information is detected to be the front face, extracting the key points of the mouth of the face corresponding to each piece of the face image information based on a second key point extraction model.
Further, the obtaining unit is specifically configured to:
when a living body detection instruction is triggered, acquiring video information to be detected;
and extracting target video frames from the video information to be detected based on a preset extraction strategy, and extracting face image information in each target video frame to obtain a face image information sequence.
In a third aspect, an embodiment of the present application provides an apparatus for face-based liveness detection, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the method for face-based liveness detection according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method for detecting a living body based on a human face according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method for living body detection based on human face as described in the first aspect.
It can be understood that, for the beneficial effects of the second aspect to the fifth aspect, reference may be made to the related description of the first aspect, which is not repeated here.
In the embodiment of the application, a face image information sequence corresponding to a living body detection instruction is obtained, wherein the face image information sequence comprises at least two pieces of face image information; face feature angle information corresponding to each piece of face image information is extracted; and when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, the detection result is determined to be a living body. In this method, the user's action is not detected directly; instead, the living body detection result is obtained by judging the face feature angle information corresponding to each piece of face image information in the face image information sequence. Compared with action recognition, angle recognition is less difficult and faster to process, so performing living body recognition with the face feature angle information can avoid the low living body detection efficiency caused when an action cannot be detected during action recognition, thereby improving the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a method for living body detection based on human faces according to a first embodiment of the present application;
fig. 2 is a schematic flowchart of a refinement of S101 in a method for performing living body detection based on a human face according to a first embodiment of the present application;
FIG. 3 is a schematic flow chart of another method for face-based live detection according to a second embodiment of the present application;
FIG. 4 is a schematic flow chart of another method for face-based live detection according to a third embodiment of the present application;
FIG. 5 is a schematic diagram of face key points in another method for living body detection based on a human face according to the third embodiment of the present application;
FIG. 6 is a schematic diagram of eye key points in another method for living body detection based on a human face according to the third embodiment of the present application;
FIG. 7 is a schematic flow chart of another method for face-based live detection according to a fourth embodiment of the present application;
fig. 8 is a schematic diagram of key points of a mouth in another method for live detection based on human faces according to a fourth embodiment of the present application;
fig. 9 is a schematic diagram of an apparatus for face-based live body detection according to a fifth embodiment of the present application;
fig. 10 is a schematic diagram of an apparatus for face-based live body detection according to a sixth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
An execution subject of the method for performing living body detection based on a human face provided in the embodiment of the present application is a device having a function of performing living body detection based on a human face, for example, a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, and other terminal devices.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for performing living body detection based on a human face according to a first embodiment of the present application. The method for detecting the living body based on the human face as shown in FIG. 1 can comprise the following steps:
S101: Acquire a face image information sequence corresponding to the living body detection instruction, where the face image information sequence comprises at least two pieces of face image information.
When living body detection is needed, the user can trigger a living body detection instruction by operating an interactive interface of the device, or by a voice signal; alternatively, the living body detection instruction is triggered when the device detects that the user's face has stayed in a specified area for longer than a preset stay time. The living body detection instruction may include, but is not limited to, a head-turning instruction, a blinking instruction, a mouth opening instruction, and the like.
When the device detects a living body detection instruction, it can capture, through a built-in image acquisition device (such as a camera), a face video of the subject in front of the image acquisition device, and generate the face image information sequence from that video; alternatively, the device acquires the face image information sequence corresponding to a file identifier contained in the living body detection instruction. The face image information sequence comprises at least two pieces of face image information, which may be acquired directly by the image acquisition device or extracted from the face video. It can be understood that, since living body detection should be performed on the same object, the face image information in the face image information sequence should correspond to the same face.
Further, in order to obtain a higher quality face image information sequence as a detection sample, thereby improving the accuracy of the living body detection, S101 may include S1011 to S1012, as shown in fig. 2, where S1011 to S1012 are specifically as follows:
S1011: When a living body detection instruction is triggered, collect video information to be detected.
For the triggering of the living body detection instruction, reference may be made to the related description in S101, which is not repeated here. When the device triggers the living body detection instruction, it collects video information to be detected, i.e., the video information corresponding to the living body detection instruction. For example, when the living body detection instruction is a head-turning instruction, the video information to be detected is video information of the user's head-turning action; when the living body detection instruction comprises a head-turning instruction and a blinking instruction, the video information to be detected is video information of the user's head-turning and blinking actions.
S1012: Extract target video frames from the video information to be detected based on a preset extraction strategy, and extract the face image information in each target video frame to obtain a face image information sequence.
The device pre-stores a preset extraction strategy, which is used to extract target video frames from the video information to be detected; the device extracts the target video frames based on this strategy. For example, the preset extraction strategy may be to take one frame as a target video frame from every five frames of the video information to be detected. Sampling in this way makes the extracted target video frames more evenly distributed, so that a higher-quality face image information sequence can be obtained as the detection sample, improving the accuracy of the living body detection. After the device obtains the target video frames, it extracts the face image information in each target video frame to obtain the face image information sequence.
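By way of illustration only, the following is a minimal sketch of such an extraction strategy, assuming OpenCV is available for video decoding; the application does not fix a particular library or face detector, so crop_face below is a hypothetical stand-in for whatever face extraction step is used.

```python
import cv2  # assumed video-decoding dependency, not mandated by this application


def crop_face(frame):
    # Hypothetical stand-in for a face detector that returns the face
    # region of a frame; here it simply returns the whole frame.
    return frame


def extract_face_image_sequence(video_path, step=5):
    """Take one target video frame from every `step` frames, then extract
    the face image information of each target frame."""
    capture = cv2.VideoCapture(video_path)
    faces = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:  # preset extraction strategy: one frame per `step`
            faces.append(crop_face(frame))
        index += 1
    capture.release()
    return faces  # the face image information sequence
```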
S102: Extract the face feature angle information corresponding to each piece of face image information.
The device extracts the face feature angle information corresponding to each piece of face image information. The face feature angle information corresponding to a piece of face image information is used to identify the behavior of the face, and may include one or more kinds of angle information such as eye included angle information, mouth included angle information and pose angle information, which is not limited here. For example, the device may obtain the eye included angle information by acquiring the face feature points corresponding to each piece of face image information and connecting the face feature points to form face feature angles; or the device may input the face image information into a preset face feature angle information extraction model for processing to obtain the pose angle information, which is not limited here.
S103: When the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determine that the detection result is a living body.
The device presets angle verification conditions corresponding to different living body detection instructions; an angle verification condition is used to judge the living body detection result. When the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, the detection result is determined to be a living body; when it does not, the detection result is determined to be a non-living body, i.e., a possible malicious attack. For example, when the living body detection instruction is a left-turn instruction, the corresponding angle verification condition may be that the head of the object to be detected is judged to have turned left when the face feature angle information is greater than 30 degrees. If the device obtains face feature angle information of 45 degrees for the object to be detected, it judges whether this is greater than 30 degrees; since it is, the head of the object to be detected is judged to have performed a left turn, and the detection result is determined to be a living body.
In the embodiment of the application, a face image information sequence corresponding to a living body detection instruction is obtained, wherein the face image information sequence comprises at least two pieces of face image information; face feature angle information corresponding to each piece of face image information is extracted; and when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, the detection result is determined to be a living body. In this method, the user's action is not detected directly; instead, the living body detection result is obtained by judging the face feature angle information corresponding to each piece of face image information in the face image information sequence. Compared with action recognition, angle recognition is less difficult and faster to process, so performing living body recognition with the face feature angle information can avoid the low living body detection efficiency caused when an action cannot be detected during action recognition, thereby improving the user experience.
Referring to fig. 3, fig. 3 is a schematic flowchart of another method for performing living body detection based on human faces according to a second embodiment of the present application. When the living body detection instruction includes a head-turning instruction, in order to perform the living body detection accurately and efficiently, this embodiment refines S102 in the first embodiment into S202, and refines S103 in the first embodiment into S203 to S204. S201 in this embodiment is the same as S101 in the first embodiment. As shown in fig. 3, S202 to S204 are specifically as follows:
S202: Recognize the face image information based on a preset face pose model to obtain the pose angle information corresponding to each piece of face image information.
The device pre-stores a pre-trained preset face pose model. The preset face pose model is obtained by training a plurality of training samples in a sample training set with a machine learning algorithm; each training sample comprises sample face image information and a corresponding pose angle label, and the pose angle label is used to identify the pose angle information corresponding to the sample face image information.
The input of the preset face pose model is the sample face image information in a training sample and the pose angle label corresponding to the sample face image information, and the output of the preset face pose model is the pose angle information corresponding to the sample face image information.
It can be understood that the preset face pose model may be trained in advance by the device, or a file corresponding to the preset face pose model may be trained in advance by another device and then migrated to the device. Specifically, when the other device finishes training the deep learning network, the model parameters of the deep learning network are frozen, and the preset face pose model file corresponding to the frozen deep learning network is migrated to the device.
The device inputs the face image information into the preset face pose model, which recognizes the face image information to obtain the pose angle information corresponding to each piece of face image information. The pose angle information may include pitch angle (pitch) information, yaw angle (yaw) information, roll angle (roll) information, and the like.
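A hedged sketch of this recognition step follows; since the application does not fix a particular network architecture, the pose model is treated as an opaque callable returning (pitch, yaw, roll) in degrees, and the name pose_model is an assumption for illustration.

```python
from typing import Callable, List, Tuple

import numpy as np

# Assumed interface: a face image in, (pitch, yaw, roll) in degrees out.
PoseModel = Callable[[np.ndarray], Tuple[float, float, float]]


def extract_pose_angles(face_images: List[np.ndarray],
                        pose_model: PoseModel) -> List[Tuple[float, float, float]]:
    """Recognize each piece of face image information with the preset face
    pose model to obtain its pose angle information."""
    return [pose_model(image) for image in face_images]
```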
S203: Determine a first head action corresponding to the face image information sequence according to the pose angle information.
The device determines the first head action corresponding to the face image information sequence according to the pose angle information. Specifically, when the pose angle information includes the pitch angle (pitch), the yaw angle (yaw) and the roll angle (roll): the pitch angle is obtained by rotation around the X axis, so whether the head has moved up or down can be determined from the pitch angle; the yaw angle is obtained by rotation around the Y axis, so whether the head has turned left or right can be determined from the yaw angle; and the roll angle is obtained by rotation around the Z axis, so whether the head has performed a head-tilting action can be determined from the roll angle.
Taking a left-turn living body detection instruction as an example, the device acquires the pose angle information of each piece of face image information, that is, the yaw angle of the head in each piece of face image information. When the yaw angle of at least one piece of face image information in the face image information sequence is smaller than a preset yaw angle threshold and the yaw angle of at least one piece of face image information is greater than or equal to the preset yaw angle threshold, the first head action of the head of the detection object can be judged to be a left-turn action; in any other case, the first head action can be judged to be an action in which no left turn occurs.
Further, in the above example, the yaw angles of the head in the acquired face image information may be sorted according to the temporal order of the corresponding face image information in the face image information sequence, and the head of the detection object can be judged to have made a left turn when the yaw angle exhibits a preset fluctuation over time. Specifically, suppose the face image information sequence includes first, second, third, fourth and fifth face image information, corresponding respectively to a first, second, third, fourth and fifth yaw angle. The preset fluctuation may be that the first yaw angle is less than 30 degrees while the second, third, fourth and fifth yaw angles are all greater than or equal to 30 degrees; when the five yaw angles satisfy this preset fluctuation, it can be determined that the head of the detection object has made a left turn.
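A minimal sketch of this left-turn judgment under the example values above follows; the 30-degree threshold is the illustrative figure from the text, not a fixed parameter of the application.

```python
def is_left_turn(yaw_angles, threshold_deg=30.0):
    """yaw_angles: yaw of the head per face image, in temporal order.
    The preset fluctuation holds when at least one yaw is below the
    threshold and at least one is at or above it."""
    below = any(yaw < threshold_deg for yaw in yaw_angles)
    at_or_above = any(yaw >= threshold_deg for yaw in yaw_angles)
    return below and at_or_above


# Example: a head going from frontal to turned left.
# is_left_turn([5.0, 12.0, 34.0, 41.0, 38.0]) -> True
```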
S204: If the first head action is consistent with the second head action corresponding to the living body detection instruction, determine that the detection result is a living body.
The device acquires the second head action corresponding to the living body detection instruction; when the living body detection instruction is a left-turn instruction, the second head action is a left turn of the head. The device judges whether the first head action is consistent with the second head action. If they are consistent, the object to be detected is judged to have performed the second head action corresponding to the living body detection instruction, and the detection result is determined to be a living body; otherwise, the object to be detected can be judged to be a non-living body, and the device can warn of a possible malicious attack.
Referring to fig. 4, fig. 4 is a schematic flowchart of another method for performing living body detection based on human faces according to a third embodiment of the present application. When the living body detection instruction includes a blinking instruction, in order to perform the living body detection accurately and efficiently, this embodiment refines S102 in the first embodiment into S302 to S303, and refines S103 in the first embodiment into S304. S301 in this embodiment is the same as S101 in the first embodiment. As shown in fig. 4, S302 to S304 are specifically as follows:
S302: Extract the face eye key points corresponding to each piece of face image information based on a first key point extraction model.
The device extracts the face eye key points corresponding to each piece of face image information based on the first key point extraction model.
In one embodiment, the device may directly extract the face-eye key points based on the first key point extraction model and the face image information. The equipment inputs the face image information into the first key point extraction model to determine the face eye key points corresponding to each piece of face image information, and the face eye key points can be accurately obtained through the first key point extraction model.
In another embodiment, the first keypoint extraction model may also extract all facial keypoints, including eye keypoints. The equipment obtains face key points in the face image information through the first key point extraction model, wherein the face key points comprise eye key points.
In another embodiment, the first key point extraction model comprises a face key point detection model and an eye key point detection model. The face image information is input into the face key point detection model to extract face key points; the eye key points are preliminarily determined from the face key points, and the eye region of the face is then located to obtain an eye image. The eye image is input into the eye key point detection model for fine positioning of the eye key points, yielding the face eye key points corresponding to the face image information. The device obtains the face key points in the face image information through the face key point detection model; the face key points are points that identify a face, commonly 68 in number. As shown in fig. 5, the points distributed over the nose, eyes, mouth and face contour in the figure are the face key points.
The device pre-stores a pre-trained first key point extraction model. The first key point extraction model is obtained by training a plurality of training samples in a sample training set with a machine learning algorithm; each training sample comprises sample information and an eye key point label corresponding to the sample information, and the eye key point label is used to identify the eye key points corresponding to the sample information. The input of the first key point extraction model is the sample information in the training sample and its corresponding eye key point label, and the output of the first key point extraction model is the eye key points corresponding to the sample information.
It can be understood that the first key point extraction model may be trained in advance by the device, or a file corresponding to the first key point extraction model may be trained in advance by another device and then migrated to the device. Specifically, when the other device finishes training the deep learning network, the model parameters of the deep learning network are frozen, and the first key point extraction model file corresponding to the frozen deep learning network is migrated to the device.
Further, in order to accurately acquire the position information of the key points of the eyes of the human face, thereby obtaining an accurate result of the living body detection, S302 may include: and when the target face in the face image information is detected to be the front face, extracting the face eye key points corresponding to each piece of face image information based on a first key point extraction model.
When the user to be detected acts according to the living body detection instruction, side-face images may appear in the face image information. When the target face in a piece of face image information is a side face, the position information of the face key points may not be accurately extracted, so the position information of the face eye key points corresponding to that face image information cannot be accurately obtained, which would affect the living body detection result. Therefore, in order to extract the position information of the face key points completely, whether the target face in the face image information is a frontal face can be detected before extracting the face eye key points. During this detection, whether the target face is frontal can be judged from the pose angle information of the face in the face image information; for example, a pose angle threshold can be preset, and when the pose angle information is smaller than the preset pose angle threshold, the face in the face image information is judged to be frontal, and the position information of the face key points can be completely extracted. For how the pose angle information is obtained, refer to the related description of S202, which is not repeated here.
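The frontal-face gate might be sketched as follows, assuming the pose angle information from S202 is available; the 15-degree threshold is an illustrative assumption, since the application does not specify the preset pose angle threshold.

```python
def is_frontal(pitch, yaw, roll, threshold_deg=15.0):
    """Judge the target face to be frontal when every pose angle is below
    the preset pose angle threshold (15 degrees is an assumed value)."""
    return all(abs(angle) < threshold_deg for angle in (pitch, yaw, roll))
```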
When the target face in the face image information is detected to be the front face, extracting the face eye key points corresponding to each piece of face image information based on a first key point extraction model. For this part, reference may be made to the related description of S302, which is not described herein again.
S303: Determine the eye included angle information of each piece of face image information based on the position information of the face eye key points.
The device determines the eye included angle information of each piece of face image information based on the position information of the face eye key points. When a human eye blinks, it goes through a process from open to closed, and an open eye forms an angle. The eye included angle information of a piece of face image information identifies the opening angle of the eye in that face image information, so the eye included angle information can accurately identify whether the eye state is open or closed. For example, when the eye included angle information is greater than or equal to 5 degrees, the eye may be judged to be in an open state, and when the eye included angle information is less than 5 degrees, the eye may be judged to be in a closed state.
In this embodiment, the eye included angle information of each piece of face image information is determined based on the position information of the eye key points. For example, as shown in fig. 6, points 37 to 42 in the figure are the face eye key points. As shown in the figure, the acute angle formed by three points, namely the midpoint between point 38 and point 42, point 39, and point 41, is the eye included angle, and the eye included angle information can be obtained by calculating this angle. Specifically, the following formulas can be used to calculate the eye included angle information:
P_left = (P38 + P42) / 2

angle = arccos( ((P39 - P_left) · (P41 - P_left)) / (|P39 - P_left| · |P41 - P_left|) )

where P_left denotes the position information of the midpoint between point 38 (P38) and point 42 (P42), and angle denotes the eye included angle information.
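A sketch of this computation, assuming the 68-point landmark scheme of fig. 6 and 2-D (x, y) coordinates for each key point:

```python
import numpy as np


def eye_included_angle(p38, p39, p41, p42):
    """Angle (degrees) at the midpoint of points 38 and 42, spanned by the
    rays toward points 39 and 41; each argument is an (x, y) pair."""
    p_left = (np.asarray(p38, dtype=float) + np.asarray(p42, dtype=float)) / 2.0
    v1 = np.asarray(p39, dtype=float) - p_left
    v2 = np.asarray(p41, dtype=float) - p_left
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point drift outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```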
It is understood that the apparatus in this embodiment may determine the living body detection result by determining the eye state of a single eye, and when it is detected that a blinking motion is performed by the single eye, determine the detection result as a living body; alternatively, the apparatus may determine the living body detection result by determining eye states of both eyes, and determine that the detection result is a living body when it is detected that both eyes blink.
S304: If the eye included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset eye included angle threshold, and the eye included angle information of at least one piece of face image information is greater than or equal to the preset eye included angle threshold, determine that the detection result is a living body.
The device pre-stores an eye included angle threshold; the preset eye included angle threshold is used to judge whether the eye state is open or closed. The device compares the eye included angle information of the face image information in the face image information sequence against the preset eye included angle threshold. When the eye included angle information of at least one piece of face image information in the sequence is smaller than the preset eye included angle threshold, and the eye included angle information of at least one piece of face image information is greater than or equal to the preset eye included angle threshold, this indicates that the object to be detected has performed one movement from eye open to eye closed, which conforms to the blinking action corresponding to the living body detection instruction, and the detection result is determined to be a living body.
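The blink judgment over the whole sequence can then be sketched as follows, with the 5-degree threshold taken from the example above:

```python
def is_blink(eye_angles, threshold_deg=5.0):
    """eye_angles: eye included angle per face image, in temporal order.
    A blink requires at least one closed-eye frame (angle below the
    threshold) and at least one open-eye frame (angle at or above it)."""
    closed = any(angle < threshold_deg for angle in eye_angles)
    opened = any(angle >= threshold_deg for angle in eye_angles)
    return closed and opened
```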
Referring to fig. 7, fig. 7 is a schematic flowchart of another method for performing living body detection based on human faces according to a fourth embodiment of the present application. When the living body detection instruction includes a mouth opening instruction, in order to perform the living body detection accurately and efficiently, this embodiment refines S102 in the first embodiment into S402 to S403, and refines S103 in the first embodiment into S404. S401 in this embodiment is the same as S101 in the first embodiment. As shown in fig. 7, S402 to S404 are specifically as follows:
S402: Extract the face mouth key points corresponding to each piece of face image information based on a second key point extraction model.
The device extracts the face mouth key points corresponding to each piece of face image information based on the second key point extraction model. The first key point extraction model may or may not be identical to the second key point extraction model.
In one embodiment, the device may directly extract the key points of the face mouth based on the second key point extraction model and the face image information. The equipment inputs the face image information into a second key point extraction model to determine the key points of the mouth of the face corresponding to each piece of face image information, and the key points of the mouth of the face can be accurately obtained through the second key point extraction model.
In another embodiment, the second keypoint extraction model may also extract all facial keypoints, including the mouth keypoints. And the equipment acquires face key points in the face image information through the second key point extraction model, wherein the face key points comprise key points of the mouth.
In another embodiment, the second key point extraction model includes a face key point detection model and a mouth key point detection model. The face image information is input into the face key point detection model to extract face key points; the mouth key points are preliminarily determined from the face key points, and the mouth region of the face is then located to obtain a mouth image. The mouth image is input into the mouth key point detection model for fine positioning of the mouth key points, yielding the face mouth key points corresponding to the face image information. For how the device acquires the position information of the face key points in the face image information, refer to the related description in S302, which is not repeated here.
The device pre-stores a pre-trained second key point extraction model. The second key point extraction model is obtained by training a plurality of training samples in a sample training set with a machine learning algorithm; each training sample comprises sample information and a mouth key point label corresponding to the sample information, and the mouth key point label is used to identify the mouth key points corresponding to the sample information. The input of the second key point extraction model is the sample information in the training sample and its corresponding mouth key point label, and the output of the second key point extraction model is the mouth key points corresponding to the sample information.
It can be understood that the second key point extraction model may be trained in advance by the device, or a file corresponding to the second key point extraction model may be trained in advance by another device and then migrated to the device. Specifically, when the other device finishes training the deep learning network, the model parameters of the deep learning network are frozen, and the second key point extraction model file corresponding to the frozen deep learning network is migrated to the device.
Further, in order to accurately acquire the position information of the face mouth key points and thereby obtain an accurate living body detection result, S402 may include: when the target face in the face image information is detected to be a frontal face, extracting the face mouth key points corresponding to each piece of face image information based on the second key point extraction model. The purpose and method of determining that the target face is a frontal face in this embodiment are the same as in the detailed embodiment of S302 in the third embodiment, and reference may be made to the related description of that embodiment. For the method and details of extracting the face mouth key points corresponding to each piece of face image information based on the second key point extraction model, refer to the related description earlier in S402.
S403: Determine the mouth included angle information of each piece of face image information based on the position information of the face mouth key points.
The mouth included angle information of each piece of face image information is determined based on the position information of the face mouth key points. When a human mouth is open, it forms an angle; the mouth included angle information of a piece of face image information identifies the opening angle of the mouth in that face image information, so the mouth included angle information can accurately identify whether the mouth state is open or closed. For example, when the mouth included angle information is greater than or equal to 8 degrees, the mouth may be judged to be open, and when the mouth included angle information is less than 5 degrees, the mouth may be judged to be closed.
In this embodiment, the mouth included angle information of each piece of face image information is determined based on the position information of the mouth key points. As shown in fig. 8, points 51 to 68 in the figure are the face mouth key points. As shown in the figure, the acute angle formed by three points, namely the midpoint between point 62 and point 68, point 63, and point 67, is the mouth included angle, and the mouth included angle information can be obtained by calculating this angle. Specifically, the following formulas can be used to calculate the mouth included angle information:

P_left = (P62 + P68) / 2

angle = arccos( ((P63 - P_left) · (P67 - P_left)) / (|P63 - P_left| · |P67 - P_left|) )

where P_left denotes the position information of the midpoint between point 62 (P62) and point 68 (P68), and angle denotes the mouth included angle information.
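The mouth included angle can be computed with the same vector construction as the eye included angle above; a sketch assuming 2-D (x, y) coordinates for each key point:

```python
import numpy as np


def mouth_included_angle(p62, p63, p67, p68):
    """Angle (degrees) at the midpoint of points 62 and 68, spanned by the
    rays toward points 63 and 67; each argument is an (x, y) pair."""
    p_left = (np.asarray(p62, dtype=float) + np.asarray(p68, dtype=float)) / 2.0
    v1 = np.asarray(p63, dtype=float) - p_left
    v2 = np.asarray(p67, dtype=float) - p_left
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```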
S404: If the mouth included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset mouth included angle threshold, and the mouth included angle information of at least one piece of face image information is greater than or equal to the preset mouth included angle threshold, determine that the detection result is a living body.
The device pre-stores a mouth included angle threshold; the preset mouth included angle threshold is used to judge whether the mouth state is open or closed. The device compares the mouth included angle information of the face image information in the face image information sequence against the preset mouth included angle threshold. When the mouth included angle information of at least one piece of face image information in the sequence is smaller than the preset mouth included angle threshold, and the mouth included angle information of at least one piece of face image information is greater than or equal to the preset mouth included angle threshold, this indicates that the object to be detected has performed a mouth-opening action, which conforms to the mouth opening action corresponding to the living body detection instruction, and the detection result is determined to be a living body.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 9, fig. 9 is a schematic diagram of an apparatus for performing living body detection based on a human face according to a fifth embodiment of the present application. The apparatus includes units for executing the steps in the embodiments corresponding to fig. 1 to fig. 4 and fig. 7; please refer to the related descriptions of the embodiments corresponding to fig. 1 to fig. 4 and fig. 7. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 9, the apparatus 9 for face-based living body detection includes:
an obtaining unit 910, configured to obtain a face image information sequence corresponding to a living body detection instruction, where the face image information sequence includes at least two pieces of face image information;
an extracting unit 920, configured to extract face feature angle information corresponding to each piece of face image information;
a determining unit 930, configured to determine that the detection result is a living body when the face feature angle information satisfies an angle verification condition corresponding to the living body detection instruction.
Further, the liveness detection instruction comprises a turn instruction;
the extracting unit 920 is specifically configured to:
recognizing the face image information based on a preset face posture model to obtain posture angle information corresponding to each piece of face image information;
the determination unit 930 is specifically configured to:
determining a first head action corresponding to the face image information sequence according to the attitude angle information;
and if the first head action is consistent with a second head action corresponding to the living body detection instruction, judging that the detection result is the living body.
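To make the turn-instruction branch concrete, a minimal sketch follows; it assumes the preset face pose model returns a yaw angle in degrees for each piece of face image information, and both the 20-degree threshold and the sign convention (positive yaw meaning a turn to the left) are assumptions for illustration, not values fixed by the application.

```python
def first_head_action(yaw_angles, turn_threshold=20.0):
    """Determine the first head action shown by per-frame yaw angles,
    relative to the pose in the first frame of the sequence."""
    baseline = yaw_angles[0]
    for yaw in yaw_angles[1:]:
        if yaw - baseline >= turn_threshold:
            return "turn_left"   # assumed sign convention
        if yaw - baseline <= -turn_threshold:
            return "turn_right"
    return None  # no sufficiently large turn observed

def is_live_turn(yaw_angles, commanded_action):
    # Living body only when the first head action is consistent with the
    # second head action corresponding to the living body detection instruction.
    return first_head_action(yaw_angles) == commanded_action
```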
Further, the liveness detection instruction comprises a blinking instruction;
the extracting unit 920 includes:
the first processing unit is used for extracting the key points of the human face and the eyes corresponding to the information of each human face image based on a first key point extraction model;
the first determining unit is used for determining the eye included angle information of each piece of face image information based on the position information of the key points of the face eyes;
the determination unit 930 is specifically configured to:
and if the eye included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset eye included angle threshold value and the eye included angle information of at least one piece of face image information is larger than or equal to the preset eye included angle threshold value, judging that the detection result is a living body.
Further, the first processing unit is specifically configured to:
and when the target face in the face image information is detected to be the front face, extracting the face eye key points corresponding to each piece of face image information based on a first key point extraction model.
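The front-face gating can be sketched as follows; `pose_model` and `keypoint_model` are hypothetical callables standing in for the preset face pose model and the first (or second) key point extraction model, and the 15-degree limit on yaw and pitch is an assumed example of what counts as a front face.

```python
def extract_keypoints_if_frontal(image, pose_model, keypoint_model, limit=15.0):
    """Run key point extraction only when the target face is frontal."""
    yaw, pitch, roll = pose_model(image)  # assumed to return degrees
    if abs(yaw) <= limit and abs(pitch) <= limit:
        return keypoint_model(image)
    return None  # non-frontal frames contribute no key points
```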
Further, the living body detection instruction comprises a mouth opening instruction;
the extracting unit 920 includes:
the second processing unit is used for extracting key points of the mouth of the human face corresponding to each piece of human face image information based on a second key point extraction model;
the second determining unit is used for determining the mouth included angle information of each piece of face image information based on the position information of the key points of the face mouth;
the determination unit 930 is specifically configured to:
and if the mouth included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset mouth included angle threshold value, and the mouth included angle information of at least one piece of face image information is larger than or equal to the preset mouth included angle threshold value, judging that the detection result is the living body.
Further, the second processing unit is specifically configured to:
and when the target face in the face image information is detected to be the front face, extracting the key points of the mouth of the face corresponding to each piece of the face image information based on a second key point extraction model.
Further, the obtaining unit 910 is specifically configured to:
when a living body detection instruction is triggered, acquiring video information to be detected;
and extracting target video frames from the video information to be detected based on a preset extraction strategy, and extracting face image information in each target video frame to obtain a face image information sequence.
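As an illustration of the obtaining unit, the sketch below uses OpenCV to implement one possible preset extraction strategy, keeping every `step`-th frame up to `max_frames` frames; the fixed stride is an assumption, and detecting and cropping the face region from each target frame is omitted and would follow in a full pipeline.

```python
import cv2

def sample_target_frames(video_path, step=5, max_frames=10):
    """Extract target video frames at a fixed stride from the video
    information to be detected (one possible preset extraction strategy)."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:  # end of video or read failure
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```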
Fig. 10 is a schematic diagram of an apparatus for face-based living body detection according to a sixth embodiment of the present application. As shown in fig. 10, the apparatus 10 for face-based living body detection of this embodiment includes: a processor 100, a memory 101, and a computer program 102 stored in the memory 101 and executable on the processor 100, such as a program for living body detection based on a human face. The processor 100, when executing the computer program 102, implements the steps in each of the above-described embodiments of the method for face-based living body detection, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 100, when executing the computer program 102, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 910 to 930 shown in fig. 9.
Illustratively, the computer program 102 may be partitioned into one or more modules/units that are stored in the memory 101 and executed by the processor 100 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 102 in the apparatus 10 for face-based living body detection. For example, the computer program 102 may be divided into an acquisition unit, an extraction unit, and a determination unit, and each unit has the following specific functions:
the system comprises an acquisition unit, a judgment unit and a display unit, wherein the acquisition unit is used for acquiring a face image information sequence corresponding to a living body detection instruction, and the face image information sequence comprises at least two pieces of face image information;
the extraction unit is used for extracting the face characteristic angle information corresponding to each piece of face image information;
and the judging unit is used for judging that the detection result is the living body when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction.
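The division into the three units can be pictured with the following minimal sketch; `extract_angles` and `verify` are hypothetical callables standing in for the extraction unit's model and the determination unit's angle verification condition, chosen per the living body detection instruction.

```python
class FaceLivenessDetector:
    """Composition of the acquisition, extraction, and determination
    units described above (an illustrative sketch, not the
    application's implementation)."""

    def __init__(self, extract_angles, verify):
        self.extract_angles = extract_angles  # extraction unit
        self.verify = verify                  # determination unit condition

    def detect(self, face_image_sequence):
        # Acquisition unit: the sequence must comprise at least two
        # pieces of face image information.
        if len(face_image_sequence) < 2:
            return False
        angles = [self.extract_angles(img) for img in face_image_sequence]
        return self.verify(angles)
```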
The apparatus for detecting living body based on human face may include, but is not limited to, a processor 100 and a memory 101. Those skilled in the art will appreciate that fig. 10 is merely an example of a device 10 for face-based liveness detection and does not constitute a limitation of the device 10 for face-based liveness detection, and may include more or fewer components than those shown, or some components in combination, or different components, for example, the device for face-based liveness detection may further include an input-output device, a network access device, a bus, etc.
The Processor 100 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 101 may be an internal storage unit of the apparatus 10 for face-based living body detection, such as a hard disk or a memory of the apparatus 10. The memory 101 may also be an external storage device of the apparatus 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the apparatus 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the apparatus 10. The memory 101 is used to store the computer program and other programs and data required by the apparatus for face-based living body detection. The memory 101 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a network device, where the network device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, can implement the steps of the embodiments of the methods described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting living bodies based on human faces is characterized by comprising the following steps:
acquiring a face image information sequence corresponding to a living body detection instruction, wherein the face image information sequence comprises at least two pieces of face image information;
extracting face characteristic angle information corresponding to each piece of face image information;
and when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, judging that the detection result is a living body.
2. The method for human face-based live body detection according to claim 1, wherein the live body detection instruction comprises a turn instruction;
the extracting of the face feature angle information corresponding to each piece of the face image information includes:
recognizing the face image information based on a preset face posture model to obtain posture angle information corresponding to each piece of face image information;
when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body, including:
determining a first head action corresponding to the face image information sequence according to the attitude angle information;
and if the first head action is consistent with a second head action corresponding to the living body detection instruction, judging that the detection result is the living body.
3. The method for face-based liveness detection according to claim 1, wherein the liveness detection instruction comprises a blinking instruction;
the extracting of the face feature angle information corresponding to each piece of the face image information includes:
extracting face eye key points corresponding to each piece of face image information based on a first key point extraction model;
determining eye included angle information of each piece of face image information based on the position information of the key points of the eyes of the face;
when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body, including:
and if the eye included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset eye included angle threshold value and the eye included angle information of at least one piece of face image information is larger than or equal to the preset eye included angle threshold value, judging that the detection result is a living body.
4. The method for human face-based live body detection according to claim 1, wherein the live body detection instruction comprises a mouth opening instruction;
the extracting of the face feature angle information corresponding to each piece of the face image information includes:
extracting key points of the mouth of the face corresponding to each piece of face image information based on a second key point extraction model;
determining mouth included angle information of each face image information based on the position information of the key points of the face mouth;
when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body, including:
and if the mouth included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset mouth included angle threshold value, and the mouth included angle information of at least one piece of face image information is larger than or equal to the preset mouth included angle threshold value, judging that the detection result is the living body.
5. The method for live body detection based on human face according to any one of claims 1-4, wherein the acquiring of the human face image information sequence corresponding to the live body detection instruction comprises:
when a living body detection instruction is triggered, acquiring video information to be detected;
and extracting target video frames from the video information to be detected based on a preset extraction strategy, and extracting face image information in each target video frame to obtain a face image information sequence.
6. An apparatus for detecting a living body based on a human face, comprising:
the system comprises an acquisition unit, a judgment unit and a display unit, wherein the acquisition unit is used for acquiring a face image information sequence corresponding to a living body detection instruction, and the face image information sequence comprises at least two pieces of face image information;
the extraction unit is used for extracting the face characteristic angle information corresponding to each piece of face image information;
and the judging unit is used for judging that the detection result is the living body when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction.
7. The apparatus for human face-based live body detection according to claim 6, wherein the live body detection instruction comprises a turn instruction;
the extraction unit is specifically configured to:
recognizing the face image information based on a preset face posture model to obtain posture angle information corresponding to each piece of face image information;
the determination unit is specifically configured to:
determining a first head action corresponding to the face image information sequence according to the attitude angle information;
and if the first head action is consistent with a second head action corresponding to the living body detection instruction, judging that the detection result is the living body.
8. The apparatus for human face-based liveness detection of claim 6, wherein the liveness detection instruction comprises a blink instruction;
the extracting of the face feature angle information corresponding to each piece of the face image information includes:
extracting face eye key points corresponding to each piece of face image information based on a first key point extraction model;
determining eye included angle information of each piece of face image information based on the position information of the key points of the eyes of the face;
when the face feature angle information meets the angle verification condition corresponding to the living body detection instruction, determining that the detection result is a living body, including:
and if the eye included angle information of at least one piece of face image information in the face image information sequence is smaller than a preset eye included angle threshold value and the eye included angle information of at least one piece of face image information is larger than or equal to the preset eye included angle threshold value, judging that the detection result is a living body.
9. An apparatus for face-based liveness detection, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN201911234675.1A 2019-12-05 2019-12-05 Method and device for detecting living body based on human face Pending CN112926355A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911234675.1A CN112926355A (en) 2019-12-05 2019-12-05 Method and device for detecting living body based on human face

Publications (1)

Publication Number Publication Date
CN112926355A true CN112926355A (en) 2021-06-08

Family

ID=76161359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911234675.1A Pending CN112926355A (en) 2019-12-05 2019-12-05 Method and device for detecting living body based on human face

Country Status (1)

Country Link
CN (1) CN112926355A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512632A (en) * 2015-12-09 2016-04-20 北京旷视科技有限公司 In vivo detection method and device
CN106960177A (en) * 2015-02-15 2017-07-18 北京旷视科技有限公司 Living body faces verification method and system, living body faces checking device
CN107748876A (en) * 2017-11-06 2018-03-02 杭州有盾网络科技有限公司 Face vivo identification method, device and mobile terminal based on mobile terminal
US20180349682A1 (en) * 2017-05-31 2018-12-06 Facebook, Inc. Face liveness detection
CN109522798A (en) * 2018-10-16 2019-03-26 平安科技(深圳)有限公司 Video anticounterfeiting method, system, device based on vivo identification and can storage medium
CN110334637A (en) * 2019-06-28 2019-10-15 百度在线网络技术(北京)有限公司 Human face in-vivo detection method, device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination