CN111898529B - Face detection method and device, electronic equipment and computer readable medium - Google Patents

Face detection method and device, electronic equipment and computer readable medium

Info

Publication number
CN111898529B
CN111898529B (application CN202010746706.8A)
Authority
CN
China
Prior art keywords
image
target image
determining
detection
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010746706.8A
Other languages
Chinese (zh)
Other versions
CN111898529A (en)
Inventor
王旭
陈�胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010746706.8A priority Critical patent/CN111898529B/en
Publication of CN111898529A publication Critical patent/CN111898529A/en
Application granted granted Critical
Publication of CN111898529B publication Critical patent/CN111898529B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Abstract

Embodiments of the present disclosure disclose a face detection method and apparatus, an electronic device, and a computer-readable medium. One embodiment of the face detection method includes: determining, based on a pre-acquired image, whether the facial pose of a target organism in a target image is normal; in response to determining that the facial pose is normal, determining whether silent liveness detection needs to be performed on the target image; in response to determining that silent liveness detection needs to be performed on the target image, performing silent liveness detection on the target image to obtain a first detection score; and determining, based on the first detection score, whether the target image was obtained by a photographing device photographing the target organism. With this embodiment, by using the facial pose of the target organism in the target image together with silent liveness detection on the target image, whether the target image was obtained by a photographing device photographing the target organism can be determined accurately and conveniently, which in turn improves the security of face detection.

Description

Face detection method and device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a face detection method, an apparatus, an electronic device, and a computer-readable medium.
Background
To improve the face recognition experience and reduce the need for users to perform cooperative actions during liveness detection, a liveness detection technology known as silent liveness detection has been developed. Silent liveness detection does not require the target object to perform complicated facial actions; liveness verification can be carried out simply by capturing an image of the target object in real time. It can strictly verify and identify a face video that the target object plays back on a display, thereby preventing video replay attacks.
However, performing silent liveness detection on the target image alone suffers from low accuracy and poor security.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose face detection methods, apparatuses, devices and computer readable media to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a face detection method, the method including: determining, based on a pre-acquired image, whether the facial pose of a target organism in a target image is normal; in response to determining that the facial pose is normal, determining whether silent liveness detection needs to be performed on the target image; in response to determining that silent liveness detection needs to be performed on the target image, performing silent liveness detection on the target image to obtain a first detection score; and determining, based on the first detection score, whether the target image was obtained by a photographing device photographing the target organism.
In a second aspect, some embodiments of the present disclosure provide a face detection apparatus, the apparatus including: a first determination unit configured to determine, based on a pre-acquired image, whether the facial pose of a target organism in a target image is normal; a second determination unit configured to determine, in response to determining that the facial pose is normal, whether silent liveness detection needs to be performed on the target image; a detection unit configured to perform silent liveness detection on the target image to obtain a detection score in response to determining that silent liveness detection needs to be performed on the target image; and a third determination unit configured to determine, based on the detection score, whether the target image is an image obtained by a photographing device photographing the target organism.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method according to any one of the first aspects.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, where the program, when executed by a processor, performs the method as in any one of the first aspect.
One of the above embodiments of the present disclosure has the following beneficial effects. First, whether the facial pose of the target organism in the target image is normal is determined based on a pre-acquired image, which ensures that the facial pose in the target image to be subjected to silent liveness detection is normal. Then, in response to determining that the facial pose is normal, whether silent liveness detection should be performed on the target image is determined, and silent liveness detection is performed on the target image accordingly. The resulting first detection score is used to accurately determine whether the target image was obtained by a photographing device photographing the target organism. By combining the facial pose of the target organism in the target image with silent liveness detection on the target image, the face detection method can determine accurately and quickly whether the target image was obtained by a photographing device photographing the target organism, which indirectly improves the security of face detection for the target organism.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of the face detection method of some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a face detection method according to the present disclosure;
FIG. 3 is a flow diagram of further embodiments of a face detection method according to the present disclosure;
FIG. 4 is a schematic illustration of determining a second global value in accordance with some embodiments of the face detection method of the present disclosure;
FIG. 5 is a diagram of an application scenario when a face pose is abnormal, according to some embodiments of the face detection method of the present disclosure;
FIG. 6 is a schematic block diagram of some embodiments of a face detection apparatus according to the present disclosure;
FIG. 7 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram 100 of one application scenario of a face detection method according to some embodiments of the present disclosure.
As shown in fig. 1, the electronic device 101 determines, from a pre-acquired image 102, whether the facial pose of the target organism in the target image 103 is normal. Then, in response to determining that the facial pose is normal, the electronic device 101 may determine whether silent liveness detection needs to be performed on the target image 103, for example whether the target image 103 should be input into the silent liveness detection network 104. Next, in response to determining that silent liveness detection is required for the target image 103, silent liveness detection is performed on the target image 103 to obtain a first detection score 105; as an example, the first detection score 105 may be 0.3 or 0.9. Finally, whether the target image 103 was obtained by the photographing device photographing the target organism is determined from the first detection score 105. As an example, with the threshold set to 0.5, a first detection score 105 of 0.3 leads to the determination that the target image 103 was not obtained by the photographing device photographing the target organism, while a first detection score 105 of 0.9 leads to the determination that it was.
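The decision flow of this scenario can be summarized in a minimal sketch. The helper functions `check_face_pose` and `run_silent_liveness` and the 0.5 threshold are illustrative assumptions, not components defined by the disclosure.

```python
# Minimal sketch of the Fig. 1 decision flow; the helpers are hypothetical
# stand-ins for the pose check and the silent liveness detection network.
LIVENESS_THRESHOLD = 0.5  # example threshold from the scenario above


def detect(reference_image, target_image, check_face_pose, run_silent_liveness):
    # Step 1: compare the target image against the pre-acquired reference image.
    if not check_face_pose(reference_image, target_image):
        return "pose abnormal"
    # Steps 2-3: run silent liveness detection to obtain the first detection score.
    first_score = run_silent_liveness(target_image)
    # Step 4: decide whether the image was captured live by the photographing device.
    if first_score >= LIVENESS_THRESHOLD:
        return "captured by photographing device"   # e.g. a score of 0.9
    return "not captured by photographing device"   # e.g. a score of 0.3
```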
It should be noted that the face detection method may be executed by the electronic device 101. The electronic device 101 may be hardware or software. When the electronic device is hardware, the electronic device may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device. When the electronic device 101 is embodied as software, it may be implemented as multiple pieces of software or software modules, for example, to provide distributed services, or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of electronic devices in fig. 1 is merely illustrative. There may be any number of electronic devices, as desired for implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a face detection method according to the present disclosure is shown. The face detection method comprises the following steps:
step 201, based on the pre-acquired image, determining whether the face pose of the target organism in the target image is normal.
In some embodiments, an executing subject of the face detection method (for example, the electronic device 101 shown in fig. 1) may determine, from a pre-acquired image, whether the facial pose of the target organism in the target image is normal. The target image may be an image in which the pose of the target organism is to be determined. The facial pose of the target organism may be characterized by the inclination angle of each part of the face. As an example, based on the pre-acquired image, whether the facial pose of the target organism in the target image is normal may be determined by receiving manually input pose comparison result information.
In some optional implementations of some embodiments, the pre-acquired image is a biological image in which facial features of a biological body in the image satisfy a predetermined condition. The preset condition may include, but is not limited to, at least one of the following: the inclination of the face is less than a predetermined inclination degree, and the opening and closing degree of the mouth of the face is less than a predetermined size.
In these implementations, determining whether the facial pose of the target organism in the target image is normal based on the pre-acquired image may include the steps of:
First, the facial pose of the organism in the pre-acquired image is determined. As an example, key points of the pre-acquired image may first be extracted; the facial pose of the organism in that image may then be determined by comparing the key points against a table representing the correspondence between facial key points and facial poses. In practice, the pre-acquired image may be input into a pre-trained facial key point extraction network to obtain the facial key points, where the facial key point extraction network may be one of the following: a VGG (Visual Geometry Group) network or a deep residual network (ResNet).
Second, the similarity between the facial pose of the organism in the pre-acquired image and the facial pose of the target organism in the target image is determined. As an example, the executing subject may first identify the facial key points of the target organism in the target image. Then, the first coordinates corresponding to the key points of the pre-acquired image and the second coordinates corresponding to the key points of the target image can be determined by looking up the corresponding feature maps. Finally, the cosine value of the first coordinates and the second coordinates is computed as the similarity.
Third, in response to the similarity being smaller than a preset fourth threshold, it is determined that the facial pose of the target organism in the target image is normal.
It should be noted that, compared with determining whether the facial pose of the target organism in the target image is normal by receiving manually input pose comparison result information, computing the similarity and deciding from it is more accurate and effective; a minimal sketch of this check follows.
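The sketch below follows the three steps above. The key point arrays are assumed to come from a pre-trained extraction network (e.g. VGG or ResNet), and the fourth-threshold value is an illustrative assumption; the below-threshold convention follows the description.

```python
import numpy as np

FOURTH_THRESHOLD = 0.3  # illustrative value; the disclosure only calls it "preset"


def pose_is_normal(reference_keypoints, target_keypoints, threshold=FOURTH_THRESHOLD):
    """Key-point-based pose check.

    Both arguments are (N, 2) arrays of facial key point coordinates. Following
    the description above, the cosine value of the two coordinate vectors is
    taken as the similarity, and the pose is declared normal when it is smaller
    than the preset fourth threshold."""
    a = np.asarray(reference_keypoints, dtype=float).ravel()
    b = np.asarray(target_keypoints, dtype=float).ravel()
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity < threshold
```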
Step 202, in response to determining that the face pose is normal, determining whether silent live body detection needs to be performed on the target image.
In some embodiments, in response to determining that the facial pose is normal, the executing subject may determine whether silent liveness detection needs to be performed on the target image. Silent liveness detection can perform real-person liveness verification merely by requiring the user to capture an image in real time, without the user performing complicated facial actions; it can strictly verify and identify a face video that the user plays back on a display, thereby preventing video replay attacks. As an example, images in the target biological image set may first be selected at intervals of a predetermined number of frames, and the selected images may then be marked to obtain a marked image set, where a marked image is an image to be detected. In response to the target image being one of the selected images and the facial pose being determined to be normal, it is determined that silent liveness detection needs to be performed on the target image.
In some optional implementations of some embodiments, the determining whether the silent liveness detection on the target image is required in response to determining that the facial pose is normal may include:
First, in response to determining that the facial pose is normal, it is determined whether a calibrated image exists in a set of images acquired before the target image, where a calibrated image is an image that has passed silent liveness detection.
Second, in response to determining that a calibrated image is included in the image set, it is determined that silent liveness detection does not need to be performed on the target image.
Third, in response to determining that the image set does not include a calibrated image, it is determined that silent liveness detection needs to be performed on the target image.
It should be noted that once silent liveness detection has been performed on an image within the face recognition task for the target organism, no other recognition task is performed on that image, regardless of whether the silent liveness detection is passed.
Furthermore, in response to determining that the facial pose is normal, determining that silent liveness detection of the target image is not required because the image set already includes a calibrated image can greatly reduce the amount of computation and shorten the duration of the target organism recognition task.
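A minimal sketch of this check is shown below; `passed_liveness` is a hypothetical predicate standing in for whatever record marks an image as calibrated.

```python
def needs_silent_liveness(earlier_images, passed_liveness):
    """Assuming the facial pose has already been found normal, decide whether
    silent liveness detection must be run on the target image.

    `earlier_images` are the images acquired before the target image, and
    `passed_liveness(image)` reports whether an image is "calibrated", i.e. has
    already passed silent liveness detection; both are illustrative stand-ins."""
    has_calibrated = any(passed_liveness(img) for img in earlier_images)
    # Skipping detection when a calibrated image already exists reduces the
    # amount of computation, as noted above.
    return not has_calibrated
```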
Optionally, in response to determining that silent liveness detection is not required for the target image, other recognition operations are performed on the target image. The recognition operations may include, but are not limited to, at least one of the following: a blinking operation, a head-shaking operation, and a head-nodding operation.
Step 203, responding to the determination that the silent living body detection needs to be performed on the target image, performing the silent living body detection on the target image, and obtaining a first detection score.
In some embodiments, in response to determining that silent liveness detection is required for the target image, the executing subject may perform silent liveness detection on the target image, resulting in a first detection score.
As an example, first, the face frame of the target organism in the target image may be extracted by a Multi-task Cascaded Convolutional Neural Network (MTCNN). Then, the image corresponding to the face frame of the target organism is input into a pre-trained multi-layer recurrent neural network to obtain the first detection score.
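A minimal sketch of this step is given below, assuming the `mtcnn` Python package for the face frame and a caller-supplied scoring network; neither is mandated by the disclosure.

```python
from mtcnn import MTCNN  # third-party package assumed for the face frame

detector = MTCNN()


def first_detection_score(target_image_rgb, liveness_net):
    """Extract the face frame with MTCNN and score the crop.

    `target_image_rgb` is an RGB numpy array; `liveness_net` maps a face crop to
    a score in [0, 1] and stands in for the pre-trained network described above."""
    faces = detector.detect_faces(target_image_rgb)
    if not faces:
        return 0.0  # no face frame found; treated here as the lowest score
    x, y, w, h = faces[0]["box"]
    face_crop = target_image_rgb[y:y + h, x:x + w]
    return float(liveness_net(face_crop))
```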
And 204, determining whether the target image is obtained by shooting the target organism by the shooting device or not based on the first detection score.
In some embodiments, the executing subject may determine, based on the first detection score, whether the target image was obtained by the photographing device photographing the target organism, for example by consulting preset table information that records the score criterion for whether the target image was obtained by a photographing device photographing the target organism.
In some optional implementations of some embodiments, determining, based on the first detection score, whether the target image was obtained by the photographing device photographing the target organism may be performed as follows: in response to the first detection score being not smaller than the first threshold, it is determined that the target image was obtained by the photographing device photographing the target organism.
As can be seen from the embodiments described above, whether the facial pose of the target organism in the target image is normal is first determined based on a pre-acquired image, which ensures that the facial pose in the target image to be subjected to silent liveness detection is normal. Then, in response to determining that the facial pose is normal, whether to perform silent liveness detection on the target image is determined, and silent liveness detection is performed on the target image accordingly. The resulting first detection score is used to accurately determine whether the target image was obtained by the photographing device photographing the target organism. By combining the facial pose of the target organism in the target image with silent liveness detection on the target image, the face detection method can determine accurately and quickly whether the target image was obtained by the photographing device photographing the target organism, which indirectly improves the security of face detection for the target organism.
With continued reference to fig. 3, a flow 300 of further embodiments of a face detection method according to the present disclosure is shown. The face detection method comprises the following steps:
Step 301, determining whether the face pose of the target organism in the target image is normal or not based on the pre-acquired image.
Step 302, in response to determining that the face pose is normal, determining whether a silent live body detection needs to be performed on the target image.
Step 303, in response to determining that the silent liveness detection needs to be performed on the target image, performing the silent liveness detection on the target image to obtain a first detection score.
And a step 304 of determining whether the target image is obtained by the shooting device shooting the target organism or not based on the first detection score.
In some embodiments, for the specific implementation and technical effects of steps 301 to 304, reference may be made to steps 201 to 204 in the embodiments corresponding to fig. 2, which are not described again here.
And 305, in response to the fact that the face posture of the target organism in the target image is abnormal, performing silent living body detection on the target image to obtain a second detection score.
In some embodiments, in response to determining that the facial pose of the target organism in the target image is abnormal, the performing subject may perform silent live body detection on the target image, resulting in a second detection score.
As an example, in response to determining that the facial pose of the target organism in the target image is abnormal, the executing subject may perform silent liveness detection on the target image, and obtaining the second detection score may include the following steps:
First, in response to determining that the facial pose of the target organism in the target image is abnormal, the executing subject may input the target image into a pre-trained target organism face detection network to obtain a face frame of the target organism. The target organism face detection network may be one of the following: the SSD (Single Shot MultiBox Detector) algorithm, the R-CNN (Region-based Convolutional Neural Networks) algorithm, the Fast R-CNN algorithm, the SPP-Net (Spatial Pyramid Pooling Network) algorithm, the YOLO (You Only Look Once) algorithm, the FPN (Feature Pyramid Networks) algorithm, the DCN (Deformable ConvNets) algorithm, or the RetinaNet target detection algorithm.
Second, features of the image corresponding to the face frame of the target organism are extracted. As an example, a SURF (Speeded-Up Robust Features) algorithm network may be used to extract the features of the image corresponding to the face frame of the target organism.
Third, the extracted features are input into a pre-trained fully connected network to obtain the second detection score.
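A minimal sketch of these three steps appears below. The face detector and the fully connected scoring network are caller-supplied placeholders, and SURF requires the opencv-contrib build; all of these are assumptions for illustration.

```python
import cv2
import numpy as np


def second_detection_score(target_image_bgr, face_detector, fc_net):
    """Abnormal-pose branch: face frame -> SURF features -> fully connected score.

    `face_detector` returns an (x, y, w, h) face frame and stands in for any of
    the detection networks listed above (SSD, YOLO, ...); `fc_net` maps a feature
    vector to a score in [0, 1]."""
    x, y, w, h = face_detector(target_image_bgr)
    crop = cv2.cvtColor(target_image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    surf = cv2.xfeatures2d.SURF_create()  # available in opencv-contrib builds
    _, descriptors = surf.detectAndCompute(crop, None)
    if descriptors is None:
        return 0.0
    features = np.mean(descriptors, axis=0)  # simple pooling, for illustration only
    return float(fc_net(features))
```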
Step 306, in response to the second detection score being greater than or equal to a preset first threshold, determining a second global value based on the second detection score and the first global value.
In some embodiments, in response to the second detection score being greater than or equal to a preset first threshold, the executing subject may determine a second global value based on the second detection score and the first global value, where the images in the image set were acquired before the target image and the first global value is obtained based on the detection score set corresponding to that image set.
By way of example, fig. 4 is a schematic diagram of determining a second global value in some embodiments. The image set includes a first image 402 and a second image 405. After silent liveness detection is performed on the first image 402, a detection score 403 corresponding to the first image is obtained. Here, an initial score 401 is the detection score corresponding to the pre-acquired image and may be set to 1. The result of multiplying the initial score 401 by a first weight and the result of multiplying the detection score 403 corresponding to the first image by a second weight are added to obtain a global value 404 corresponding to the first image, where the first weight and the second weight sum to 1, and the detection score corresponding to the first image is greater than the preset first threshold. After silent liveness detection is performed on the second image 405, a detection score 406 corresponding to the second image is obtained. The result of multiplying the global value 404 corresponding to the first image by the first weight and the result of multiplying the detection score 406 corresponding to the second image by the second weight are then added to obtain a global value 407 corresponding to the second image, which serves as the first global value. After silent liveness detection is performed on the target image 408, a detection score 409 corresponding to the target image is obtained. Finally, the result of multiplying the global value 407 corresponding to the second image by the first weight and the result of multiplying the detection score 409 corresponding to the target image by the second weight are added to obtain the second global value 410.
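The running global value of Fig. 4 can be written as a simple recursive blend; a minimal sketch follows. The 0.6/0.4 weights are illustrative assumptions; the description only requires that the two weights sum to 1.

```python
FIRST_WEIGHT = 0.6   # weight on the previous global value
SECOND_WEIGHT = 0.4  # weight on the new detection score; the two weights sum to 1


def update_global_value(previous_global, detection_score,
                        w_prev=FIRST_WEIGHT, w_score=SECOND_WEIGHT):
    """One update step of the running global value shown in Fig. 4."""
    return previous_global * w_prev + detection_score * w_score


def global_value_for_sequence(detection_scores, initial_score=1.0):
    """Chain the update over an image sequence, starting from the initial score
    of 1 assigned to the pre-acquired image in Fig. 4."""
    g = initial_score
    for s in detection_scores:
        g = update_global_value(g, s)
    return g
```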
Step 307, in response to the second global value being less than or equal to a second predetermined threshold, determining that the target image is obtained by capturing the target organism by a capturing device.
In some embodiments, in response to the second global value being less than or equal to a second threshold value, the executing body may consider that the target image is obtained by capturing the target organism by a capturing device.
As shown in fig. 5, an application scene diagram 500 is shown in a case where the face pose of the target living body in the target image is abnormal.
As an example, the electronic device 501 may determine, from a pre-acquired image 502, whether the facial pose of the target organism in the target image 503 is normal. Then, in response to determining that the facial pose of the target organism in the target image 503 is abnormal, silent liveness detection is performed on the target image 503 to obtain a second detection score 505; for example, the target image 503 is input into the silent liveness detection network 504 to obtain the second detection score 505. Next, in response to the second detection score 505 being greater than or equal to a preset first threshold, a second global value 507 is determined based on the second detection score 505 and the first global value 506. As an example, the second detection score 505 may be 0.8, the first threshold may be 0.5, and the first global value 506 may be 0.6; since the second detection score 505 is greater than the first threshold, the result of multiplying the second detection score 505 by a first value (for example, 0.6) plus the result of multiplying the first global value 506 by a second value (for example, 0.4) is 0.72. Finally, in response to the second global value 507 being less than or equal to a preset second threshold, it is determined that the target image was obtained by a photographing device photographing the target organism.
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the face detection method in some embodiments corresponding to fig. 3 further highlights the specific steps of determining whether the target image was obtained by the photographing device photographing the target organism when the facial pose of the target organism in the target image is abnormal. The solutions described in these embodiments can therefore constrain the facial pose of the target organism more effectively and accurately by using the global value.
With continuing reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a face detection apparatus, which correspond to the method embodiments shown in fig. 2 and which may be applied in various electronic devices.
As shown in fig. 6, a face detection apparatus 600 of some embodiments includes: a first determination unit 601, a second determination unit 602, a detection unit 603, and a third determination unit 604. The first determination unit 601 is configured to determine, based on a pre-acquired image, whether the facial pose of the target organism in the target image is normal. The second determination unit 602 is configured to determine, in response to determining that the facial pose is normal, whether silent liveness detection needs to be performed on the target image. The detection unit 603 is configured to perform silent liveness detection on the target image to obtain a detection score in response to determining that silent liveness detection needs to be performed on the target image. The third determination unit 604 is configured to determine, based on the detection score, whether the target image is an image obtained by the photographing device photographing the target organism.
In some optional implementations of some embodiments, the apparatus 600 may further include: a fourth determination unit, a fifth determination unit, and a sixth determination unit (not shown in the figure). Wherein the fourth determining unit may be configured to perform silent living body detection on the target image in response to determining that the face pose of the target living body in the target image is abnormal, resulting in a detection score. The fifth determination unit may be configured to determine, in response to the detection score being greater than or equal to a first threshold value set in advance, a second global value based on the detection score and a first global value determined based on a set of detection scores corresponding to a set of images acquired before the target image. The sixth determining unit may be configured to determine that the target image is obtained by photographing the target living body by the photographing apparatus in response to the second global value being less than or equal to a second threshold that is set in advance.
In some optional implementations of some embodiments, the second determining unit 602 may be further configured to: in response to determining that the facial pose is normal, determining whether a calibrated image exists in a set of detection images acquired before the target image, wherein the calibrated image is an image that has been detected by a silent living body; in response to determining that the calibrated image is included in the detection image set, determining that silent in-vivo detection is not required for the target image; and in response to determining that the calibrated image is included in the detection image set, determining that silent living body detection needs to be performed on the target image.
In some optional implementations of some embodiments, the second determining unit 602 may be further configured to: and performing other identification operations on the target image in response to determining that silent liveness detection on the target image is not required.
In some optional implementations of some embodiments, the third determining unit 604 may be further configured to: and determining that the target image is obtained by shooting the target organism by a shooting device in response to the detection score not being less than the first threshold value.
In some optional implementations of some embodiments, the pre-acquired image is a biological image in which the facial features of the organism in the image satisfy a preset condition. The first determination unit 601 may be further configured to: determine the facial pose of the organism in the pre-acquired image; determine the similarity between the facial pose of the organism in the pre-acquired image and the facial pose of the target organism in the target image; and determine, in response to the similarity being smaller than a preset third threshold, that the target image was obtained by a photographing device photographing the target organism.
It will be understood that the elements described in the apparatus 600 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 600 and the units included therein, and are not described herein again.
Referring now to fig. 7, a block diagram of an electronic device (e.g., the electronic device of fig. 1) 700 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via communications means 709, or may be installed from storage 708, or may be installed from ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining whether the face posture of the target organism in the target image is normal or not based on the pre-acquired image; in response to determining that the face pose is normal, determining whether silent live body detection needs to be performed on the target image; responding to the determination that the target image needs to be subjected to silent living body detection, and performing silent living body detection on the target image to obtain a detection score; and determining whether the target image is obtained by shooting the target organism by the shooting equipment or not based on the detection score.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor includes a first determination unit, a second determination unit, a detection unit, and a third determination unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the first determination unit may also be described as "a unit that determines whether the face pose of the target living body in the target image is normal based on the image acquired in advance".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
According to one or more embodiments of the present disclosure, there is provided a face detection method including: determining whether the face posture of the target organism in the target image is normal or not based on the pre-acquired image; in response to determining that the face pose is normal, determining whether silent live body detection is required for the target image; responding to the determination that the target image needs to be subjected to silent living body detection, and performing silent living body detection on the target image to obtain a detection score; and determining whether the target image is obtained by shooting the target organism by the shooting equipment or not based on the detection score.
According to one or more embodiments of the present disclosure, the method further includes: performing silent living body detection on the target image to obtain a detection score in response to determining that the face posture of the target organism in the target image is abnormal; in response to the detection score being greater than or equal to a preset first threshold, determining a second global value based on the detection score and a first global value, wherein the first global value is determined based on a detection score set corresponding to an image set acquired before the target image; and determining that the target image is obtained by shooting the target organism by the shooting device in response to the second global value being smaller than or equal to a preset second threshold value.
According to one or more embodiments of the present disclosure, the determining whether or not silent live body detection is required for the target image in response to determining that the face pose is normal includes: in response to determining that the facial pose is normal, determining whether a calibrated image exists in a set of detection images acquired before the target image, wherein the calibrated image is an image that has been detected by a silent living body; in response to determining that the calibrated image is included in the set of detected images, determining that silent liveness detection is not required for the target image; and in response to determining that the calibrated image is included in the detection image set, determining that silent living body detection needs to be performed on the target image.
According to one or more embodiments of the present disclosure, after determining that silent live-body detection is not required for the target image in response to determining that the calibrated image exists in the detection image set before the target image, the method further includes: and performing other identification operations on the target image in response to determining that silent living body detection on the target image is not required.
According to one or more embodiments of the present disclosure, determining, based on the detection score, whether the target image was obtained by the photographing device photographing the target organism includes: in response to the detection score being not smaller than the first threshold, determining that the target image was obtained by the photographing device photographing the target organism.
According to one or more embodiments of the present disclosure, the pre-acquired image is a biological image in which a facial feature of a biological body in the image satisfies a predetermined condition; and the above-mentioned image based on acquireing in advance, confirm whether the facial gesture of the target organism is normal in the target image, including: determining the facial pose of the target organism in the acquired image; determining a similarity between the facial pose of the target organism in the image and the facial pose of the target organism in the target image; and determining that the target image is obtained by shooting the target organism by the shooting equipment in response to the similarity being smaller than a preset third threshold value.
According to one or more embodiments of the present disclosure, there is provided a face detection method including: a first determination unit configured to determine whether a face pose of a target organism in a target image is normal based on an image acquired in advance; a second determination unit configured to determine whether or not silent live body detection is required for the target image in response to a determination that the face pose is normal; a detection unit configured to perform silent live body detection on the target image to obtain a detection score in response to determining that silent live body detection is required on the target image; and a third determining unit configured to determine whether the target image is an image obtained by the photographing apparatus photographing the target organism, based on the detection score.
According to one or more embodiments of the present disclosure, an apparatus may further include: a fourth determination unit, a fifth determination unit, and a sixth determination unit (not shown in the figure). Wherein the fourth determination unit may be configured to perform silent live body detection on the target image, resulting in the detection score, in response to determining that the face pose of the target organism in the target image is abnormal. The fifth determination unit may be configured to determine, in response to the detection score being greater than or equal to a first threshold value set in advance, a second global value based on the detection score and a first global value determined based on a set of detection scores corresponding to a set of images acquired before the target image. The sixth determining unit may be configured to determine that the target image is obtained by photographing the target living body by the photographing apparatus in response to the second global value being less than or equal to a second threshold value set in advance.
According to one or more embodiments of the present disclosure, the second determining unit may be further configured to: in response to determining that the facial pose is normal, determining whether a calibrated image exists in a set of detection images acquired before the target image, wherein the calibrated image is an image that has been detected by a silent living body; in response to determining that the calibrated image is included in the set of detected images, determining that silent liveness detection is not required for the target image; and in response to determining that the calibrated image is included in the detection image set, determining that silent living body detection needs to be performed on the target image.
According to one or more embodiments of the present disclosure, the second determining unit may be further configured to: and performing other identification operations on the target image in response to determining that silent liveness detection on the target image is not required.
According to one or more embodiments of the present disclosure, the fourth determining unit may be further configured to: and determining that the target image is obtained by shooting the target organism by a shooting device in response to the detection score not being less than the first threshold value.
According to one or more embodiments of the present disclosure, the pre-acquired image is a biological body image in which a facial feature of a biological body in the image satisfies a predetermined condition. The first determination unit may be further configured to: determining the face pose of the target organism in the acquired image; determining a similarity between the facial pose of the target organism in the image and the facial pose of the target organism in the target image; and determining that the target image is obtained by shooting the target organism by the shooting equipment in response to the similarity being smaller than a preset third threshold value.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement a method as described in any of the embodiments above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.
The foregoing description merely illustrates preferred embodiments of the present disclosure and the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A face detection method, comprising:
determining whether the face posture of the target organism in the target image is normal or not based on the pre-acquired image;
in response to determining that the facial pose is normal, determining whether a silent liveness detection is required for the target image;
in response to determining that silent liveness detection needs to be performed on the target image, performing silent liveness detection on the target image to obtain a first detection score;
determining whether the target image is obtained by shooting the target organism by a shooting device based on the first detection score;
in response to determining that the facial pose is normal, determining whether a calibrated image is present in a set of images acquired prior to the target image, wherein the calibrated image is an image that has been detected by a silent living body;
in response to determining that the calibrated images are included in the set of images, determining that silent liveness detection is not required for the target image;
in response to determining that the calibrated image is not included in the set of images, determining that a silent liveness detection of the target image is required.
2. The method of claim 1, wherein the method further comprises:
performing silent living body detection on the target image to obtain a second detection score in response to determining that the face posture of the target organism in the target image is abnormal;
in response to the second detection score being greater than or equal to a preset first threshold, determining a second global value based on the second detection score and a first global value, wherein the first global value is determined based on a detection score set corresponding to an image set, and images in the image set are acquired before the target image;
and determining that the target image is obtained by shooting the target organism by the shooting device in response to the second global value being smaller than a preset second threshold value.
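As a sketch of this abnormal-pose branch, the snippet below aggregates scores with a simple running mean; claim 2 only states that the global values are derived from the earlier score set and the second detection score, so the aggregation, the threshold values, and the score scale are assumptions. The direction of the final comparison follows the claim text as written.

```python
# Sketch of the claim 2 branch (abnormal facial pose). The aggregation used
# for the "global values" and both threshold values are illustrative guesses.

FIRST_THRESHOLD = 0.5    # hypothetical minimum for the second detection score
SECOND_THRESHOLD = 0.3   # hypothetical bound on the second global value


def handle_abnormal_pose(second_score, prior_scores):
    """prior_scores: detection scores of the image set acquired before the
    target image. Returns True if the target image is judged a genuine
    capture of the target organism, False otherwise."""
    if second_score < FIRST_THRESHOLD:
        return False

    # First global value: aggregated from the earlier score set
    # (assumed here to be a simple mean).
    first_global = sum(prior_scores) / len(prior_scores) if prior_scores else 0.0

    # Second global value: derived from the second score and the first global
    # value (again assumed to be a mean of the two).
    second_global = (second_score + first_global) / 2.0

    # Comparison direction as stated in the claim: the image is treated as a
    # genuine capture when the second global value is below the threshold.
    return second_global < SECOND_THRESHOLD
```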
3. The method of claim 1, wherein, after determining that silent liveness detection of the target image is not required in response to determining that the calibrated image is included in the set of images acquired before the target image, the method further comprises:
performing other recognition operations on the target image in response to determining that silent liveness detection of the target image is not required.
4. The method of claim 1, wherein the determining, based on the first detection score, whether the target image was obtained by the photographing device photographing the target organism comprises:
in response to the first detection score being not less than a third threshold, determining that the target image was obtained by the photographing device photographing the target organism.
5. The method of claim 1, wherein the pre-acquired image is an image of a living body whose facial features satisfy a preset condition; and
the determining, based on the pre-acquired image, whether the facial pose of the target organism in the target image is normal comprises:
determining the facial pose of the target organism in the pre-acquired image;
determining a similarity between the facial pose of the target organism in the pre-acquired image and the facial pose of the target organism in the target image; and
in response to the similarity being smaller than a preset fourth threshold, determining that the facial pose of the target organism in the target image is normal.
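A minimal sketch of this pose check follows. It assumes the facial pose is summarized as (yaw, pitch, roll) angles and uses a Euclidean distance in place of the claimed similarity measure, with a made-up threshold; claim 5 fixes none of these choices, only that the measure between the two poses is compared against the fourth threshold.

```python
import math

# Sketch of the pose-normality test in claim 5. The (yaw, pitch, roll)
# representation, the distance standing in for the claimed "similarity",
# and the threshold value are all assumptions.

FOURTH_THRESHOLD = 15.0  # hypothetical threshold, here in degrees


def pose_similarity(pose_a, pose_b):
    """Euclidean distance between two (yaw, pitch, roll) pose tuples,
    used here as the claimed similarity measure."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pose_a, pose_b)))


def face_pose_is_normal(reference_pose, target_pose):
    """The pose is considered normal when the measure stays below the
    threshold, i.e. the target pose remains close to the pose in the
    pre-acquired image."""
    return pose_similarity(reference_pose, target_pose) < FOURTH_THRESHOLD
```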
6. A face detection apparatus comprising:
a first determination unit configured to determine, based on a pre-acquired image, whether a facial pose of a target organism in a target image is normal;
a second determination unit configured to determine, in response to determining that the facial pose is normal, whether silent liveness detection of the target image is required;
a detection unit configured to perform, in response to determining that silent liveness detection of the target image is required, silent liveness detection on the target image to obtain a first detection score;
a third determination unit configured to determine, based on the first detection score, whether the target image was obtained by a photographing device photographing the target organism;
a fourth determination unit configured to determine, in response to determining that the facial pose is normal, whether a calibrated image exists in a set of images acquired before the target image, wherein the calibrated image is an image that has already undergone silent liveness detection;
a fifth determination unit configured to determine, in response to determining that the calibrated image is included in the set of images, that silent liveness detection of the target image is not required; and
a sixth determination unit configured to determine, in response to determining that the calibrated image is not included in the set of images, that silent liveness detection of the target image is required.
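Purely as an illustration of how the claimed units might be grouped in software, the class below maps the units onto methods of a single class; the class name, method names, default threshold, detector callable, and dictionary-based image records are assumptions, not the claimed design.

```python
# Structural sketch of the apparatus in claim 6: one method per claimed unit
# (or group of units). Names and values are illustrative only.

class FaceDetectionApparatus:
    def __init__(self, silent_liveness_detector, score_threshold=0.8):
        self.detector = silent_liveness_detector   # callable: image -> score
        self.score_threshold = score_threshold     # hypothetical value

    # First determination unit: pose check against the pre-acquired image.
    def pose_is_normal(self, target_image, reference_image):
        raise NotImplementedError  # e.g. the pose-similarity test of claim 5

    # Second and fourth to sixth determination units: detection is required
    # only when no calibrated image exists among the earlier images.
    def needs_silent_liveness(self, prior_images):
        return not any(img["silent_liveness_done"] for img in prior_images)

    # Detection unit: run silent liveness detection to obtain the first score.
    def silent_liveness_score(self, target_image):
        return self.detector(target_image)

    # Third determination unit: judge whether the image is a genuine capture.
    def is_live_capture(self, first_score):
        return first_score >= self.score_threshold
```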
7. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
8. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN202010746706.8A 2020-07-29 2020-07-29 Face detection method and device, electronic equipment and computer readable medium Active CN111898529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010746706.8A CN111898529B (en) 2020-07-29 2020-07-29 Face detection method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN111898529A CN111898529A (en) 2020-11-06
CN111898529B true CN111898529B (en) 2022-07-19

Family

ID=73183716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010746706.8A Active CN111898529B (en) 2020-07-29 2020-07-29 Face detection method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111898529B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255529A (en) * 2021-05-28 2021-08-13 支付宝(杭州)信息技术有限公司 Biological feature identification method, device and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557726A (en) * 2015-09-25 2017-04-05 Beijing SenseTime Technology Development Co., Ltd. Face identity authentication system with silent liveness detection and method thereof
CN109934191A (en) * 2019-03-20 2019-06-25 北京字节跳动网络技术有限公司 Information processing method and device
CN110119719A (en) * 2019-05-15 2019-08-13 深圳前海微众银行股份有限公司 Biopsy method, device, equipment and computer readable storage medium
CN110738142A (en) * 2019-09-26 2020-01-31 广州广电卓识智能科技有限公司 method, system and storage medium for self-adaptively improving face image acquisition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271929B (en) * 2018-09-14 2020-08-04 北京字节跳动网络技术有限公司 Detection method and device
CN111325175A (en) * 2020-03-03 2020-06-23 北京三快在线科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN111898529A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN108830235B (en) Method and apparatus for generating information
CN111368685B (en) Method and device for identifying key points, readable medium and electronic equipment
CN109829432B (en) Method and apparatus for generating information
CN109993150B (en) Method and device for identifying age
CN108337505B (en) Information acquisition method and device
CN108229375B (en) Method and device for detecting face image
CN110059624B (en) Method and apparatus for detecting living body
CN110059623B (en) Method and apparatus for generating information
CN110349161B (en) Image segmentation method, image segmentation device, electronic equipment and storage medium
CN111402122A (en) Image mapping processing method and device, readable medium and electronic equipment
CN108470131B (en) Method and device for generating prompt message
CN114898177B (en) Defect image generation method, model training method, device, medium and product
CN113033677A (en) Video classification method and device, electronic equipment and storage medium
CN111860071A (en) Method and device for identifying an item
CN111126159A (en) Method, apparatus, electronic device, and medium for tracking pedestrian in real time
CN111898529B (en) Face detection method and device, electronic equipment and computer readable medium
CN108921138B (en) Method and apparatus for generating information
CN110349108B (en) Method, apparatus, electronic device, and storage medium for processing image
CN110363132B (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111783677A (en) Face recognition method, face recognition device, server and computer readable medium
CN110765304A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN109816791B (en) Method and apparatus for generating information
CN114882576B (en) Face recognition method, electronic device, computer-readable medium, and program product
CN113158773B (en) Training method and training device for living body detection model
CN112085733B (en) Image processing method, image processing device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Douyin Vision Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Tiktok vision (Beijing) Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.