CN111626163A - Human face living body detection method and device and computer equipment

Info

Publication number
CN111626163A
Authority
CN
China
Prior art keywords
image
face
determining
detected
information
Prior art date
Legal status
Granted
Application number
CN202010421136.5A
Other languages
Chinese (zh)
Other versions
CN111626163B (en)
Inventor
李永凯
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010421136.5A
Publication of CN111626163A
Application granted
Publication of CN111626163B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Abstract

The application discloses a living body face detection method, a living body face detection device and computer equipment, which are used for solving the problem of low living body face detection accuracy. The method comprises the following steps: receiving an image to be detected, and performing face and eye feature extraction on the image to be detected to obtain an eye predetermined region image corresponding to the eyes and a face region image other than the eye predetermined region image; determining first characteristic information corresponding to an iris region in the eye predetermined region image, and determining a first confidence corresponding to the first characteristic information; acquiring a predetermined number of pieces of face region sub-image information from the face region image, and determining a second confidence corresponding to the predetermined number of pieces of face region sub-image information; and determining the image to be detected as a living body face image or a non-living body face image according to the first confidence and the second confidence.

Description

Human face living body detection method and device and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting a living human face, and a computer device.
Background
In recent years, with the continuous development of artificial intelligence technologies, face recognition has gradually become a widely applied identity authentication and recognition technology, used in various scenes such as security and finance.
Specifically, when the existing face recognition technology is used for identity authentication, attacks using photos, videos or 3D models may be encountered, so that the accuracy of face recognition authentication is low. In view of this, living body face detection methods have been proposed in the prior art; however, most of them are based on an optical flow method, an image texture feature method or a face micro-motion analysis method, which need to extract many features before making a judgment. These methods are time-consuming and computationally expensive, and may not accurately distinguish a living body from a non-living body.
Therefore, the living body face detection methods in the prior art have the technical problem of poor accuracy.
Disclosure of Invention
The application provides a method and a device for detecting a living human face, and computer equipment, which are used for solving the technical problem of poor accuracy of living human face detection in the prior art. The technical scheme of the application is as follows:
in a first aspect, a method for detecting a living human face is provided, the method including:
receiving an image to be detected, and performing face and eye feature extraction processing on the image to be detected to obtain an eye predetermined region image corresponding to the eyes and a face region image other than the eye predetermined region image, wherein the image to be detected is an image which is acquired under a near-infrared scene and contains human face information;
determining first characteristic information corresponding to an iris region in the eye predetermined region image, and determining a first confidence corresponding to the first characteristic information; the first characteristic information is used for representing reflection characteristic information of pixels in the iris region, and the first confidence is used for representing a probability value that the reflection characteristic information of the pixels corresponds to a real human eye;
acquiring a predetermined number of pieces of face region sub-image information from the face region image, and determining a second confidence corresponding to the predetermined number of pieces of face region sub-image information; the second confidence is used for representing a probability value that the predetermined number of pieces of face region sub-image information correspond to real human face information;
and determining the image to be detected as a living body face image or a non-living body face image according to the first confidence coefficient and the second confidence coefficient.
In a possible embodiment, determining the first feature information corresponding to the iris region in the predetermined region image of the eye includes:
determining an iris region in the predetermined region image of the eye;
scaling the iris area by a preset proportion to obtain a processed iris area;
acquiring pixel intensities of all pixels in the processed iris region, and determining an average pixel intensity value of the pixel intensities of all pixels;
and determining first characteristic information according to the pixel intensity of all the pixels and the average pixel intensity value.
In a possible implementation, determining the first feature information according to the pixel intensities of all the pixels and the average pixel intensity value includes:
determining the first characteristic information using the following formula:
[Formula given as an image in the original; per the description, S_{h,w,s} = 0 when I_{h,w} < I_mean, and is otherwise calculated from I_{h,w} and I_mean.]
wherein S_{h,w,s} is used for characterizing the first characteristic information, I_{h,w} is used for characterizing the pixel intensity of any one of all the pixels, I_mean is used for characterizing the average pixel intensity value, h is used for characterizing the height coordinate of the pixel in the image, and w is used for characterizing the width coordinate of the pixel in the image.
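As an illustrative aid to this embodiment, the following Python sketch scales an iris crop to 50 × 50 pixels (the size used in the detailed description below) and builds the piecewise feature map; since the original formula is given only as an image, the normalized-difference form used for the above-mean branch is an assumption, not the patent's exact expression:

```python
import cv2
import numpy as np

def iris_reflection_feature(iris_region: np.ndarray, size: int = 50) -> np.ndarray:
    """Reflection feature map for a cropped iris region.

    Assumption: the feature is 0 where I[h, w] < I_mean, and the
    normalized deviation (I[h, w] - I_mean) / I_mean otherwise; the
    patent gives its exact formula only as an image, so the above-mean
    branch here is illustrative.
    """
    if iris_region.ndim == 3:
        # Near-infrared frames sometimes arrive as 3-channel images.
        iris_region = cv2.cvtColor(iris_region, cv2.COLOR_BGR2GRAY)
    # Scale the iris region by the preset proportion (50x50 pixels here,
    # matching the example in the detailed description).
    iris = cv2.resize(iris_region, (size, size)).astype(np.float32)
    i_mean = float(iris.mean())
    # Below-mean pixels contribute 0; brighter pixels keep their normalized
    # deviation, highlighting the specular reflection point of a live iris.
    return np.where(iris >= i_mean, (iris - i_mean) / i_mean, 0.0)
```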
In a possible embodiment, determining a first confidence corresponding to the first feature information includes:
and inputting the first characteristic information into a support vector machine to obtain a first confidence corresponding to the first characteristic information.
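A minimal sketch of this step using scikit-learn's SVC with probability estimates enabled; the RBF kernel and the placeholder training data are assumptions, as the application does not specify how the support vector machine is trained:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: flattened 50x50 reflection feature maps with
# labels 1 (live eye) / 0 (non-live). The kernel choice and this training
# setup are assumptions; the patent does not specify them.
rng = np.random.default_rng(0)
train_features = rng.random((200, 50 * 50)).astype(np.float32)
train_labels = rng.integers(0, 2, 200)

svm = SVC(kernel="rbf", probability=True)
svm.fit(train_features, train_labels)

def first_confidence(feature_map: np.ndarray) -> float:
    """Probability value that the reflection features correspond to a real eye."""
    return float(svm.predict_proba(feature_map.reshape(1, -1))[0, 1])
```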
In a possible implementation manner, determining the second confidence degree corresponding to the predetermined number of pieces of face region sub-image information includes:
and inputting the preset number of face region subimage information into a preset classifier based on a deep network to obtain a second confidence corresponding to the preset number of face region subimage information.
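The application does not describe the structure of the preset deep-network-based classifier, so the following PyTorch sketch assumes a small convolutional network over 96 × 96 sub-images and averages the per-sub-image live probabilities into the second confidence; both the architecture and the averaging rule are assumptions:

```python
import torch
import torch.nn as nn

class SubImageClassifier(nn.Module):
    """Small CNN over 96x96 single-channel face-region sub-images; the
    patent only says 'a preset classifier based on a deep network', so
    this architecture is an illustrative assumption."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-sub-image probability of being a real (live) face region.
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def second_confidence(model: SubImageClassifier, sub_images: torch.Tensor) -> float:
    """Second confidence for a batch of sub-images, e.g. shape (3, 1, 96, 96).

    Averaging the per-sub-image probabilities is an assumed aggregation
    rule; the patent does not state how the sub-image scores are combined.
    """
    model.eval()
    with torch.no_grad():
        return float(model(sub_images).mean())
```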
In a possible implementation manner, determining, according to the first confidence and the second confidence, that the image to be detected is a living face image or a non-living face image includes:
determining a first weight of the first feature information and a second weight of the predetermined number of pieces of face region sub-image information, wherein the first weight is used for representing the proportion of eye features in the living body features of the image to be detected, and the second weight is used for representing the proportion of face features in the living body features of the image to be detected;
determining a first product value of the first weight and the first confidence coefficient and a second product value of the second weight and the second confidence coefficient, and adding the first product value and the second product value to obtain the living body confidence coefficient of the image to be detected;
and determining the image to be detected as a living body face image or a non-living body face image according to the living body confidence coefficient of the image to be detected and a preset rule, wherein the preset rule is to determine the image to be detected as a living body face image or a non-living body face image according to the comparison result between the living body confidence coefficient and a preset threshold value.
In a possible implementation manner, determining that the image to be detected is a living body face image or a non-living body face image according to the living body confidence and the preset rule of the image to be detected includes:
if the living body confidence coefficient of the image to be detected is greater than or equal to a preset threshold value, determining that the image to be detected is a living body face image;
and if the living body confidence coefficient of the image to be detected is smaller than a preset threshold value, determining that the image to be detected is a non-living body face image.
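Putting the weighted fusion and the threshold rule of this aspect together, a minimal sketch (the 0.6/0.4 weights and the 0.6 threshold echo example values from the detailed description and are illustrative only):

```python
def liveness_decision(first_conf: float, second_conf: float,
                      first_weight: float = 0.6, second_weight: float = 0.4,
                      threshold: float = 0.6) -> bool:
    """Weighted fusion of the eye confidence and the face confidence.

    The 0.6/0.4 weights and the 0.6 threshold echo example values from
    the detailed description; they are illustrative, not fixed by the claims.
    """
    live_confidence = first_weight * first_conf + second_weight * second_conf
    # Greater than or equal to the preset threshold -> living body face image;
    # otherwise -> non-living body face image.
    return live_confidence >= threshold
```

For the worked example later in the description (first confidence 0.7 with weight 0.7, second confidence 0.65 with weight 0.3), this returns True, since 0.7 × 0.7 + 0.65 × 0.3 = 0.685 ≥ 0.6.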
In a second aspect, there is provided a living human face detection apparatus, the apparatus comprising:
the receiving module is used for receiving an image to be detected, and performing face and eye feature extraction processing on the image to be detected to obtain an eye predetermined region image corresponding to the eyes and a face region image other than the eye predetermined region image, wherein the image to be detected is an image which is acquired under a near-infrared scene and contains human face information;
the first determining module is used for determining first characteristic information corresponding to an iris region in the eye predetermined region image and determining a first confidence corresponding to the first characteristic information; the first characteristic information is used for representing reflection characteristic information of pixels in the iris region, and the first confidence is used for representing a probability value that the reflection characteristic information of the pixels corresponds to a real human eye;
the second determining module is used for acquiring a predetermined number of pieces of face region sub-image information from the face region image and determining a second confidence corresponding to the predetermined number of pieces of face region sub-image information; the second confidence is used for representing a probability value that the predetermined number of pieces of face region sub-image information correspond to real human face information;
and the processing module is used for determining that the image to be detected is a living body face image or a non-living body face image according to the first confidence coefficient and the second confidence coefficient.
In a possible implementation, the first determining module is configured to:
determining an iris region in the predetermined region image of the eye;
scaling the iris area by a preset proportion to obtain a processed iris area;
acquiring pixel intensities of all pixels in the processed iris region, and determining an average pixel intensity value of the pixel intensities of all pixels;
and determining first characteristic information according to the pixel intensity of all the pixels and the average pixel intensity value.
In a possible implementation, the first determining module is configured to:
determining the first characteristic information using the following formula:
[Formula given as an image in the original; per the description, S_{h,w,s} = 0 when I_{h,w} < I_mean, and is otherwise calculated from I_{h,w} and I_mean.]
wherein S_{h,w,s} is used for characterizing the first characteristic information, I_{h,w} is used for characterizing the pixel intensity of any one of all the pixels, I_mean is used for characterizing the average pixel intensity value, h is used for characterizing the height coordinate of the pixel in the image, and w is used for characterizing the width coordinate of the pixel in the image.
In a possible implementation, the first determining module is configured to:
and inputting the first characteristic information into a support vector machine to obtain a first confidence corresponding to the first characteristic information.
In a possible implementation, the second determining module is configured to:
and inputting the preset number of face region subimage information into a preset classifier based on a deep network to obtain a second confidence corresponding to the preset number of face region subimage information.
In a possible implementation, the processing module is configured to:
determining a first weight of the first feature information and a second weight of the predetermined number of pieces of face region sub-image information, wherein the first weight is used for representing the proportion of eye features in the living body features of the image to be detected, and the second weight is used for representing the proportion of face features in the living body features of the image to be detected;
determining a first product value of the first weight and the first confidence coefficient and a second product value of the second weight and the second confidence coefficient, and adding the first product value and the second product value to obtain the living body confidence coefficient of the image to be detected;
and determining the image to be detected as a living body face image or a non-living body face image according to the living body confidence coefficient of the image to be detected and a preset rule, wherein the preset rule is to determine the image to be detected as a living body face image or a non-living body face image according to the comparison result between the living body confidence coefficient and a preset threshold value.
In a possible implementation, the processing module is configured to:
if the living body confidence coefficient of the image to be detected is greater than or equal to a preset threshold value, determining that the image to be detected is a living body face image;
and if the living body confidence coefficient of the image to be detected is smaller than a preset threshold value, determining that the image to be detected is a non-living body face image.
In a third aspect, a computer device is provided, the computer device comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the steps included in any of the methods of the first aspect according to the obtained program instructions.
In a fourth aspect, there is provided a storage medium having stored thereon computer-executable instructions for causing a computer device to perform the steps included in any one of the methods of the first aspect.
In a fifth aspect, a computer program product is provided, which, when run on a computer device, enables the computer device to perform the steps comprised in any of the methods of the first aspect.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
in the embodiment of the application, the face and eye feature extraction processing can be performed on the received image to be detected, so that the eye predetermined region image corresponding to the eyes and the face region image except the eye predetermined region image can be obtained, that is, the face region image and the eye predetermined region image are respectively obtained. Then, first characteristic information corresponding to an iris area in an image of a predetermined area of the eye can be determined, and since the received image to be detected is an image which is acquired under a near-infrared scene and contains information of a human face, reflection characteristic information of pixels in the iris area, namely the first characteristic information, can be determined. After the first characteristic information is obtained, a first confidence corresponding to the first characteristic information can be obtained, namely, a probability value of the reflection characteristic information of the pixel corresponding to the real eye of the human body can be obtained.
In addition, a predetermined number of face region sub-images can be acquired from the face region image, the information of the predetermined number of face region sub-images can then be determined, and a second confidence corresponding to the information of the predetermined number of face region sub-images can be obtained, namely, a probability value that the information of the predetermined number of face region sub-images corresponds to real human face information. In other words, in the embodiment of the application, sub-images at different positions in the face region image are processed, so that the problem of low detection accuracy caused by situations such as a certain position of the face being occluded can be alleviated, and the face features at different positions can be combined for processing, so that living body detection of the face part can be performed more accurately.
Further, the image to be detected can be determined to be a living body face image or a non-living body face image according to the first confidence coefficient and the second confidence coefficient. Specifically, the image to be detected is determined to be a living body face image or a non-living body face image by comprehensively processing the obtained confidences of the eye features and the face features. That is to say, the living body face detection method provided by the embodiment of the application combines the dual features of the face and the eyes to perform detection, and can improve the accuracy of living body face detection.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application.
FIG. 1 is a schematic diagram of an application scenario in an embodiment of the present application;
FIG. 2 is a flowchart of a face liveness detection method in an embodiment of the present application;
FIG. 3 is a schematic diagram of an image of a predetermined region of an eye in an embodiment of the present application;
FIG. 4 is a schematic view of an iris region in an image of a predetermined region of an eye in an embodiment of the present application;
FIG. 5 is a diagram illustrating a preset deep-network-based classifier in an embodiment of the present application;
FIG. 6 is a schematic diagram of acquiring sub-images of a face region in an embodiment of the present application;
fig. 7 is a block diagram of a living human face detection apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computer device in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The terms "first" and "second" in the description and claims of the present application are used for distinguishing between different objects and not for describing a particular order. Furthermore, the term "comprises" and any variations thereof, which are intended to cover non-exclusive protection. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
At present, more and more scenes use face recognition for identity authentication, so the safety and accuracy of face recognition authentication have become a focus of attention. Specifically, when a face recognition device in the prior art acquires an image of a target to be authenticated, it is difficult to identify whether the acquired image shows the face of a real person, a photograph printed on paper, or a picture displayed on a display screen; that is, it cannot accurately identify whether the image is a living body face image or a non-living body face image. Therefore, living body face detection methods have been proposed in the prior art, but most of them, being based on an optical flow method, an image texture feature method or a face micro-motion analysis method, need to extract many features before making a judgment, which is time-consuming and computationally expensive, and they may not accurately distinguish a living body face image from a non-living body face image.
In view of this, the present application provides a face live detection method, by which a face region image and an eye region image in an image to be detected can be processed, so as to determine that the image to be detected is a live face image or a non-live face image according to a processing result.
After introducing the design concept of the embodiment of the present application, some brief descriptions are made below on application scenarios to which the technical scheme of face live detection in the embodiment of the present application is applicable, and it should be noted that the application scenarios described in the embodiment of the present application are for more clearly describing the technical scheme of the embodiment of the present application, and do not form limitations on the technical scheme provided in the embodiment of the present application.
In the embodiment of the present application, the technical scheme may be applied to any scene that needs to perform living human face detection, for example, a scene of access control living human face detection or a scene of payment for living human face detection, and the like.
In a specific implementation process, please refer to the application scene schematic diagram shown in fig. 1, where fig. 1 includes two parts, namely, a collection device for collecting a human face image and a computer device. It should be noted that fig. 1 shows only one collection device interacting with one computer device as an example; in a specific implementation process, a plurality of collection devices may interact with one computer device, or a plurality of collection devices may interact with a plurality of computer devices. Specifically, the aforementioned collection device may be an infrared camera, and the infrared camera may capture an image of an object in a near-infrared scene.
In a specific implementation process, the collection device and the computer device may be in communication connection through one or more networks. The network may be a wired network or a wireless network; for example, the wireless network may be a mobile cellular network, or may be a Wireless-Fidelity (WiFi) network, and of course, may also be another possible network, which is not limited in this embodiment of the present application. In addition, if the scheme is applied to other application scenarios, other electronic devices, such as a door lock controller or a payment electronic device, may also be included between the aforementioned collection device and the computer device, and the other electronic devices, the collection device, and the computer device may also be in communication connection through one or more networks.
In the embodiment of the application, the collecting device can collect the image information which needs to be subjected to the living body face verification currently, then the collected image information is sent to the computer device, and then the computer device can carry out face living body detection processing on the image information sent by the collecting device, so that the image to be detected can be determined to be a living body face image or a non-living body face image.
In order to further explain the living body face detection scheme provided in the embodiments of the present application, the scheme is described in detail below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the method operation steps shown in the following embodiments or figures, more or fewer operation steps may be included in the method on the basis of conventional or non-inventive labor. For steps between which no necessary causal relationship logically exists, the execution order is not limited to that provided by the embodiments of the present application. When the method is executed in an actual processing procedure or by a device (for example, a parallel processor or a multi-thread processing environment), the steps may be executed sequentially or in parallel according to the method shown in the embodiments or the figures.
The method for detecting a living human face in the embodiment of the present application is described below with reference to a flowchart of the method shown in fig. 2, and the steps shown in fig. 2 may be executed by a computer device shown in fig. 1. In an implementation, the computer device may be a server, such as a personal computer, a midrange computer, a cluster of computers, and so forth.
The technical scheme provided by the embodiment of the application is described in the following with the accompanying drawings of the specification.
Step 201: receiving an image to be detected, wherein the image to be detected is an image which is acquired under a near-infrared scene and contains human face information.
Step 202: and carrying out eye feature extraction processing on the image to be detected to obtain an eye preset region image corresponding to the eye.
Step 203: and carrying out face feature extraction processing on the image to be detected to obtain a face region image except the predetermined eye region image.
Step 204: first characteristic information corresponding to an iris area in the predetermined area image of the eye is determined, and a first confidence corresponding to the first characteristic information is determined.
Step 205: and acquiring a preset number of face region subimage information of the face region image, and determining a second confidence corresponding to the preset number of face region subimage information.
Step 206: and determining the image to be detected as a living body face image or a non-living body face image according to the first confidence coefficient and the second confidence coefficient.
In this embodiment of the application, as described above, the computer device may receive the image to be detected acquired by the acquisition device, and then may perform the eye feature extraction processing on the image to be detected, so as to obtain the image of the predetermined eye region corresponding to the eye. And the face feature extraction processing can be carried out on the image to be detected, so that a face region image except the eye region can be obtained.
Referring to fig. 3, fig. 3 is a schematic view of an image of a predetermined eye area shown in an embodiment of the present application, and specifically, after performing an eye feature extraction process on an image to be detected, a computer device may obtain an image of the predetermined eye area corresponding to an eye. Fig. 3(a) is an image of an eye region of a living human face acquired in a near-infrared scene, and fig. 3(b) is an image of an eye region of a non-living human face acquired in a near-infrared scene. Specifically, the non-living human face is, for example, an electronic photograph including face information, a picture including face information, or the like.
In the embodiment of the present application, it is considered that the iris of a living human eye specularly reflects illumination, and therefore a relatively obvious bright reflection point appears in the iris area of the acquired image, so this characteristic can be used as supplementary information for living body face detection. Specifically, after the predetermined eye area image is determined, first feature information corresponding to an iris area in the predetermined eye area image can be further determined, wherein the first feature information is used for representing reflection feature information of pixels in the iris area.
In a specific implementation process, the iris region in the predetermined eye region image may be determined, specifically, please refer to fig. 4, where fig. 4 is a schematic diagram of the iris region in the predetermined eye region image in the embodiment of the present application. After the iris region is determined, a preset scaling process may be performed on the iris region, so that the processed iris region may be obtained, for example, the iris region may be scaled to 50 pixels in width and height. The pixel intensities of all pixels in the processed iris region may then be obtained and an average pixel intensity value of the pixel intensities of all pixels may be determined, such that the first characteristic information may be determined from the pixel intensities and the average pixel intensity value of all pixels.
In a specific implementation, the following formula may be used to determine the first characteristic information:
[Formula given as an image in the original; per the description, S_{h,w,s} = 0 when I_{h,w} < I_mean, and is otherwise calculated from I_{h,w} and I_mean.]
wherein S_{h,w,s} is used for characterizing the first characteristic information, I_{h,w} is used for characterizing the pixel intensity of any one of all the pixels, I_mean is used for characterizing the average pixel intensity value, h is used for characterizing the height coordinate of the pixel in the image, and w is used for characterizing the width coordinate of the pixel in the image.
In this embodiment of the application, when the pixel intensity at a certain point is smaller than the average pixel intensity value, the corresponding feature value in the first feature information is 0; when the pixel intensity at a certain point is greater than or equal to the average pixel intensity value, the corresponding feature value is calculated by the formula above (given as an image in the original), so that the first characteristic information corresponding to the iris area is determined.
In the embodiment of the application, after the first feature information corresponding to the iris region is obtained, the first feature information may be input into a corresponding processing model, for example, a support vector machine, so that a first confidence corresponding to the first feature information may be obtained, where the first confidence is used to characterize a probability value that the first feature information is the reflection feature information of a real human eye. That is to say, the living body face detection provided in the embodiment of the present application uses the specular reflection feature of the eye to detect whether the eye area is live, and in such a detection manner, non-living eyes and living eyes can be distinguished more accurately.
In the embodiment of the application, when features are extracted from the eye predetermined region image, the reflection characteristic of the iris region of a living eye is used, instead of directly using features extracted by a convolutional neural network as in the related technology. That is, the processing mode in the application can describe the living body characteristics of the eye in more detail and more accurately, reduces the calculation amount, does not need a large number of samples for training, can adapt to different scenes, and has stronger generalization capability.
In the embodiment of the application, random sub-image collection can be performed on the determined face region image, that is, texture feature images of different positions of the face can be collected. Specifically, a predetermined number of pieces of face region sub-image information may be acquired, where the predetermined number may be 1, 3, or 5, which is not limited in this embodiment of the present application. For example, referring to fig. 5, 3 pieces of face region sub-image information may be acquired in the face region image. It should be noted that, for convenience of description, an image corresponding to the face region sub-image information may be referred to as a face region sub-image.
In a specific implementation process, the preset number can be determined by combining actual requirements, for example, if the face of the acquired user A image is small and the proportion of eyes relative to facial features is large, 1 piece of sub-image information of a face area can be acquired; or the proportion of the eyes of the acquired user B image relative to the facial features is small, and 2 pieces of sub-image information of the face area can be acquired; or, the face of the user C image is partially occluded, and sub-image information of 3 face regions may be acquired.
In the embodiment of the present application, the sizes of the face region sub-images may be the same, for example, each face region sub-image may be 96 pixels in width and height; or the sizes may be different, for example, the width and height of one face region sub-image are both 96 pixels, and the width and height of another face region sub-image are both 70 pixels, which is not limited in the embodiment of the present application. It should be noted that, in the embodiment of the present application, the acquired face region sub-images do not overlap; a minimal sampling sketch is given after this paragraph.
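A minimal sketch of such non-overlapping random sampling, assuming equal-size square crops and a simple rejection strategy (the application only requires that the sampled sub-images do not overlap, not how their positions are chosen):

```python
import numpy as np

def sample_sub_images(face_img: np.ndarray, count: int = 3, size: int = 96,
                      seed: int | None = None) -> list[np.ndarray]:
    """Randomly crop `count` non-overlapping size x size sub-images.

    Equal-size square crops and rejection sampling are assumptions; the
    patent only requires that the sampled sub-images do not overlap.
    """
    rng = np.random.default_rng(seed)
    h, w = face_img.shape[:2]
    corners: list[tuple[int, int]] = []
    while len(corners) < count:
        y = int(rng.integers(0, h - size + 1))
        x = int(rng.integers(0, w - size + 1))
        # Two equal squares overlap iff they are close on both axes, so
        # accept only candidates separated by >= size on at least one axis.
        if all(abs(y - cy) >= size or abs(x - cx) >= size for cy, cx in corners):
            corners.append((y, x))
    return [face_img[y:y + size, x:x + size] for y, x in corners]
```

For very small face images the rejection loop would need a retry limit; the sketch omits it for brevity.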
In the embodiment of the application, when features are extracted from the face region image, a method of randomly acquiring sub-images is used instead of directly scaling the whole face image as in the related technology, so that the loss of detail texture information of the face image can be avoided; that is, the method has stronger descriptive capability for distinguishing living bodies from non-living bodies, and the detection accuracy for living body faces is higher.
In the embodiment of the application, after the predetermined number of pieces of face region sub-image information are determined, they may be input into a preset deep-network-based classifier, so that a second confidence corresponding to the predetermined number of pieces of face region sub-image information may be obtained, where the second confidence is used to represent a probability value that the predetermined number of pieces of face region sub-image information correspond to real human face information. In the embodiment of the application, the preset deep-network-based classifier is a network structure for classifying texture features in the face region sub-image information, and its training data are sub-images randomly selected and cropped from faces of living bodies and faces of non-living bodies in pre-collected near-infrared scenes, so that the second confidence of the face region sub-image information can be accurately determined.
For example, referring to fig. 6, a face region sub-image with a width and a height of 96 pixels is collected, and then the sub-image is input into a preset depth network-based classifier, so that a second confidence level, i.e., Livescore in fig. 6, can be obtained.
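The training setup described above can be sketched as follows; the optimizer, learning rate, and binary cross-entropy loss are assumptions (the application only states what the training data are), and `loader` is a hypothetical iterable yielding batches of cropped sub-images with live/non-live labels:

```python
import torch
import torch.nn as nn

def train_classifier(model: nn.Module, loader) -> None:
    """One-epoch training sketch for the sub-image classifier.

    `loader` is a hypothetical iterable of (sub_images, labels) batches,
    where the sub-images are random crops from live and non-live
    near-infrared face images; the Adam optimizer, learning rate, and
    binary cross-entropy loss are assumptions.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()  # the sketched model outputs sigmoid probabilities
    model.train()
    for sub_images, labels in loader:
        optimizer.zero_grad()
        predictions = model(sub_images).squeeze(1)
        loss = loss_fn(predictions, labels.float())
        loss.backward()
        optimizer.step()
```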
In the embodiment of the application, after the first confidence degree and the second confidence degree are obtained, a first weight of the first feature information and a second weight of the predetermined number of face region subimage information may also be determined. The first weight is used for representing the proportion of eye features in the living features of the image to be detected, and the second weight is used for representing the proportion of face features in the living features of the image to be detected.
In a specific implementation process, the first weight may be set to be greater than the second weight, for example, the first weight is 0.6, and the second weight is 0.4; the first weight may also be set to be smaller than the second weight, and the specific setting mode may be set in combination with an actual implementation situation, which is not limited in the embodiment of the application. For example, if the first confidence corresponding to the first feature information determined according to the image to be detected is higher than a predetermined value (e.g., 0.5), the first weight may be set to be greater than the second weight.
Further, after the first weight and the second weight are determined, a first product value of the first weight and the first confidence and a second product value of the second weight and the second confidence may also be determined, and the first product value and the second product value are added to obtain the living body confidence of the image to be detected.
In the embodiment of the application, the image to be detected can be determined to be a living body face image or a non-living body face image according to the living body confidence coefficient of the image to be detected and a preset rule, wherein the preset rule is to determine the image to be detected as a living body face image or a non-living body face image according to the comparison result between the living body confidence coefficient and a preset threshold value. Specifically, if the living body confidence coefficient of the image to be detected is greater than or equal to the preset threshold value, the image to be detected is determined to be a living body face image; if the living body confidence coefficient of the image to be detected is smaller than the preset threshold value, the image to be detected is determined to be a non-living body face image. The preset threshold value may be determined according to actual implementation conditions.
For example, if the first confidence of the image to be detected is 0.7, the first weight is 0.7, the second confidence is 0.65, the second weight is 0.3, and the preset threshold is 0.6, the living body confidence may be determined as 0.7 × 0.7 + 0.65 × 0.3 = 0.685, that is, the living body confidence is greater than the preset threshold, so the image to be detected may be determined to be a living body face image.
In the embodiment of the application, the infrared camera is used when image information is collected, so that clearer face texture information and reflection characteristic information of eyes can be obtained. Then, when the acquired image to be detected is processed, different processing is respectively performed on the eyes and the face, specifically, reflection feature information corresponding to all pixels in an iris area is acquired for an image of a predetermined eye area, and a sub-image of a face area is randomly acquired for the face. Further, a first confidence coefficient and a second confidence coefficient corresponding to information obtained by processing eyes and faces in different ways are determined, so that the first confidence coefficient and the second confidence coefficient are fused to determine the living body confidence coefficient of the image to be detected, and whether the image to be detected is a living body face image is determined according to a preset rule.
The living body face detection method provided by the embodiment of the application can be applied to different scenes, does not need a large number of samples for training, and has strong generalization capability. It performs detection according to the reflection characteristics of living eyes and the texture information of different positions of the face, so living body face detection can be realized more accurately, improving the use experience.
Based on the same inventive concept, the embodiment of the application provides a living body face detection device, and the living body face detection device can realize the corresponding functions of the aforementioned living body face detection method. The living body face detection device can be a hardware structure, a software module, or a combination of a hardware structure and a software module. The living body face detection device can be realized by a chip system, and the chip system can be formed by a chip, or can comprise the chip and other discrete devices. Referring to fig. 7, the living body face detection device includes a receiving module 701, a first determining module 702, a second determining module 703, and a processing module 704. Wherein:
the receiving module 701 is configured to receive an image to be detected, and perform face and eye feature extraction processing on the image to be detected to obtain an eye predetermined region image corresponding to the eyes and a face region image other than the eye predetermined region image, where the image to be detected is an image including human face information acquired in a near-infrared scene;
a first determining module 702, configured to determine first feature information corresponding to an iris region in the eye predetermined region image, and determine a first confidence corresponding to the first feature information; the first characteristic information is used for representing reflection characteristic information of pixels in the iris region, and the first confidence is used for representing a probability value that the reflection characteristic information of the pixels corresponds to a real human eye;
a second determining module 703, configured to acquire a predetermined number of pieces of face region sub-image information from the face region image, and determine a second confidence corresponding to the predetermined number of pieces of face region sub-image information; the second confidence is used for representing a probability value that the predetermined number of pieces of face region sub-image information correspond to real human face information;
and the processing module 704 is configured to determine that the image to be detected is a living body face image or a non-living body face image according to the first confidence and the second confidence.
In a possible implementation, the first determining module 702 is configured to:
determining an iris region in the predetermined region image of the eye;
scaling the iris area by a preset proportion to obtain a processed iris area;
acquiring pixel intensities of all pixels in the processed iris region, and determining an average pixel intensity value of the pixel intensities of all pixels;
and determining first characteristic information according to the pixel intensity of all the pixels and the average pixel intensity value.
In a possible implementation, the first determining module 702 is configured to:
determining the first characteristic information using the following formula:
[Formula given as an image in the original; per the description, S_{h,w,s} = 0 when I_{h,w} < I_mean, and is otherwise calculated from I_{h,w} and I_mean.]
wherein S_{h,w,s} is used for characterizing the first characteristic information, I_{h,w} is used for characterizing the pixel intensity of any one of all the pixels, I_mean is used for characterizing the average pixel intensity value, h is used for characterizing the height coordinate of the pixel in the image, and w is used for characterizing the width coordinate of the pixel in the image.
In a possible implementation, the first determining module 702 is configured to:
and inputting the first characteristic information into a support vector machine to obtain a first confidence corresponding to the first characteristic information.
In a possible implementation, the second determining module 703 is configured to:
and inputting the preset number of face region subimage information into a preset classifier based on a deep network to obtain a second confidence corresponding to the preset number of face region subimage information.
In a possible implementation, the processing module 704 is configured to:
determining a first weight of the first feature information and a second weight of the predetermined number of pieces of face region sub-image information, wherein the first weight is used for representing the proportion of eye features in the living body features of the image to be detected, and the second weight is used for representing the proportion of face features in the living body features of the image to be detected;
determining a first product value of the first weight and the first confidence coefficient and a second product value of the second weight and the second confidence coefficient, and adding the first product value and the second product value to obtain the living body confidence coefficient of the image to be detected;
and determining the image to be detected as a living body face image or a non-living body face image according to the living body confidence coefficient of the image to be detected and a preset rule, wherein the preset rule is to determine the image to be detected as a living body face image or a non-living body face image according to the comparison result between the living body confidence coefficient and a preset threshold value.
In a possible implementation, the processing module 704 is configured to:
if the living body confidence coefficient of the image to be detected is greater than or equal to a preset threshold value, determining that the image to be detected is a living body face image;
and if the living body confidence coefficient of the image to be detected is smaller than a preset threshold value, determining that the image to be detected is a non-living body face image.
All relevant contents of the foregoing steps related to the embodiment of the living body face detection method shown in fig. 2 may be cited in the functional descriptions of the corresponding functional modules of the living body face detection device in the embodiment of the present application, and are not described herein again.
The division of the units in the embodiment of the present application is schematic and is only a logical function division; there may be other division manners in actual implementation. In addition, each functional unit in each embodiment of the present application may be integrated in one processor, may exist alone physically, or two or more units may be integrated in one unit. The integrated unit can be realized in the form of hardware, or in the form of a software functional unit.
Based on the same inventive concept, an embodiment of the present application further provides a computer device. As shown in fig. 8, the computer device in the embodiment of the present application includes at least one processor 801, and a memory 802 and a communication interface 803 connected to the at least one processor 801. The specific connection medium between the processor 801 and the memory 802 is not limited in the embodiment of the present application; in fig. 8, a connection between the processor 801 and the memory 802 through a bus 800 is taken as an example, with the bus 800 shown as a thick line, and the connection manner between other components is only schematic and is not limiting. The bus 800 may be divided into an address bus, a data bus, a control bus, etc.; for ease of illustration it is shown with only one thick line in fig. 8, but this does not mean that there is only one bus or one type of bus.
In the embodiment of the present application, the memory 802 stores instructions executable by the at least one processor 801, and the at least one processor 801 may execute the steps included in the foregoing living human face detection method by executing the instructions stored in the memory 802.
The processor 801 is a control center of the computer device, and may connect various parts of the whole computer device by using various interfaces and lines, and perform various functions of the computer device and process data by operating or executing instructions stored in the memory 802 and calling data stored in the memory 802, thereby performing overall monitoring of the computer device. Optionally, the processor 801 may include one or more processing units, and the processor 801 may integrate an application processor and a modem processor, wherein the processor 801 mainly processes an operating system, a user interface, an application program, and the like, and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 801. In some embodiments, the processor 801 and the memory 802 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
The processor 801 may be a general-purpose processor, such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, that may implement or perform the methods, steps, and logic blocks of the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method provided in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 802, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 802 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disc, and so on. The memory 802 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 802 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data. The communication interface 803 is a transmission interface that can be used for communication; data can be received or transmitted via the communication interface 803.
With reference to the further structural schematic diagram of the computer apparatus shown in fig. 9, the computer apparatus also includes a basic input/output system (I/O system) 901 for facilitating information transfer between the various devices within the computer apparatus, and a mass storage device 905 for storing an operating system 902, application programs 903, and other program modules 904.
The basic input/output system 901 comprises a display 906 for displaying information and an input device 907, such as a mouse, keyboard, etc., for a user to input information. Wherein a display 906 and an input device 907 are connected to the processor 801 through a basic input/output system 901 connected to the system bus 800. The basic input/output system 901 may also include an input/output controller for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, an input-output controller may also provide output to a display screen, a printer, or other type of output device.
The mass storage device 905 is connected to the processor 801 through a mass storage controller (not shown) connected to the system bus 800. The mass storage device 905 and its associated computer-readable media provide non-volatile storage for the computer device. That is, the mass storage device 905 may include a computer-readable medium (not shown), such as a hard disk or CD-ROM drive.
According to various embodiments of the present application, the computer device may also be operated by means of a remote computer connected through a network, such as the Internet. That is, the computer device may be connected to the network 908 via the communication interface 803 attached to the system bus 800, or may be connected to another type of network or remote computer system (not shown) using the communication interface 803.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as a memory 802 comprising instructions, executable by a processor 801 of an apparatus to perform the method described above. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In some possible embodiments, the aspects of the living human face detection method provided by the present application may also be implemented in the form of a program product, which includes program code that, when the program product runs on a computer device, causes the computer device to perform the steps of the living human face detection method according to the various exemplary embodiments of the present application described above in this specification.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A human face living body detection method, characterized by comprising the following steps:
receiving an image to be detected, and performing face and eye feature extraction on the image to be detected to obtain a predetermined eye region image corresponding to the eyes and a face region image other than the predetermined eye region image, wherein the image to be detected is an image containing human face information acquired in a near-infrared scene;
determining first feature information corresponding to an iris region in the predetermined eye region image, and determining a first confidence corresponding to the first feature information; wherein the first feature information is used for characterizing reflection feature information of pixels in the iris region, and the first confidence is used for characterizing a probability value that the reflection feature information of the pixels corresponds to a real human eye;
acquiring a preset number of pieces of face region sub-image information from the face region image, and determining a second confidence corresponding to the preset number of pieces of face region sub-image information; wherein the second confidence is used for characterizing a probability value that the preset number of pieces of face region sub-image information is real human face information;
and determining, according to the first confidence and the second confidence, that the image to be detected is a living body face image or a non-living body face image.
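For orientation, the following Python sketch shows how the four claimed steps could compose end to end. Every callable passed in (the region extractor, the iris feature, the two classifiers, and the sub-image splitter) is a hypothetical stand-in, since the claims do not name concrete modules, and the weights and threshold are illustrative values, not from the patent.

```python
import numpy as np

def detect_live_face(image, extract_regions, iris_feature, svm, deep_clf,
                     split_sub_images, w_eye=0.5, w_face=0.5, threshold=0.5):
    """Sketch of claim 1: two confidence branches fused into one decision."""
    # Step 1: face/eye feature extraction -> eye region + remaining face region.
    eye_region, face_region = extract_regions(image)

    # Step 2: reflection feature of the iris region -> first confidence.
    feat = iris_feature(eye_region)
    conf_eye = float(svm.predict_proba(feat.reshape(1, -1))[0, 1])

    # Step 3: a preset number of face sub-images -> second confidence.
    patches = split_sub_images(face_region)
    conf_face = float(np.mean([deep_clf(p) for p in patches]))

    # Step 4: weighted fusion and threshold decision (detailed in claims 6-7).
    live_conf = w_eye * conf_eye + w_face * conf_face
    return live_conf >= threshold
```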
2. The method of claim 1, wherein determining the first feature information corresponding to the iris region in the predetermined eye region image comprises:
determining the iris region in the predetermined eye region image;
scaling the iris region by a preset proportion to obtain a processed iris region;
acquiring the pixel intensities of all pixels in the processed iris region, and determining an average pixel intensity value of the pixel intensities of all the pixels;
and determining the first feature information according to the pixel intensities of all the pixels and the average pixel intensity value.
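A minimal sketch of these sub-steps, assuming the iris has already been located as a bounding box by an upstream detector; neither the detector nor the value of the preset proportion is fixed by the patent, so both are assumptions here.

```python
import numpy as np

def iris_intensity_stats(eye_image, iris_bbox, scale=0.8):
    """Shrink the iris box by a preset proportion (0.8 is an assumed value),
    then return the per-pixel intensities and their mean.

    eye_image: 2-D grayscale array; iris_bbox: (x, y, w, h) from a
    hypothetical upstream iris detector.
    """
    x, y, w, h = iris_bbox
    # Scale the box about its centre so the processed region stays inside
    # the iris and avoids eyelid and sclera pixels.
    cx, cy = x + w / 2.0, y + h / 2.0
    w2, h2 = int(w * scale), int(h * scale)
    x2, y2 = int(cx - w2 / 2.0), int(cy - h2 / 2.0)
    iris = eye_image[y2:y2 + h2, x2:x2 + w2].astype(np.float32)

    return iris, float(iris.mean())
```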
3. The method of claim 2, wherein determining the first feature information according to the pixel intensities of all the pixels and the average pixel intensity value comprises:
determining the first feature information using the following formula:
[Formula image FDA0002496925370000021 — the formula is reproduced only as an image in the source; it defines S_{h,w} in terms of I_{h,w} and I_mean]
wherein S_{h,w} is used for characterizing the first feature information, I_{h,w} is used for characterizing the pixel intensity of any one of the pixels, I_mean is used for characterizing the average pixel intensity value, h is used for characterizing the height coordinate of the pixel in the image, and w is used for characterizing the width coordinate of the pixel in the image.
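Because the formula survives only as an image, its exact form is not recoverable from the text. Purely as an illustration of the variables listed above, one plausible reading — a per-pixel deviation from the mean intensity — could look like the following; this is an assumption, not the patented formula.

```python
import numpy as np

def first_feature(intensities: np.ndarray, i_mean: float) -> np.ndarray:
    """HYPOTHETICAL reconstruction only: computes S[h, w] as the normalized
    deviation (I[h, w] - I_mean) / I_mean. The patent's actual formula is
    available only in the original figure.
    """
    return (intensities - i_mean) / max(i_mean, 1e-6)
```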
4. The method of any one of claims 1-3, wherein determining the first confidence corresponding to the first feature information comprises:
inputting the first feature information into a support vector machine to obtain the first confidence corresponding to the first feature information.
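Claim 4 names only a support vector machine; as one hedged realization, scikit-learn's probability-calibrated SVC yields such a confidence directly. The training data, kernel choice, and helper names below are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_eye_svm(features: np.ndarray, labels: np.ndarray) -> SVC:
    """features: (n_samples, n_dims) flattened reflection-feature maps;
    labels: 1 for real human eyes, 0 for spoofs. Both are assumed inputs."""
    svm = SVC(kernel="rbf", probability=True)  # probability=True enables predict_proba
    svm.fit(features, labels)
    return svm

def first_confidence(svm: SVC, feature_map: np.ndarray) -> float:
    """Feed the first feature information into the SVM and read the
    probability of the 'real human eye' class as the first confidence."""
    return float(svm.predict_proba(feature_map.reshape(1, -1))[0, 1])
```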
5. The method of any one of claims 1-3, wherein determining the second confidence corresponding to the preset number of pieces of face region sub-image information comprises:
inputting the preset number of pieces of face region sub-image information into a preset classifier based on a deep network to obtain the second confidence corresponding to the preset number of pieces of face region sub-image information.
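The "preset classifier based on a deep network" is not specified further; the small PyTorch CNN below is one assumed shape for it, and averaging the per-patch probabilities into a single second confidence is likewise an assumption.

```python
import torch
import torch.nn as nn

class FacePatchClassifier(nn.Module):
    """Assumed architecture for the preset deep-network classifier:
    two conv blocks followed by a sigmoid 'live' probability head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

def second_confidence(model: FacePatchClassifier, patches: torch.Tensor) -> float:
    """patches: (preset_number, 1, H, W) grayscale sub-images; returns the
    mean live probability over the preset number of face-region sub-images."""
    model.eval()
    with torch.no_grad():
        return model(patches).mean().item()
```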
6. The method of claim 1, wherein determining, according to the first confidence and the second confidence, that the image to be detected is a living body face image or a non-living body face image comprises:
determining a first weight of the first feature information and a second weight of the preset number of pieces of face region sub-image information, wherein the first weight is used for characterizing the proportion of eye features in the living body features of the image to be detected, and the second weight is used for characterizing the proportion of face features in the living body features of the image to be detected;
determining a first product value of the first weight and the first confidence and a second product value of the second weight and the second confidence, and adding the first product value and the second product value to obtain a living body confidence of the image to be detected;
and determining, according to the living body confidence of the image to be detected and a preset rule, that the image to be detected is a living body face image or a non-living body face image, wherein the preset rule is to determine, according to a comparison result of the living body confidence and a preset threshold, that the image to be detected is a living body face image or a non-living body face image.
7. The method of claim 6, wherein determining, according to the living body confidence of the image to be detected and the preset rule, that the image to be detected is a living body face image or a non-living body face image comprises:
if the living body confidence of the image to be detected is greater than or equal to the preset threshold, determining that the image to be detected is a living body face image;
and if the living body confidence of the image to be detected is smaller than the preset threshold, determining that the image to be detected is a non-living body face image.
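Claims 6 and 7 reduce to a weighted sum and a threshold comparison; a sketch with illustrative weights and threshold follows (none of the numeric values come from the patent).

```python
def fuse_and_decide(conf_eye: float, conf_face: float,
                    w_eye: float = 0.6, w_face: float = 0.4,
                    threshold: float = 0.5) -> bool:
    """Weight each branch by its assumed share of the liveness evidence,
    sum the products (claim 6), and compare the living body confidence
    against the preset threshold (claim 7). Returns True for a live face."""
    live_confidence = w_eye * conf_eye + w_face * conf_face
    return live_confidence >= threshold

# Example (values hypothetical): a strong iris reflection and a moderate
# face score clear the threshold, so the image is judged a live face.
# fuse_and_decide(0.9, 0.6)  -> True
```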
8. A human face living body detection apparatus, characterized by comprising:
a receiving module, configured to receive an image to be detected and perform face and eye feature extraction on the image to be detected to obtain a predetermined eye region image corresponding to the eyes and a face region image other than the predetermined eye region image, wherein the image to be detected is an image containing human face information acquired in a near-infrared scene;
a first determining module, configured to determine first feature information corresponding to an iris region in the predetermined eye region image and determine a first confidence corresponding to the first feature information; wherein the first feature information is used for characterizing reflection feature information of pixels in the iris region, and the first confidence is used for characterizing a probability value that the reflection feature information of the pixels corresponds to a real human eye;
a second determining module, configured to acquire a preset number of pieces of face region sub-image information from the face region image and determine a second confidence corresponding to the preset number of pieces of face region sub-image information; wherein the second confidence is used for characterizing a probability value that the preset number of pieces of face region sub-image information is real human face information;
and a processing module, configured to determine, according to the first confidence and the second confidence, that the image to be detected is a living body face image or a non-living body face image.
9. A computer device, characterized in that the computer device comprises:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing, in accordance with the obtained program instructions, the steps included in the method of any one of claims 1 to 7.
10. A storage medium storing computer-executable instructions for causing a computer to perform the steps included in the method of any one of claims 1 to 7.
CN202010421136.5A 2020-05-18 2020-05-18 Human face living body detection method and device and computer equipment Active CN111626163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010421136.5A CN111626163B (en) 2020-05-18 2020-05-18 Human face living body detection method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN111626163A true CN111626163A (en) 2020-09-04
CN111626163B CN111626163B (en) 2023-04-07

Family

ID=72258941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010421136.5A Active CN111626163B (en) 2020-05-18 2020-05-18 Human face living body detection method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN111626163B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5291560A (en) * 1991-07-15 1994-03-01 Iri Scan Incorporated Biometric personal identification system based on iris analysis
CN103337088A (en) * 2013-07-10 2013-10-02 北京航空航天大学 Human face image light and shadow editing method based on edge preserving
CN103955717A (en) * 2014-05-13 2014-07-30 第三眼(天津)生物识别科技有限公司 Iris activity detecting method
CN105389553A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Living body detection method and apparatus

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836625A (en) * 2021-01-29 2021-05-25 汉王科技股份有限公司 Face living body detection method and device and electronic equipment
CN112883940A (en) * 2021-04-13 2021-06-01 深圳市赛为智能股份有限公司 Silent in-vivo detection method, silent in-vivo detection device, computer equipment and storage medium
CN113255516A (en) * 2021-05-24 2021-08-13 展讯通信(天津)有限公司 Living body detection method and device and electronic equipment
CN113421317A (en) * 2021-06-10 2021-09-21 浙江大华技术股份有限公司 Method and system for generating image and electronic equipment
CN113421317B (en) * 2021-06-10 2023-04-18 浙江大华技术股份有限公司 Method and system for generating image and electronic equipment
CN116311553A (en) * 2023-05-17 2023-06-23 武汉利楚商务服务有限公司 Human face living body detection method and device applied to semi-occlusion image
CN116311553B (en) * 2023-05-17 2023-08-15 武汉利楚商务服务有限公司 Human face living body detection method and device applied to semi-occlusion image

Also Published As

Publication number Publication date
CN111626163B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111626163B (en) Human face living body detection method and device and computer equipment
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN105893920B (en) Face living body detection method and device
CN110060237B (en) Fault detection method, device, equipment and system
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
CN108875540B (en) Image processing method, device and system and storage medium
CN109815843B (en) Image processing method and related product
CN109086718A (en) Biopsy method, device, computer equipment and storage medium
CN108875542B (en) Face recognition method, device and system and computer storage medium
CN108875535B (en) Image detection method, device and system and storage medium
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
CN109816694B (en) Target tracking method and device and electronic equipment
CN111008935B (en) Face image enhancement method, device, system and storage medium
US11727707B2 (en) Automatic image capture system based on a determination and verification of a physical object size in a captured image
US20240013572A1 (en) Method for face detection, terminal device and non-transitory computer-readable storage medium
CN114155365A (en) Model training method, image processing method and related device
CN115482523A (en) Small object target detection method and system of lightweight multi-scale attention mechanism
CN108875501B (en) Human body attribute identification method, device, system and storage medium
CN113570615A (en) Image processing method based on deep learning, electronic equipment and storage medium
CN108875467B (en) Living body detection method, living body detection device and computer storage medium
CN116311395B (en) Fingerprint identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant