CN111126366A - Method, device, equipment and storage medium for distinguishing living human face - Google Patents

Method, device, equipment and storage medium for distinguishing living human face

Publication number: CN111126366A (application CN202010248023.XA; granted as CN111126366B)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 谭明奎, 梁创闰, 李代远, 刘璟, 谢政, 吴希贤
Assignee: Hunan GMax Intelligent Technology Co ltd
Legal status: Active (granted)

Classifications

    • G06V 40/172 (G: physics; G06: computing; G06V: image or video recognition; G06V 40/16: human faces; classification, e.g. identification)
    • G06V 40/45 (G: physics; G06: computing; G06V: image or video recognition; G06V 40/40: spoof detection; detection of the body part being alive)

Abstract

The application discloses a method for discriminating a living human face, comprising the following steps: training, in advance, a first living body face discrimination model based on near-infrared images and a second living body face discrimination model based on visible-light images; when a face image to be recognized exists, performing living body discrimination on it with the first and second models respectively, to obtain a first and a second discrimination result; and determining a target discrimination result from the two results using a multi-modal fusion strategy. The method thereby resists video attacks and improves discrimination accuracy; it also shortens discrimination time, makes living body discrimination more convenient for the user, and improves the user experience. The application further discloses a device, equipment, and a computer-readable storage medium for discriminating living human faces, all of which share these advantages.

Description

Method, device, equipment and storage medium for distinguishing living human face
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for discriminating a living human face.
Background
In recent years, face recognition technology has been widely applied in fields such as security, access control, and payment. To ensure the safety and reliability of face recognition, the face captured by the face recognition system must come from the legitimate user, that is, a living face, rather than from deceptive media such as face photos or videos. In the prior art, living body judgment is generally performed in a visible-light scene by requiring the user to perform corresponding actions in response to instructions from the face recognition system. This approach remains vulnerable to video attack, so its discrimination accuracy is low; moreover, because it requires a high degree of user cooperation and is time-consuming, it degrades the user experience.
Therefore, how to discriminate living human faces more efficiently, while improving both discrimination accuracy and the user experience, is a technical problem that those skilled in the art currently need to solve.
Disclosure of Invention
In view of the above, one object of the present invention is to provide a method for discriminating a living human face that discriminates living human faces more efficiently while improving discrimination accuracy and the user experience; another object of the present invention is to provide an apparatus, a device, and a computer-readable storage medium for discriminating living human faces, all of which have the above advantages.
In order to solve the above technical problem, the present invention provides a method for discriminating a living human face, including:
training a first living body face discrimination model based on a near infrared image and a second living body face discrimination model based on a visible light image in advance;
when a face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing the first living body face discrimination model and the second living body face discrimination model to obtain a first discrimination result and a second discrimination result;
and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
Preferably, after the pre-training of the first living body face discrimination model based on the near-infrared image and the second living body face discrimination model based on the visible light image, the method further includes:
obtaining a corresponding quantization scale through analog quantization training;
and respectively carrying out quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models.
Preferably, the process of obtaining the corresponding quantization scale through the simulation quantization training specifically includes:
determining the distribution of the activation value of each network layer in the first living body face discrimination model and the second living body face discrimination model by using a preset calibration data set;
determining corresponding activation value quantization distributions based on different thresholds, and calculating the similarity of each activation value quantization distribution and the activation value distribution of the corresponding network layer;
selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
determining the scale of an activation value of the model according to the target threshold;
and quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by using the activation value scale to obtain the scale of the model weight.
Preferably, when there is a face image to be recognized, the process of respectively performing living body discrimination on the face image to be recognized by using the first living body face discrimination model and the second living body face discrimination model to obtain a first discrimination result and a second discrimination result specifically includes:
when the face image to be recognized exists, preprocessing the face image to be recognized;
respectively extracting the face features in the preprocessed face image to be recognized by utilizing the first living body face distinguishing model and the second living body face distinguishing model;
and respectively carrying out living body judgment on the face image to be recognized according to the extracted face features to obtain the first judgment result and the second judgment result.
Preferably, the preprocessing operation specifically includes: grayscale processing and/or image enhancement.
Preferably, after the preprocessing operation is performed on the face image to be recognized, the method further includes:
and performing alignment processing and size cropping on the face image to be recognized.
Preferably, when the target discrimination result is a non-living human face, corresponding prompt information is issued.
In order to solve the above technical problem, the present invention further provides a device for discriminating a living human face, including:
the pre-training module is used for pre-training a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image;
the judging module is used for respectively judging the living bodies of the face images to be recognized by utilizing the first living body face judging model and the second living body face judging model when the face images to be recognized exist, so as to obtain a first judging result and a second judging result;
and the determining module is used for determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
In order to solve the above technical problem, the present invention further provides equipment for discriminating a living human face, including:
a memory for storing a computer program;
and the processor is used for realizing the steps of any one of the living human face discrimination methods when the computer program is executed.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of any one of the above methods for discriminating a living human face.
The invention provides a living body face discrimination method in which a first living body face discrimination model based on near-infrared images and a second living body face discrimination model based on visible-light images are trained in advance; when a face image to be recognized exists, living body discrimination is performed on it with the first and second models respectively, to obtain a first and a second discrimination result; and a target discrimination result is then determined from the two results using a multi-modal fusion strategy. Performing living body discrimination with both a near-infrared model and a visible-light model defeats video attacks and improves discrimination accuracy. In addition, the method can perform living body discrimination when the input is only a single frame of the face image to be recognized, without requiring the user to perform actions in response to system instructions; this saves discrimination time, makes living body discrimination more convenient for the user, and improves the user experience.
In order to solve the technical problem, the invention also provides a device, equipment and a computer readable storage medium for distinguishing the human face of the living body, which have the beneficial effects.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for discriminating a living human face according to an embodiment of the present invention;
FIG. 2 is a block diagram of the deep neural network;
Fig. 3 is a flowchart of another living human face discrimination method according to an embodiment of the present invention;
Fig. 4 is a structural diagram of a device for discriminating a living human face according to an embodiment of the present invention;
Fig. 5 is a structural diagram of equipment for discriminating a living human face according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The core of the embodiment of the invention is to provide a living body face distinguishing method, which can more efficiently distinguish the living body face and improve the use experience of a user on the basis of improving the distinguishing accuracy of the living body face; another core of the present invention is to provide a device, an apparatus and a computer-readable storage medium for discriminating a human face, all of which have the above advantages.
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of a method for discriminating a living human face according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
s10: a first living body face distinguishing model based on a near infrared image and a second living body face distinguishing model based on a visible light image are trained in advance.
Specifically, in this embodiment, training data sets are first collected: a first training data set for training the first living body face discrimination model based on near-infrared images, and a second training data set for training the second living body face discrimination model based on visible-light images. The first training data set contains positive and negative face samples captured in a near-infrared scene, where the near-infrared wavelength is 850 nm; the second training data set contains positive and negative face samples captured in a visible-light scene. In both sets the negative samples include visible-light printed-photo attacks, near-infrared printed-photo attacks, 3D mask attacks, and the like.
The training data sets are then preprocessed. Specifically, a face detector locates the face position in each near-infrared face image; the detected face is aligned by a five-point normalization operation, cropped to a preset size (for example 112 × 112), and a mask is added to remove the non-face part of the image. The five-point alignment proceeds as follows: the coordinates of five landmarks of the face, namely the outer corner of the left eye, the outer corner of the right eye, the tip of the nose, and the two corners of the mouth, are detected and compared with the corresponding five landmark coordinates of a standard face, and the detected face is aligned to the standard face to obtain a normalized face picture. It can be understood that the mask suppresses information from the non-face region, so that the face features stand out more. Correspondingly, the visible-light face images in the training data set are detected, aligned, cropped, and masked in the same way as the near-infrared face images, so the details are not repeated here.
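The five-point alignment described above can be sketched in Python/NumPy. This is an illustrative sketch: the reference landmark coordinates and the use of a least-squares similarity transform (Umeyama's method) are assumptions, since the patent only specifies that five detected landmarks are matched against the corresponding landmarks of a standard face.

```python
import numpy as np

# Reference landmark positions for a 112x112 crop (illustrative values, not
# taken from the patent): left/right outer eye corners, nose tip, mouth corners.
REF_5PTS = np.array([
    [38.3, 51.7],   # left eye outer corner
    [73.5, 51.5],   # right eye outer corner
    [56.0, 71.7],   # nose tip
    [41.5, 92.4],   # left mouth corner
    [70.7, 92.2],   # right mouth corner
])

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst landmarks, via Umeyama's method.
    Returns a 2x3 affine matrix M suitable for image warping."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    M = np.zeros((2, 3))
    M[:, :2] = scale * R
    M[:, 2] = t
    return M  # apply with e.g. cv2.warpAffine(img, M, (112, 112))
```

Given the five detected landmarks, `similarity_transform(detected, REF_5PTS)` yields the warp that aligns the face to the standard face before cropping and masking.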
Each image in the preprocessed training data set is then converted into a grayscale image and input into a predetermined deep neural network for training, yielding the first living body face discrimination model based on near-infrared images and the second living body face discrimination model based on visible-light images.
It should be noted that, as shown in Fig. 2, the deep neural network in this embodiment is modified from MobileNetV2. The network for near-infrared images takes a single-channel 112 × 112 × 1 image as input and applies a depthwise convolution after the 4 × 4 feature map, so that the model focuses more on the face information at the center of the image; the resulting 64-channel 2 × 2 feature map is flattened into a 256-dimensional vector. The network for visible-light images has the same structure, except that its input is an RGB image, that is, a 3-channel image of size 112 × 112 × 3. It should also be noted that in this embodiment the first and second living body face discrimination models are preferably trained using central difference convolution; compared with training a living body face discrimination model with ordinary convolution as in the prior art, central difference convolution attends more to local information, so the trained first and second models achieve higher discrimination accuracy.
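A minimal single-channel sketch of central difference convolution follows. The blending factor `theta` follows the published central-difference-convolution formulation; the patent itself does not give the exact formula, so treat this as an assumption-laden illustration rather than the patented operator.

```python
import numpy as np

def conv2d(x, w):
    """Plain 'valid' 2D cross-correlation for a single-channel image."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w)
    return out

def central_difference_conv2d(x, w, theta=0.7):
    """Central difference convolution: blends vanilla convolution with a
    term aggregating differences from each patch center, which emphasises
    fine local gradients. theta = 0 reduces to vanilla convolution.
    Assumes an odd-sized kernel."""
    vanilla = conv2d(x, w)
    kh, kw = w.shape
    centers = x[kh // 2 : x.shape[0] - kh // 2, kw // 2 : x.shape[1] - kw // 2]
    return vanilla - theta * w.sum() * centers
```

On a constant image the central-difference term cancels all local structure, leaving only the attenuated vanilla response, which is exactly the behavior that makes the operator sensitive to local texture such as print and mask artifacts.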
Specifically, the loss function in this embodiment is preferably set to the Focal loss function. It can be understood that in the living body face discrimination task there are many types of negative samples, and using the Focal loss function effectively mitigates the imbalance between positive and negative samples. More specifically, the Focal loss is:

FL(p_t) = -α (1 - p_t)^γ log(p_t)

where the parameter α is used to balance the proportion of positive- and negative-sample losses, and its value is typically set to 0.25; the parameter γ makes the model focus more on hard-to-classify samples, and its value is typically set to 3; and p_t is the probability that the sample under test is judged as a positive sample.
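The Focal loss described above can be sketched numerically as follows. The per-sample α/p_t handling of negative samples follows the standard Focal loss convention and is an assumption; the patent only states the positive-sample form and the typical values α = 0.25 and γ = 3.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=3.0, eps=1e-7):
    """Focal loss for binary liveness classification.
    p: predicted probability of the positive (live) class
    y: ground-truth label, 1 = live, 0 = spoof
    alpha balances positive/negative sample losses; gamma down-weights
    easy, well-classified samples so training focuses on hard ones."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)           # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)   # class-balancing weight
    return -at * (1.0 - pt) ** gamma * np.log(pt)
```

With γ = 0 the expression collapses to class-weighted cross-entropy; increasing γ suppresses the loss contribution of confidently classified samples.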
S20: when a face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing a first living body face discrimination model and a second living body face discrimination model to obtain a first discrimination result and a second discrimination result;
s30: and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
Specifically, after the first living body face discrimination model based on near-infrared images and the second living body face discrimination model based on visible-light images have been trained, when a face image to be recognized exists, face features are extracted from it by the first and second models respectively; living body discrimination is performed on the face image according to the extracted features to obtain the corresponding first and second discrimination results; and a multi-modal fusion strategy then yields the target discrimination result from the first and second results. The target discrimination result indicates whether the face image to be recognized is a living face or a non-living face. It should be noted that in this embodiment, if the score of the first discrimination result is greater than a first preset threshold (for example, 0.9), the target discrimination result is judged to be a living body; if the score of the first discrimination result is lower than a second preset threshold (for example, 0.1), the target discrimination result is judged to be a non-living body; and if the score of the first discrimination result lies between the first and second preset thresholds, the scores of the first and second discrimination results are combined by weighted fusion and compared with a third preset threshold: a living body is judged if the fused score is greater than or equal to the third preset threshold, and a non-living body is judged if it is less than the third preset threshold.
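The multi-modal fusion strategy above can be sketched as follows. The third threshold and the fusion weight are illustrative assumptions, since the embodiment only gives example values (0.9 and 0.1) for the first two thresholds.

```python
def fuse_liveness_scores(nir_score, vis_score,
                         high=0.9, low=0.1, fused_thresh=0.5, w_nir=0.5):
    """Multi-modal fusion: trust a confident near-infrared score outright;
    otherwise fall back to a weighted fusion of the near-infrared and
    visible-light scores. fused_thresh and w_nir are illustrative values."""
    if nir_score > high:          # first preset threshold
        return "live"
    if nir_score < low:           # second preset threshold
        return "spoof"
    fused = w_nir * nir_score + (1.0 - w_nir) * vis_score
    return "live" if fused >= fused_thresh else "spoof"
```

For example, with an ambiguous NIR score of 0.5, a strong visible-light score of 0.9 fuses to 0.7 and is judged live, while a weak one of 0.1 fuses to 0.3 and is judged spoof.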
According to the living body face discrimination method provided by the embodiment of the invention, a first living body face discrimination model based on near-infrared images and a second living body face discrimination model based on visible-light images are trained in advance; when a face image to be recognized exists, living body discrimination is performed on it with the first and second models respectively, to obtain a first and a second discrimination result; and a target discrimination result is then determined from the two results using a multi-modal fusion strategy. Performing living body discrimination with both a near-infrared model and a visible-light model defeats video attacks and improves discrimination accuracy. In addition, the method can perform living body discrimination when the input is only a single frame of the face image to be recognized, without requiring the user to perform actions in response to system instructions; this saves discrimination time, makes living body discrimination more convenient for the user, and improves the user experience.
As shown in Fig. 3, the flowchart of another living human face discrimination method, this embodiment further illustrates and optimizes the technical solution on the basis of the above embodiment. Specifically, after the first living body face discrimination model based on near-infrared images and the second living body face discrimination model based on visible-light images are trained in advance, the method further includes:
s40: obtaining a corresponding quantization scale through analog quantization training;
s50: and respectively carrying out quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models.
In this embodiment, after a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image are trained in advance, a corresponding quantization scale is further obtained through analog quantization training, and then the first living body face discrimination model and the second living body face discrimination model are respectively subjected to quantization compression according to the obtained quantization scale to obtain quantized compressed models; and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed model.
It should be noted that, in this embodiment, the process of obtaining the corresponding quantization scale through the analog quantization training specifically includes:
determining the distribution of the activation value of each network layer in the first living body face discrimination model and the second living body face discrimination model by using a preset calibration data set;
determining corresponding activation value quantization distributions based on different thresholds, and calculating the similarity of each activation value quantization distribution and the activation value distribution of the corresponding network layer;
selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
determining the scale of an activation value of the model according to a target threshold;
and quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by using the activation value scale to obtain the scale of the weight of the model.
Specifically, in this embodiment, the distribution of the activation values of each network layer is obtained through a calibration data set, for example as a histogram of activation values represented by 32-bit floating-point numbers. Corresponding activation value quantization distributions are then determined based on different thresholds, where the thresholds may be integers in the interval [128, 8192) and each quantization distribution may be a histogram of activation values represented by 8-bit integers. The similarity between each activation value quantization distribution and the activation value distribution of the corresponding network layer is then calculated; in this embodiment, the similarity is preferably derived by computing the relative entropy, that is, the Kullback-Leibler (KL) divergence, between each activation value quantization distribution and the activation value distribution of the corresponding network layer. The activation value quantization distribution with the highest similarity, namely the one with the minimum KL divergence, is selected as the activation value quantization target distribution of the corresponding network layer, yielding the corresponding target threshold; and the activation value scale of the model is determined from the target threshold.
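The KL-divergence threshold search can be sketched as below. This is a simplified, assumption-laden version of entropy calibration in the style popularized by TensorRT: the histogram bin count, the candidate threshold grid, and the uniform redistribution of merged bins are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def best_quant_threshold(activations, bins=2048, candidates=range(128, 2048, 16)):
    """Pick the clipping threshold whose 8-bit quantized histogram has the
    smallest KL divergence from the full-precision activation histogram,
    then derive the activation scale from that threshold."""
    hist, edges = np.histogram(np.abs(activations), bins=bins)
    best_t, best_kl = None, np.inf
    for t in candidates:
        # reference distribution: clip, folding the tail into the last kept bin
        p = hist[:t].astype(float).copy()
        p[-1] += hist[t:].sum()
        p /= p.sum()
        # candidate distribution: re-express the kept bins with 128 levels
        q = np.zeros(t)
        idx = np.linspace(0, t, 129).astype(int)
        for k in range(128):
            lo, hi = idx[k], idx[k + 1]
            q[lo:hi] = hist[lo:hi].sum() / max(hi - lo, 1)
        q /= q.sum()
        mask = p > 0
        kl = np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-12)))
        if kl < best_kl:
            best_kl, best_t = kl, t
    scale = edges[best_t] / 127.0   # activation scale from the chosen threshold
    return best_t, scale
```

A production calibrator would redistribute mass only over non-zero bins and iterate per network layer; the sketch keeps the core idea: minimum-KL threshold selection followed by scale derivation.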
Specifically, the activation value scale of the model may be obtained from the formula

x_q = round(x / s_a)

where x denotes a model activation value, x_q denotes the model activation value quantization target value, and s_a denotes the model activation value scale, determined from the target threshold T as s_a = T / 127. The model activation value scale s_a is loaded and held fixed during the simulated quantization training; the model weights of the first living body face discrimination model and the second living body face discrimination model are then quantized using the activation value scale of the models to obtain the scale of the model weights.
Specifically, the formula

s_w = max(|w|) / Q

is used to directly quantize the model weights and obtain the scale of the model weights, where max(|w|) denotes the maximum absolute value of the weights and Q denotes the absolute value of the quantization value range to which the model needs to be quantized, which is set to 127 in this embodiment.
In this embodiment, the activation value scale and the weight scale of the model are the quantization scales, and the first living body face discrimination model and the second living body face discrimination model are quantized and compressed using the activation value scale and the weight scale.
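Under the weight-scale formula above, symmetric int8 weight quantization reduces to a few lines. Per-tensor granularity is an assumption; per-channel scales are equally common in practice.

```python
import numpy as np

def quantize_weights(w, qmax=127):
    """Symmetric per-tensor int8 weight quantization: the weight scale is
    the maximum absolute weight divided by the quantization range Q."""
    scale = np.abs(w).max() / qmax
    w_q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return w_q, scale

def dequantize(w_q, scale):
    """Map int8 weights back to floating point for simulated quantization."""
    return w_q.astype(np.float32) * scale
```

The round-trip error of each weight is bounded by half the scale, which is what makes the compressed models a close stand-in for the originals on embedded hardware.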
Therefore, a corresponding quantization scale is obtained through simulated quantization training; the first and second living body face discrimination models are quantized and compressed according to the quantization scale and then replaced by the compressed models. This makes both models more lightweight, so that they can be deployed on embedded devices such as ARM (Advanced RISC Machine) processors or DSPs (digital signal processors), making the application more convenient.
On the basis of the foregoing embodiment, this embodiment further describes and optimizes the technical solution, and specifically, in this embodiment, when a face image to be recognized exists, a process of respectively performing living body discrimination on the face image to be recognized by using a first living body face discrimination model and a second living body face discrimination model to obtain a first discrimination result and a second discrimination result includes:
when the face image to be recognized exists, preprocessing the face image to be recognized;
respectively extracting the face features in the preprocessed face image to be recognized by utilizing a first living body face distinguishing model and a second living body face distinguishing model;
and respectively carrying out living body judgment on the face image to be recognized according to the extracted face features to obtain a first judgment result and a second judgment result.
It should be noted that the main purpose of the preprocessing operation performed on the face image to be recognized is to eliminate irrelevant information in the face image to be recognized, recover useful real information, enhance the detectability of the face information, and simplify data to the maximum extent. Specifically, in this embodiment, a preprocessing operation is further performed on the face image to be recognized, and then the face features in the preprocessed face image to be recognized are respectively extracted by using the first living body face discrimination model and the second living body face discrimination model, so that the face features of the face image to be recognized are more accurately extracted.
As a preferred embodiment, the preprocessing operation specifically comprises: grayscale processing and/or image enhancement.
Specifically, the grayscale processing refers to an operation of displaying a face image to be recognized in a grayscale color mode; the gray processing method includes four methods, i.e., a component method, a maximum value method, an average value method, and a weighted average method, and the specific operation mode of the gray processing is not limited in this embodiment.
Image enhancement purposefully emphasizes the overall or local characteristics of the face image to be recognized: it turns an originally unclear image into a clear one or accentuates the facial features, enlarges the differences between features of different regions of the image, and suppresses uninteresting features. This strengthens recognition and feature extraction on the face image to be recognized, so that its facial features can be extracted more accurately.
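The four grayscale methods named above can be sketched directly. The luminance coefficients 0.299/0.587/0.114 in the weighted-average branch are the common ITU-R BT.601 weights; the patent itself does not specify which weights to use:

```python
import numpy as np

def to_grayscale(rgb, method="weighted", channel=0):
    """Convert an H x W x 3 float RGB image to H x W grayscale using one of
    the four methods mentioned above."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "component":       # component method: keep one chosen channel
        return rgb[..., channel]
    if method == "max":             # maximum-value method
        return np.max(rgb, axis=-1)
    if method == "average":         # average-value method
        return rgb.mean(axis=-1)
    if method == "weighted":        # weighted-average method (BT.601 weights)
        return 0.299 * r + 0.587 * g + 0.114 * b
    raise ValueError(f"unknown method: {method}")

img = np.zeros((2, 2, 3))
img[..., 1] = 1.0                   # a pure-green test image
gray = to_grayscale(img)            # every pixel -> 0.587
```

On the pure-green test image the methods diverge visibly: the weighted average yields 0.587, the maximum-value method 1.0, and the average-value method 1/3, which is why the choice of method is left open in the embodiment.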
As a preferred embodiment, after the preprocessing operation on the face image to be recognized, this embodiment further includes: performing alignment processing and size cropping on the face image to be recognized.
The alignment processing refers to detecting the size and position of the face with a face detector and then aligning the face region using a five-point normalization operation; size cropping refers to cropping the face image to be recognized to the image size used by the training data set.
In this embodiment, aligning and cropping the face image to be recognized reduces the influence of non-face features on the accuracy of living body discrimination, thereby improving discrimination accuracy.
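A minimal sketch of the size-cropping step follows. Full five-point alignment requires detected landmarks and a similarity transform, so this example shows only the center crop to the training size; 112×112 is an assumed value common in face recognition pipelines, not one stated by the patent:

```python
import numpy as np

def crop_to_training_size(face, out_h=112, out_w=112):
    # Center-crop the (already aligned) face image to the training-set size.
    h, w = face.shape[:2]
    top = max((h - out_h) // 2, 0)
    left = max((w - out_w) // 2, 0)
    return face[top:top + out_h, left:left + out_w]

patch = crop_to_training_size(np.ones((128, 130, 3)))
print(patch.shape)   # (112, 112, 3)
```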
On the basis of the above embodiment, the present embodiment further describes and optimizes the technical solution, and specifically, in the present embodiment, when the target determination result is a non-living human face, corresponding prompt information is sent.
Specifically, in this embodiment, the target discrimination result has two cases: the face image to be recognized is a living face, or it is a non-living face. When the discrimination result is a non-living face, the system may be under a spoofing attack, so the prompting device is triggered to send out corresponding prompt information. It should be noted that, in actual operation, the prompting device may be a buzzer, an indicator light, a display, or the like, and the corresponding prompt information is whatever that device emits when activated, such as a buzzer sound, a flashing indicator light, or content shown on the display; the specific type of prompt information is not limited in this embodiment.
Therefore, in the embodiment, the corresponding prompt information is further sent when the target judgment result is a non-living human face, so that the use experience of the user can be further improved.
The above detailed description is made on the embodiment of the method for discriminating a living human face provided by the present invention, and the present invention also provides a device, an apparatus, and a computer-readable storage medium for discriminating a living human face corresponding to the method.
Fig. 4 is a structural diagram of an apparatus for discriminating a living body face according to an embodiment of the present invention, and as shown in fig. 4, the apparatus for discriminating a living body face includes:
a pre-training module 41, configured to pre-train a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image;
the judging module 42 is configured to, when there is a face image to be recognized, perform living body judgment on the face image to be recognized by using the first living body face judgment model and the second living body face judgment model respectively to obtain a first judgment result and a second judgment result;
and the determining module 43 is configured to determine the target discrimination result according to the first discrimination result and the second discrimination result by using a multi-mode fusion strategy.
The device for discriminating a living human face provided by the embodiment of the invention provides the same beneficial effects as the above method for discriminating a living human face.
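The patent names a multi-mode fusion strategy for the determining module without fixing its form. Two plausible instantiations, both hypothetical rather than taken from the source, are a conservative decision-level AND and a weighted score-level fusion:

```python
def fuse_decisions(is_live_nir: bool, is_live_vis: bool) -> bool:
    # Decision-level fusion: accept as a living face only when both the
    # near-infrared and the visible-light models agree.
    return is_live_nir and is_live_vis

def fuse_scores(p_nir: float, p_vis: float,
                w_nir: float = 0.5, threshold: float = 0.5) -> bool:
    # Score-level fusion: threshold a weighted average of liveness scores.
    return w_nir * p_nir + (1.0 - w_nir) * p_vis >= threshold

print(fuse_decisions(True, False))   # False: one modality flags a spoof
print(fuse_scores(0.9, 0.3))         # True: average 0.6 passes 0.5
```

The AND rule is the stricter of the two, since a printed photo that fools the visible-light model typically lacks the skin reflectance seen by the near-infrared model; score fusion trades some of that strictness for robustness to a single noisy modality.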
As a preferred embodiment, an apparatus for discriminating a human face of a living body further includes:
the scale training module is used for obtaining a corresponding quantization scale through analog quantization training;
and the quantization compression module is used for respectively performing quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models.
As a preferred embodiment, the scale training module specifically includes:
the first determining unit is used for determining the distribution of the activation value of each network layer in the first living body face distinguishing model and the second living body face distinguishing model by using a preset calibration data set;
the calculation unit is used for determining corresponding activation value quantization distributions based on different threshold values and calculating the similarity between each activation value quantization distribution and the corresponding activation value distribution of the network layer;
the selection unit is used for selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
the second determining unit is used for determining the activation value scale of the model according to the target threshold;
and the quantization unit is used for quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by using the activated value scale to obtain the scale of the model weight.
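A simplified sketch of the calibration performed by the scale training module: for each candidate clipping threshold, simulate quantization of the calibration activations, histogram the result, and keep the threshold whose distribution stays closest to the original. KL divergence is used here as the similarity measure, one common choice; the patent does not name one. Production calibrators (for example TensorRT-style entropy calibration) refine this considerably:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    # KL(p || q) between two histograms, normalized internally.
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def calibrate_activation_scale(acts, candidates, bins=64, q_max=127):
    """Return (best clipping threshold, activation scale = threshold / q_max)."""
    a = np.abs(np.asarray(acts, dtype=float))
    hi = a.max()
    ref, _ = np.histogram(a, bins=bins, range=(0.0, hi))
    best_t, best_kl = None, np.inf
    for t in candidates:
        # Clip to the threshold, round onto q_max levels (simulated
        # quantization at scale t / q_max), then dequantize.
        sim = np.round(np.clip(a, 0.0, t) / t * q_max) * t / q_max
        quant, _ = np.histogram(sim, bins=bins, range=(0.0, hi))
        kl = kl_divergence(ref.astype(float), quant.astype(float))
        if kl < best_kl:
            best_t, best_kl = t, kl
    return best_t, best_t / q_max

acts = np.linspace(0.0, 1.0, 1000)   # toy calibration activations
t, scale = calibrate_activation_scale(acts, candidates=[0.25, 1.0, 4.0])
```

For these uniform toy activations, a threshold of 0.25 clips away most of the distribution and a threshold of 4.0 spreads the 127 levels too coarsely, so the calibration selects 1.0 and the activation scale 1/127, matching the second determining unit's rule of deriving the scale from the target threshold.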
As a preferred embodiment, the determining module specifically includes:
the preprocessing unit is used for preprocessing the face image to be recognized when the face image to be recognized exists;
the extraction unit is used for respectively extracting the face features in the preprocessed face image to be recognized by utilizing the first living body face discrimination model and the second living body face discrimination model;
and the judging unit is used for respectively judging the living body of the face image to be recognized according to the extracted face features to obtain a first judging result and a second judging result.
As a preferred embodiment, an apparatus for discriminating a human face of a living body further includes:
and the prompting module is used for sending out corresponding prompting information when the target judgment result is a non-living human face.
Fig. 5 is a structural diagram of an apparatus for discriminating a living body face according to an embodiment of the present invention, and as shown in fig. 5, the apparatus for discriminating a living body face includes:
a memory 51 for storing a computer program;
and a processor 52 for implementing the steps of the above method for discriminating a human face of a living body when executing a computer program.
The living human face discrimination device provided by the embodiment of the invention provides the same beneficial effects as the living human face discrimination method described above.
In order to solve the above technical problem, the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above method for discriminating a living human face.
The computer-readable storage medium provided by the embodiment of the invention provides the same beneficial effects as the living human face discrimination method described above.
The method, apparatus, device and computer readable storage medium for discriminating a living human face provided by the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are set forth only to help understand the method and its core ideas of the present invention. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

Claims (8)

1. A method for discriminating a human face of a living body, comprising:
training a first living body face discrimination model based on a near infrared image and a second living body face discrimination model based on a visible light image in advance;
obtaining a corresponding quantization scale through analog quantization training; wherein the quantization scale comprises an activation value scale of the model and a scale of weights of the model;
determining the distribution of the activation value of each network layer in the first living body face discrimination model and the second living body face discrimination model by using a preset calibration data set;
determining corresponding activation value quantization distributions based on different thresholds, and calculating the similarity of each activation value quantization distribution and the activation value distribution of the corresponding network layer;
selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
determining the activation value scale of the model according to the target threshold;
respectively quantizing the model weights of the first living body face distinguishing model and the second living body face distinguishing model by using the scale of the activation value of the model to obtain the scale of the weight of the model;
respectively carrying out quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale, and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models;
when a face image to be recognized exists, respectively carrying out living body discrimination on the face image to be recognized by utilizing the first living body face discrimination model and the second living body face discrimination model to obtain a first discrimination result and a second discrimination result;
and determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
2. The method according to claim 1, wherein the process of respectively performing living body discrimination on the face image to be recognized by using the first living body face discrimination model and the second living body face discrimination model when the face image to be recognized exists to obtain a first discrimination result and a second discrimination result specifically comprises:
when the face image to be recognized exists, preprocessing the face image to be recognized;
respectively extracting the face features in the preprocessed face image to be recognized by utilizing the first living body face distinguishing model and the second living body face distinguishing model;
and respectively carrying out living body judgment on the face image to be recognized according to the extracted face features to obtain the first judgment result and the second judgment result.
3. The method according to claim 2, characterized in that said preprocessing operation comprises in particular: grayscale processing and/or image enhancement.
4. The method according to claim 3, wherein after the preprocessing operation on the face image to be recognized, the method further comprises:
and carrying out alignment processing and size cutting on the image to be identified.
5. The method according to any one of claims 1 to 4, wherein when the target discrimination result is a non-living human face, corresponding prompt information is sent out.
6. An apparatus for discriminating a human face of a living body, comprising:
the pre-training module is used for pre-training a first living body face discrimination model based on a near-infrared image and a second living body face discrimination model based on a visible light image;
the scale training module is used for obtaining a corresponding quantization scale through analog quantization training; wherein the quantization scale comprises an activation value scale of the model and a scale of weights of the model;
wherein, the scale training module specifically comprises:
a first determining unit, configured to determine, by using a preset calibration data set, an activation value distribution of each network layer in the first living body face discrimination model and the second living body face discrimination model;
the calculation unit is used for determining corresponding activation value quantization distributions based on different threshold values and calculating the similarity between each activation value quantization distribution and the corresponding activation value distribution of the network layer;
the selection unit is used for selecting the activation value quantization distribution corresponding to the highest similarity as the activation value quantization target value distribution of the corresponding network layer to obtain a corresponding target threshold;
the second determining unit is used for determining the activation value scale of the model according to the target threshold;
the quantization unit is used for quantizing the model weights of the first living body face discrimination model and the second living body face discrimination model respectively by utilizing the activated value scale to obtain the scale of the weight of the models;
the quantization compression module is used for respectively performing quantization compression on the first living body face discrimination model and the second living body face discrimination model according to the quantization scale and updating the first living body face discrimination model and the second living body face discrimination model by using the quantized and compressed models;
the judging module is used for respectively judging the living bodies of the face images to be recognized by utilizing the first living body face judging model and the second living body face judging model when the face images to be recognized exist, so as to obtain a first judging result and a second judging result;
and the determining module is used for determining a target discrimination result according to the first discrimination result and the second discrimination result by adopting a multi-mode fusion strategy.
7. An apparatus for discriminating a human face of a living body, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method for discriminating a living human face according to any one of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, implements the steps of the method for discriminating a living body face according to any one of claims 1 to 5.
CN202010248023.XA 2020-04-01 2020-04-01 Method, device, equipment and storage medium for distinguishing living human face Active CN111126366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010248023.XA CN111126366B (en) 2020-04-01 2020-04-01 Method, device, equipment and storage medium for distinguishing living human face


Publications (2)

Publication Number Publication Date
CN111126366A true CN111126366A (en) 2020-05-08
CN111126366B CN111126366B (en) 2020-06-30

Family

ID=70493947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010248023.XA Active CN111126366B (en) 2020-04-01 2020-04-01 Method, device, equipment and storage medium for distinguishing living human face

Country Status (1)

Country Link
CN (1) CN111126366B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860405A (en) * 2020-07-28 2020-10-30 Oppo广东移动通信有限公司 Quantification method and device of image recognition model, computer equipment and storage medium
CN111931594A (en) * 2020-07-16 2020-11-13 广州广电卓识智能科技有限公司 Face recognition living body detection method and device, computer equipment and storage medium
CN111967319A (en) * 2020-07-14 2020-11-20 高新兴科技集团股份有限公司 Infrared and visible light based in-vivo detection method, device, equipment and storage medium
CN112329624A (en) * 2020-11-05 2021-02-05 北京地平线信息技术有限公司 Living body detection method and apparatus, storage medium, and electronic device
CN113128481A (en) * 2021-05-19 2021-07-16 济南博观智能科技有限公司 Face living body detection method, device, equipment and storage medium
CN115512428A (en) * 2022-11-15 2022-12-23 华南理工大学 Human face living body distinguishing method, system, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015040001A2 (en) * 2013-09-19 2015-03-26 Muehlbauer Ag Device, system and method for identifying a person
US9202105B1 (en) * 2012-01-13 2015-12-01 Amazon Technologies, Inc. Image analysis for user authentication
CN108509984A (en) * 2018-03-16 2018-09-07 新智认知数据服务有限公司 Activation value quantifies training method and device
CN109766800A (en) * 2018-12-28 2019-05-17 华侨大学 A kind of construction method of mobile terminal flowers identification model
CN110008783A (en) * 2018-01-04 2019-07-12 杭州海康威视数字技术股份有限公司 Human face in-vivo detection method, device and electronic equipment based on neural network model
CN110276301A (en) * 2019-06-24 2019-09-24 泰康保险集团股份有限公司 Face identification method, device, medium and electronic equipment
CN110659617A (en) * 2019-09-26 2020-01-07 杭州艾芯智能科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JACOB BENOIT ET AL.: "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference", 《ARXIV: LEARNING》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967319A (en) * 2020-07-14 2020-11-20 高新兴科技集团股份有限公司 Infrared and visible light based in-vivo detection method, device, equipment and storage medium
CN111967319B (en) * 2020-07-14 2024-04-12 高新兴科技集团股份有限公司 Living body detection method, device, equipment and storage medium based on infrared and visible light
CN111931594A (en) * 2020-07-16 2020-11-13 广州广电卓识智能科技有限公司 Face recognition living body detection method and device, computer equipment and storage medium
CN111860405A (en) * 2020-07-28 2020-10-30 Oppo广东移动通信有限公司 Quantification method and device of image recognition model, computer equipment and storage medium
CN112329624A (en) * 2020-11-05 2021-02-05 北京地平线信息技术有限公司 Living body detection method and apparatus, storage medium, and electronic device
CN113128481A (en) * 2021-05-19 2021-07-16 济南博观智能科技有限公司 Face living body detection method, device, equipment and storage medium
CN115512428A (en) * 2022-11-15 2022-12-23 华南理工大学 Human face living body distinguishing method, system, device and storage medium

Also Published As

Publication number Publication date
CN111126366B (en) 2020-06-30

Similar Documents

Publication Publication Date Title
CN111126366B (en) Method, device, equipment and storage medium for distinguishing living human face
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US10699103B2 (en) Living body detecting method and apparatus, device and storage medium
WO2020207423A1 (en) Skin type detection method, skin type grade classification method and skin type detection apparatus
CN109815797B (en) Living body detection method and apparatus
CN109117755B (en) Face living body detection method, system and equipment
US20190026606A1 (en) To-be-detected information generating method and apparatus, living body detecting method and apparatus, device and storage medium
CN107832721B (en) Method and apparatus for outputting information
CN110102051B (en) Method and device for detecting game plug-in
CN109858375A (en) Living body faces detection method, terminal and computer readable storage medium
CN111626371A (en) Image classification method, device and equipment and readable storage medium
CN109325472B (en) Face living body detection method based on depth information
CN109271941A (en) A kind of biopsy method for taking the photograph attack based on anti-screen
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
CN112183356A (en) Driving behavior detection method and device and readable storage medium
CN114842524B (en) Face false distinguishing method based on irregular significant pixel cluster
CN110059607B (en) Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium
CN113627256B (en) False video inspection method and system based on blink synchronization and binocular movement detection
CN111881706A (en) Living body detection, image classification and model training method, device, equipment and medium
CN112613471B (en) Face living body detection method, device and computer readable storage medium
CN107423864A (en) The analysis method and device of crewman's behavior
CN112926364B (en) Head gesture recognition method and system, automobile data recorder and intelligent cabin
CN115376210B (en) Drowning behavior identification method, device, equipment and medium for preventing drowning in swimming pool
CN108985350B (en) Method and device for recognizing blurred image based on gradient amplitude sparse characteristic information, computing equipment and storage medium
CN111163332A (en) Video pornography detection method, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant