CN109409322B - Living body detection method and device, face recognition method and face detection system - Google Patents


Info

Publication number
CN109409322B
Authority
CN
China
Prior art keywords
living body
face
processing
feature
characteristic data
Prior art date
Legal status
Active
Application number
CN201811329008.7A
Other languages
Chinese (zh)
Other versions
CN109409322A (en)
Inventor
王耀华
刘志伟
陈宇
刘巍
殷向阳
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201811329008.7A
Publication of CN109409322A
Application granted
Publication of CN109409322B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40: Spoof detection, e.g. liveness detection
    • G06V 40/45: Detection of the body part being alive

Abstract

The disclosure provides a living body detection method and device, a face recognition method, and a face detection system, relating to the technical field of face recognition. The living body detection method of the present disclosure includes: acquiring face picture feature data from a face image through deep learning; and processing the face picture feature data through a channel-domain-based attention mechanism to determine a living body recognition result, where the living body recognition result indicates whether the face image is a living body image or a non-living body image. By this method, the face picture feature data extracted during deep learning can be applied to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection.

Description

Living body detection method and device, face recognition method and face detection system
Technical Field
The present disclosure relates to the field of face recognition technology, and in particular, to a method and an apparatus for detecting a living body, a face recognition method, and a face detection system.
Background
With the development of biometric technology, face recognition has matured considerably; under good illumination and pose conditions, a face recognition system can accurately detect and recognize a face.
A related face recognition system may confirm the identity of the user in an image and pass security verification without distinguishing non-living information such as photos and videos. With the rapid development of network information, the cost of acquiring a user's photos, videos and similar information keeps decreasing, which reduces the security of face recognition systems.
In order to enhance the security of the face recognition system, a living body recognition safeguard needs to be added before recognition.
Disclosure of Invention
The inventor finds that related living body detection techniques often require the user to actively cooperate by making a specified action, which is cumbersome to operate and inefficient for screening, or can only screen out non-living images that are static photos, with low accuracy.
One of the purposes of the present disclosure is to improve the efficiency and accuracy of living body detection while preserving its convenience.
According to an aspect of some embodiments of the present disclosure, there is provided a living body detection method, including: acquiring face picture feature data from a face image through deep learning; and processing the face picture feature data through a channel-domain-based attention mechanism to determine a living body recognition result, wherein the living body recognition result indicates whether the face image is a living body image or a non-living body image.
In some embodiments, the face image feature data is extracted from the face image by a neural network model of a face recognition system.
In some embodiments, processing the facial picture feature data via a channel domain-based attention mechanism comprises: acquiring biological characteristics and non-biological characteristics through a neural network convolution layer according to the face picture characteristic data; obtaining pooling characteristics through a pooling layer; and according to the pooling characteristics, correlating biological characteristics and non-biological characteristics through a fully-connected neural network to obtain a processing result.
In some embodiments, correlating the biological features and the non-biological features via a fully connected neural network to obtain the processing result comprises: inputting the pooled features into the first fully connected layer, and passing them sequentially through a ReLU (Rectified Linear Unit), the second fully connected layer, and a sigmoid (S-shaped growth curve) function to obtain the processing result, so that the living body recognition result is determined according to the processing result.
In some embodiments, processing the face picture feature data via the channel-domain-based attention mechanism comprises: processing the face picture feature data through the channel-domain-based attention mechanism to obtain primary processing data; and processing the primary processing data through a convolutional neural network and then again through the channel-domain-based attention mechanism to obtain the living body recognition result.
In some embodiments, processing the face picture feature data via the channel-domain-based attention mechanism comprises: processing the face picture feature data through the channel-domain-based attention mechanism; and cyclically passing the processing result through a convolutional neural network and then through the channel-domain-based attention mechanism again, until the number of channel-domain-based attention processing passes reaches a preset number of cycles, and determining the living body recognition result according to the final processing result.
In some embodiments, the living body detection method further comprises: performing scale scaling on each channel in the face picture feature data using the output of the sigmoid function to obtain optimized face picture feature data, so that face recognition is performed according to the optimized face picture feature data.
In some embodiments, the living body detection method further comprises: enhancing the biological features in the face picture feature data by using the processing result of processing the face picture feature data through the channel-domain-based attention mechanism, to obtain optimized face picture feature data; and performing face recognition according to the optimized face picture feature data.
By this method, the face picture feature data extracted during deep learning can be applied to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection.
According to an aspect of some embodiments of the present disclosure, a face recognition method is provided, including: extracting face picture characteristic data from a face image through a neural network model; determining a result of the living body identification by any of the above living body detection methods; enhancing biological characteristics in the face image characteristic data by using a processing result of processing the face image characteristic data through an attention mechanism based on a channel domain to obtain optimized face image characteristic data; and executing face recognition according to the optimized face picture characteristic data.
By this method, the face picture feature data extracted by deep learning in face recognition can be applied to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection; moreover, the biological features in the face picture feature data can be enhanced, improving the accuracy of face recognition.
According to an aspect of still other embodiments of the present disclosure, there is provided a living body detection apparatus including: the feature acquisition module is configured to acquire face image feature data in a face image through deep learning; the feature processing module is configured to process the face picture feature data through an attention mechanism based on a channel domain; and the living body recognition module is configured to determine a living body recognition result according to the processing result of the characteristic processing module, wherein the living body recognition result comprises that the face image is a living body image or a non-living body image.
In some embodiments, the feature acquisition module is a neural network model of a face recognition system.
In some embodiments, the feature processing module comprises: the convolution layer is configured to acquire biological features and non-biological features according to the face picture feature data; a pooling layer configured to obtain pooled features; and the full-connection processing unit is configured to associate the biological characteristics and the non-biological characteristics through the full-connection neural network according to the pooling characteristics and acquire a processing result.
In some embodiments, the fully connected processing unit is configured to: and inputting the pooling characteristics into the first full connection layer, and sequentially passing through the ReLU, the second full connection layer and the sigmoid function to obtain a processing result so as to determine a living body identification result according to the processing result.
In some embodiments, the living body detection device comprises more than two feature processing modules which are connected in series at intervals through a convolutional neural network; the living body identification module is configured to determine a living body identification result from a processing result of the last feature processing module in the series.
In some embodiments, the feature processing module is further configured to perform scale scaling processing on each channel in the face picture feature data by using the output of the sigmoid function, and obtain optimized face picture feature data, so as to perform face recognition according to the optimized face picture feature data.
In some embodiments, the feature processing module is further configured to enhance the biological features in the face image feature data by using the processing result of the feature processing module to obtain optimized face image feature data; the living body detecting apparatus further includes: and the face recognition module is configured to execute face recognition according to the optimized face picture characteristic data.
According to an aspect of still further embodiments of the present disclosure, there is provided a living body detection apparatus including: a memory; and a processor coupled to the memory, the processor configured to perform any of the above living body detection methods based on instructions stored in the memory.
Such a living body detection device can apply the face picture feature data extracted during deep learning to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection.
According to an aspect of some embodiments of the present disclosure, a face detection system is provided, including: any of the above living body detecting means; and, a face recognition device configured to: extracting face picture characteristic data from a face image; enhancing biological characteristics in the face image characteristic data by utilizing a living body detection device through a processing result of processing the face image characteristic data based on an attention mechanism of a channel domain to obtain optimized face image characteristic data; and executing face recognition according to the optimized face picture characteristic data.
According to an aspect of some embodiments of the present disclosure, a face detection system is provided, including: a memory; and a processor coupled to the memory, the processor configured to execute a face recognition method based on the instructions stored in the memory.
Such a face detection system can apply the face picture feature data extracted by deep learning in face recognition to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection; moreover, the biological features in the face picture feature data can be enhanced, improving the accuracy of face recognition.
Further, according to an aspect of some embodiments of the present disclosure, a computer-readable storage medium is proposed, on which computer program instructions are stored, which instructions, when executed by a processor, implement the steps of any of the methods above.
By executing the instructions on the computer-readable storage medium, the face picture feature data extracted during deep learning can be applied to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
FIG. 1 is a flow chart of one embodiment of a liveness detection method of the present disclosure.
FIG. 2 is a flow chart of one embodiment of a channel domain based attention mechanism process in the liveness detection method of the present disclosure.
FIG. 3 is a flow chart of another embodiment of a liveness detection method of the present disclosure.
Fig. 4 is a flowchart of an embodiment of a face recognition method of the present disclosure.
FIG. 5 is a schematic view of one embodiment of a liveness detection device of the present disclosure.
FIG. 6 is a schematic diagram of one embodiment of a feature processing module in the liveness detection device of the present disclosure.
FIG. 7 is a schematic view of another embodiment of a processing procedure of the living body detecting device of the present disclosure.
Fig. 8 is a schematic diagram of an embodiment of a face detection system of the present disclosure.
Fig. 9 is a schematic diagram of an embodiment of a liveness detection device or a face detection system of the present disclosure.
Fig. 10 is a schematic diagram of another embodiment of a liveness detection device or a face detection system of the present disclosure.
Detailed Description
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
The inventor finds that related living body detection methods follow several ideas:
1. Live face detection using the gray-level co-occurrence matrix and wavelet analysis. Gray-level compression is performed on the gray image of the face region, co-occurrence matrices are then computed, and four texture feature quantities are extracted from the gray-level co-occurrence matrices to compute their mean and variance; meanwhile, the original image is decomposed twice with a Haar wavelet basis, the sub-band coefficient matrices are extracted, and their mean and variance are computed; finally, all feature values are taken as samples to be detected and sent to a trained SVM (Support Vector Machine) for detection, classifying and recognizing real and fake face images. However, this method can only discriminate photo spoofing and is not effective against video spoofing.
2. Inputting continuous face images (discarding a pair if two adjacent face images are not in the same state, and requiring several continuous face images), determining the pupil position of each face image and cutting out the eye region; open-eye and closed-eye samples are trained through a support vector machine training method and the iterative algorithm AdaBoost, the open/closed state of the eye is finally judged, and the subject passes the living body judgment if a blinking process exists.
3. An action set is predefined (including blinking, eyebrow raising, eye closing, glaring, smiling and the like); when a user takes the liveness check, the system selects one or more actions from the action set each time, randomly specifies the number of times each action must be completed, and requires the user to complete them within a specified time. This mode also requires active cooperation of the user, is cumbersome to use, has low efficiency, is easily affected by the external environment, and has a low success rate of passing detection.
A flowchart of one embodiment of a liveness detection method of the present disclosure is shown in fig. 1.
In step 101, facial image feature data in a facial image is obtained through deep learning. In one embodiment, the facial picture feature data may be extracted from the facial image by a neural network model of a face recognition system.
Suppose I ∈ R^{H×W×C} is the face picture feature, where H is the height of the picture, W is the width of the picture, and C is the number of channels of the picture (e.g., a standard RGB picture has 3 channels). A convolution layer of the deep neural network obtains new picture features by convolving its convolution kernels with the previous layer's picture features, and each convolution kernel can generate a new channel on the existing channels. After the face picture feature I passes through a convolution layer with N convolution kernels, a new feature F ∈ R^{H×W×C′} is generated; the convolution keeps the picture size, and C′ = N·C is the number of channels of the new feature. After passing through a pooling layer, the face picture feature F ∈ R^{H×W×C′} is down-sampled into a new picture feature with reduced spatial size. Therefore, after the face picture feature passes through several layers of the deep learning neural network, I is extracted into a feature F ∈ R^{H′×W′×C′}, where H′ is the height of the picture after the deep learning neural network, W′ is its width, and C′ is its number of channels.
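As a purely illustrative sketch (assuming PyTorch; the layer sizes are hypothetical and not taken from the disclosure), the following shows how a small convolution-and-pooling backbone turns a face picture of size H×W×C into a feature map F of size H′×W′×C′:

```python
# Minimal sketch (assumes PyTorch); layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),  # convolution keeps the picture size
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),                                          # pooling down-samples the spatial size
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
)

face = torch.randn(1, 3, 112, 112)   # one face picture: batch x C x H x W
features = backbone(face)            # feature map F: batch x C' x H' x W'
print(features.shape)                # torch.Size([1, 64, 28, 28])
```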
In step 102, the living body recognition result is determined by processing the face picture feature data through the channel-domain-based attention mechanism. The living body recognition result indicates whether the face image is a living body image or a non-living body image. In one embodiment, the channel-domain-based attention mechanism can internally convert the face picture feature F ∈ R^{H′×W′×C′} into a biological feature F_p ∈ R^{H′×W′×C_p} and a non-biological feature F_n ∈ R^{H′×W′×C_n}, where C_p is the number of channels of the biological feature and C_n is the number of channels of the non-biological feature. The two kinds of features interact with each other; when the face image is a non-living image, the biological features are suppressed and cannot be recognized, and the face image is determined to be a non-living image.
By this method, the face picture feature data extracted during deep learning can be applied to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection.
A flowchart of one embodiment of the channel domain-based attention mechanism processing in the liveness detection method of the present disclosure is shown in fig. 2.
In step 201, according to the face picture feature data, the biological feature F_p ∈ R^{H′×W′×C_p} and the non-biological feature F_n ∈ R^{H′×W′×C_n} are obtained through the neural network convolution layer.
In step 202, all features are globally pooled by the pooling layer, producing a pooled feature with one value per channel (size 1×1×C′).
In step 203, the biological features and the non-biological features are associated through the fully connected neural network according to the pooled feature, and a processing result is obtained.
In one embodiment, the pooled feature may be input into the first fully connected layer and then passed sequentially through the ReLU and the second fully connected layer. The two fully connected layers associate the biological features with the non-biological features to jointly determine whether the picture features are living features, and a sigmoid function unit outputs the processing result M, a per-channel value between 0 and 1. Whether the information in the picture is living body information can then be determined from this processing result.
By this method, the biological features and non-biological features can be extracted by the convolution layer and associated through the fully connected layers, realizing channel-domain-based attention processing of the face picture feature data.
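To make the pipeline of steps 201-203 concrete, here is a minimal sketch of such a channel-domain attention block, assuming PyTorch; the interpretation of the convolution output as biological/non-biological channels, the reduction ratio, and the decision rule are illustrative assumptions rather than the patented implementation:

```python
import torch
import torch.nn as nn

class ChannelAttentionLiveness(nn.Module):
    """Sketch of a channel-domain attention block: conv -> global pool -> FC -> ReLU -> FC -> sigmoid."""
    def __init__(self, in_channels: int, out_channels: int, reduction: int = 4):
        super().__init__()
        # Step 201: a convolution whose output channels are interpreted as
        # biological / non-biological feature channels (the split is illustrative).
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # Step 202: global pooling, one value per channel.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Step 203: two fully connected layers associate the channels with each other.
        self.fc1 = nn.Linear(out_channels, out_channels // reduction)
        self.fc2 = nn.Linear(out_channels // reduction, out_channels)

    def forward(self, x: torch.Tensor):
        f = self.conv(x)                                       # features F, shape (B, C', H', W')
        s = self.pool(f).flatten(1)                            # pooled feature, shape (B, C')
        m = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))   # processing result M in (0, 1)
        return f, m

# Usage sketch: derive a liveness decision from M (the aggregation rule is an assumption).
block = ChannelAttentionLiveness(in_channels=64, out_channels=64)
f, m = block(torch.randn(1, 64, 28, 28))
liveness_score = m.mean().item()                               # e.g. treat the mean activation as a score
print("live" if liveness_score > 0.5 else "non-live")
```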
A flow chart of another embodiment of the liveness detection method of the present disclosure is shown in fig. 3.
In step 301, facial image feature data in a facial image is obtained through deep learning.
In step 302, the face picture feature data is processed through an attention mechanism based on a channel domain, and a processing result is obtained.
In step 303, the processing result is processed again by the attention mechanism based on the channel domain after being processed by the convolutional neural network, and a new processing result is obtained.
In step 304, it is determined whether the number of executions of step 303 has reached a predetermined number of cycles. If the predetermined number of cycles has been reached, step 305 is performed; if not, step 303 is executed again. In one embodiment, the parameters of each channel-domain-based attention processing pass may differ, and different parameters such as the number of channels can be set as needed.
In step 305, a living body identification result is determined from the processing result of the last cycle processing.
By this method, after the features are processed by the channel-domain-based attention mechanism, the picture features can be activated, and processing them again realizes repeated interactive recognition and detection of the features, improving detection accuracy.
In one embodiment, the predetermined number of cycles may be 1; that is, primary processing data is obtained after the face picture feature data is processed by the channel-domain-based attention mechanism, the primary processing data is passed through the convolutional neural network and processed again by the channel-domain-based attention mechanism to obtain a processing result, and the living body recognition result is determined according to that processing result, ensuring recognition efficiency while improving accuracy.
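A minimal sketch of this cycling, assuming PyTorch and the hypothetical ChannelAttentionLiveness block sketched above; the number of cycles and the channel counts are illustrative parameters, not values from the disclosure:

```python
# Interleave a predetermined number of attention passes with ordinary convolution layers.
import torch.nn as nn

class CycledLivenessNet(nn.Module):
    def __init__(self, channels: int = 64, num_cycles: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList(
            [ChannelAttentionLiveness(channels, channels) for _ in range(num_cycles)]
        )
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1) for _ in range(num_cycles - 1)]
        )

    def forward(self, x):
        m = None
        for i, block in enumerate(self.blocks):
            x, m = block(x)                 # channel-domain attention processing
            if i < len(self.convs):
                x = self.convs[i](x)        # pass through a convolutional layer before the next cycle
        return m                            # processing result of the last pass decides liveness
```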
In one embodiment, as shown in fig. 3, step 306 may further be included: enhancing the biological features in the face picture feature data using the processing result of the last cycle to obtain optimized face picture feature data, so that face recognition is performed according to the optimized face picture feature data.
In one embodiment, the output M of the sigmoid function may be used to perform scale scaling on each channel of the face picture feature data, for example using the following formula:

F′{i} = F{i} * M{i}

where M{i} is a scale operator that correspondingly enlarges or reduces the channel information of each channel i in the picture features.
By this method, when the image is a living body image, the biological features among the features can be enhanced, optimized face picture feature data can be obtained, and the accuracy of face recognition can be improved.
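A short sketch of this per-channel rescaling, assuming PyTorch and the shapes used in the earlier sketches (illustrative only):

```python
# Rescale each channel i of the feature map F by M{i}, i.e. F'{i} = F{i} * M{i},
# to enhance the biological channels before face recognition.
import torch

def rescale_channels(f: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    """f: feature map of shape (B, C', H', W'); m: per-channel weights of shape (B, C')."""
    return f * m.unsqueeze(-1).unsqueeze(-1)   # broadcast M over the spatial dimensions

# Usage sketch with the block defined earlier:
# f, m = block(features)
# enhanced = rescale_channels(f, m)            # optimized face picture features for recognition
```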
A flow chart of one embodiment of a face recognition method of the present disclosure is shown in fig. 4.
In step 401, facial image feature data is extracted from the facial image through the neural network model. In one embodiment, the feature data of the face picture can be obtained by using a feature extraction function in the related face recognition technology.
In step 402, a living body recognition result is determined using any of the living body detection methods described above.
In step 403, the biological features in the face image feature data are enhanced by using the processing result of processing the face image feature data through the attention mechanism based on the channel domain in step 402, and optimized face image feature data are obtained.
In step 404, face recognition is performed according to the optimized face picture feature data.
By this method, the face picture feature data extracted by deep learning in face recognition can be applied to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection; moreover, the biological features in the face picture feature data can be enhanced, improving the accuracy of face recognition.
In one embodiment, face recognition may be performed only when the face image is determined to be a living body image, which improves processing efficiency and reduces the computation load. In another embodiment, face recognition and living body judgment may be carried out separately and their results output synchronously, so that a living body recognition result is provided along with the face recognition result; this enriches the output and makes the method convenient to apply to different application scenarios as needed.
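Purely as an illustration of how steps 401-404 could fit together, assuming PyTorch and the hypothetical helpers sketched above (the recognition backbone, the attention block, and the liveness threshold are placeholders, not the patented implementation):

```python
import torch

def recognize_face(image: torch.Tensor, backbone, attention_block, recognizer, threshold: float = 0.5):
    """Hypothetical pipeline: extract features, run liveness detection, enhance features, recognize."""
    f = backbone(image)                              # step 401: face picture feature data
    f, m = attention_block(f)                        # step 402: channel-domain attention processing
    if m.mean().item() <= threshold:                 # liveness decision rule is an assumption
        return {"live": False, "identity": None}
    enhanced = rescale_channels(f, m)                # step 403: enhance biological features
    embedding = recognizer(enhanced)                 # step 404: face recognition on optimized features
    return {"live": True, "identity": embedding}
```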
A schematic view of one embodiment of the living body detection device of the present disclosure is shown in FIG. 5. The feature acquisition module 501 can obtain face picture feature data from a face image through deep learning. In one embodiment, the face picture feature data may be extracted from the face image by a neural network model of a face recognition system. The feature processing module 502 can process the face picture feature data through a channel-domain-based attention mechanism to obtain a processing result. The living body recognition module 503 can determine a living body recognition result according to the processing result of the feature processing module, the living body recognition result indicating whether the face image is a living body image or a non-living body image. In one embodiment, the channel-domain-based attention mechanism can convert the face picture features internally into biological features and non-biological features, and the two kinds of features interact to obtain the processing result. When the face image is a non-living body image, the biological features are suppressed so that they cannot be recognized, and the living body recognition module 503 can use the processing result to determine whether the face image is a living body or non-living body image.
Such a living body detection device can apply the face picture feature data extracted during deep learning to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection.
In one embodiment, the living body detection device may include two or more feature processing modules connected in series, separated by convolutional neural networks, and the living body recognition module determines the living body recognition result according to the processing result of the last feature processing module in the series. With this device, after the features are processed by the channel-domain-based attention mechanism, the picture features can be activated, and processing them again realizes repeated interactive recognition and detection of the features, improving detection accuracy. In one embodiment, the parameters of each feature processing module may differ, and different parameters such as the number of channels may be set as needed. In one embodiment, the number of feature processing modules connected in series may be two, ensuring recognition efficiency while improving accuracy.
In one embodiment, the feature processing module may further enhance the biological features in the face picture feature data according to its processing result, to obtain optimized face picture feature data. As shown in fig. 5, the living body detection device may further include a face recognition module 504 capable of performing face recognition according to the optimized face picture feature data. Such a device can strengthen the biological features in the face picture feature data, thereby improving the accuracy of face recognition.
A schematic diagram of one embodiment of a feature processing module in the living body detection device of the present disclosure is shown in FIG. 6. The convolution layer 601 can obtain the biological feature F_p ∈ R^{H′×W′×C_p} and the non-biological feature F_n ∈ R^{H′×W′×C_n} according to the face picture feature data. The pooling layer 602 can globally pool all features, resulting in a pooled feature with one value per channel.
The fully connected processing unit 603 associates the biological features and the non-biological features through the fully connected neural network according to the pooled features, and obtains a processing result.
The device can extract biological features and non-biological features through the convolution layer, and the biological features and the non-biological features are associated through the full-connection layer, so that the attention mechanism based on the channel domain is realized to process the human face image feature data.
A schematic diagram of another embodiment of the processing procedure of the living body detection device of the present disclosure is shown in fig. 7. The fully connected processing unit 603 may include a first fully connected layer, a ReLU, a second fully connected layer, and a sigmoid function unit connected in series. The two fully connected layers associate the biological features with the non-biological features to determine whether the picture features are living features, and the sigmoid function unit obtains the processing result M, so that whether the information in the picture is living body information can be determined from the processing result. In one embodiment, the output M of the sigmoid function may be used to perform scale scaling on each channel of the face picture feature data, for example using the formula:

F′{i} = F{i} * M{i}

where M{i} is a scale operator that correspondingly enlarges or reduces the channel information of each channel i in the picture features. In this way, when the image is a living body image, the biological features among the features can be enhanced, optimized face picture feature data are obtained, and the accuracy of face recognition is improved.
A schematic diagram of one embodiment of a face detection system of the present disclosure is shown in fig. 8. The living body detection device 81 may be any of the living body detection devices mentioned above. The face detection system may further include a face recognition device 82 capable of extracting face picture feature data from a face image and supplying it to the living body detection device 81; enhancing the biological features in the face picture feature data by using the processing result of the living body detection device's channel-domain-based attention processing of the face picture feature data, to obtain optimized face picture feature data; and performing face recognition according to the optimized face picture feature data.
Such a face detection system can apply the face picture feature data extracted by deep learning in face recognition to living body detection; the feature data are processed through a channel-domain-based attention mechanism to recognize whether the face image is a living body or a non-living body image. The user does not need to cooperate by making a specified action, which improves convenience and efficiency as well as the accuracy of living body detection; moreover, the biological features in the face picture feature data can be enhanced, improving the accuracy of face recognition.
A schematic structural diagram of one embodiment of the living body detection device of the present disclosure is shown in FIG. 9. The living body detection device includes a memory 901 and a processor 902. The memory 901 may be a magnetic disk, flash memory, or any other non-volatile storage medium, and is used to store the instructions of the corresponding embodiments of the living body detection method above. The processor 902 is coupled to the memory 901 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 902 executes the instructions stored in the memory, which improves the convenience and efficiency of living body detection and improves its accuracy.
In one embodiment, as also shown in FIG. 10, the living body detection device 1000 includes a memory 1001 and a processor 1002. The processor 1002 is coupled to the memory 1001 by a bus 1003. The living body detection device 1000 may also be connected to an external storage device 1005 via a storage interface 1004 for invoking external data, and may also be connected to a network or another computer system (not shown) via a network interface 1006, which will not be described in detail here.
In this embodiment, data instructions are stored in the memory and processed by the processor, so that the user does not need to cooperate by making a specified action; convenience and efficiency are improved, and the accuracy of living body detection is improved.
A schematic structural diagram of an embodiment of the face detection system of the present disclosure is shown in fig. 9. The face detection system includes a memory 901 and a processor 902. Wherein: the memory 901 may be a magnetic disk, flash memory, or any other non-volatile storage medium. The memory is for storing the instructions in the corresponding embodiments of the face recognition method above. Processor 902 is coupled to memory 901 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 902 is configured to execute instructions stored in the memory, and can improve the efficiency and accuracy of the living body detection and improve the accuracy of the face recognition.
In one embodiment, as also shown in fig. 10, the face detection system 1000 includes a memory 1001 and a processor 1002. The processor 1002 is coupled to the memory 1001 by a BUS 1003. The face detection system 1000 may also be coupled to an external storage device 1005 via a storage interface 1004 for facilitating retrieval of external data, and may also be coupled to a network or another computer system (not shown) via a network interface 1006. And will not be described in detail herein.
In the embodiment, the data instructions are stored in the memory and processed by the processor, so that the efficiency and the accuracy of the living body detection can be improved, and the accuracy of the face recognition can be improved.
In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the corresponding embodiment of the liveness detection method or the face recognition method. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details that are well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Finally, it should be noted that: the above examples are intended only to illustrate the technical solutions of the present disclosure and not to limit them; although the present disclosure has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that: modifications to the specific embodiments of the disclosure or equivalent substitutions for parts of the technical features may still be made; all such modifications are intended to be included within the scope of the claims of this disclosure without departing from the spirit thereof.

Claims (18)

1. A living body detection method, comprising:
acquiring face picture characteristic data in a face image through deep learning;
processing the facial picture feature data through an attention mechanism based on a channel domain, comprising:
acquiring biological characteristics and non-biological characteristics through a neural network convolution layer according to the face picture characteristic data;
obtaining pooling characteristics through a pooling layer;
according to the pooling feature, the biological feature and the non-biological feature are associated through a fully-connected neural network, and a processing result is obtained;
and determining a living body identification result, wherein the living body identification result comprises that the face image is a living body image or a non-living body image.
2. The method of claim 1, wherein the facial picture feature data is extracted from a facial image by a neural network model of a face recognition system.
3. The method of claim 1, wherein the correlating the biometric characteristic and the non-biometric characteristic via a fully-connected neural network, and obtaining a processing result comprises:
and inputting the pooling feature into a first full-connection layer, and sequentially passing through a linear rectification function, a second full-connection layer and an S-shaped growth curve sigmoid function to obtain the processing result so as to determine the living body identification result according to the processing result.
4. The method of claim 1, wherein the processing the facial picture feature data via a channel domain-based attention mechanism comprises:
processing the face picture characteristic data through an attention mechanism based on a channel domain to obtain primary processing data; and processing the primary processing data through a convolutional neural network and then through an attention mechanism based on a channel domain to obtain the living body identification result.
5. The method of claim 1, wherein the processing the facial picture feature data via a channel domain-based attention mechanism comprises:
processing the face image feature data through an attention mechanism based on a channel domain; and cyclically passing the processing result through a convolutional neural network and then through the channel-domain-based attention mechanism again, until the number of channel-domain-based attention processing passes reaches a preset number of cycles, and determining the living body identification result according to the processing result of the last pass.
6. The method of claim 3, further comprising: and carrying out scale scaling processing on each channel in the face picture characteristic data by utilizing the output of the sigmoid function to obtain optimized face picture characteristic data so as to execute face recognition according to the optimized face picture characteristic data.
7. The method of any of claims 1-5, further comprising: enhancing the biological characteristics in the face image characteristic data by using a processing result of processing the face image characteristic data through an attention mechanism based on a channel domain to obtain optimized face image characteristic data;
and executing face recognition according to the optimized face picture characteristic data.
8. A face recognition method, comprising:
extracting face picture characteristic data from a face image through a neural network model;
determining a living body identification result by the living body detection method according to any one of claims 1 to 5;
enhancing the biological characteristics in the face image characteristic data by using a processing result of processing the face image characteristic data through an attention mechanism based on a channel domain to obtain optimized face image characteristic data;
and executing face recognition according to the optimized face picture characteristic data.
9. A living body detection apparatus comprising:
the feature acquisition module is configured to acquire face image feature data in a face image through deep learning;
a feature processing module configured to process the facial picture feature data through a channel domain-based attention mechanism, including:
the convolution layer is configured to acquire biological features and non-biological features according to the face picture feature data;
a pooling layer configured to obtain pooled features;
the full-connection processing unit is configured to associate the biological features and the non-biological features through a full-connection neural network according to the pooling features, and obtain a processing result;
a living body recognition module configured to determine a living body recognition result according to the processing result of the feature processing module, the living body recognition result including whether the face image is a living body image or a non-living body image.
10. The apparatus of claim 9, wherein the feature acquisition module is a neural network model of a face recognition system.
11. The apparatus of claim 9, wherein the fully connected processing unit is configured to:
and inputting the pooling feature into a first full-connection layer, and sequentially passing through a linear rectification function, a second full-connection layer and an S-shaped growth curve sigmoid function to obtain the processing result so as to determine the living body identification result according to the processing result.
12. The device of claim 9, wherein the living body detection device comprises more than two feature processing modules which are connected in series at intervals through a convolutional neural network;
the living body identification module is configured to determine a living body identification result according to a processing result of the last feature processing module in the series.
13. The apparatus of claim 11, wherein the feature processing module is further configured to scale each channel in the face picture feature data using the output of the sigmoid function to obtain optimized face picture feature data, so as to perform face recognition according to the optimized face picture feature data.
14. The device according to any one of claims 9 to 12, wherein the feature processing module is further configured to enhance the biological features in the face image feature data by using the processing result of the feature processing module to obtain optimized face image feature data;
the device further comprises: and the face recognition module is configured to execute face recognition according to the optimized face picture characteristic data.
15. A living body detection apparatus comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the method of any of claims 1-7 based on instructions stored in the memory.
16. A face detection system, comprising:
the living body detection device of any one of claims 9 to 15; and,
a face recognition device configured to:
extracting face picture characteristic data from a face image;
enhancing biological characteristics in the face image characteristic data by utilizing a processing result of the living body detection device for processing the face image characteristic data through an attention mechanism based on a channel domain, and acquiring optimized face image characteristic data;
and executing face recognition according to the optimized face picture characteristic data.
17. A face detection system, comprising: a memory; and
a processor coupled to the memory, the processor configured to perform the method of claim 8 based on instructions stored in the memory.
18. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 8.
CN201811329008.7A 2018-11-09 2018-11-09 Living body detection method and device, face recognition method and face detection system Active CN109409322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811329008.7A CN109409322B (en) 2018-11-09 2018-11-09 Living body detection method and device, face recognition method and face detection system


Publications (2)

Publication Number Publication Date
CN109409322A CN109409322A (en) 2019-03-01
CN109409322B true CN109409322B (en) 2020-11-24

Family

ID=65472412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811329008.7A Active CN109409322B (en) 2018-11-09 2018-11-09 Living body detection method and device, face recognition method and face detection system

Country Status (1)

Country Link
CN (1) CN109409322B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866454B (en) * 2019-10-23 2023-08-25 智慧眼科技股份有限公司 Face living body detection method and system and computer readable storage medium
CN112597885A (en) * 2020-12-22 2021-04-02 北京华捷艾米科技有限公司 Face living body detection method and device, electronic equipment and computer storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956572A (en) * 2016-05-15 2016-09-21 北京工业大学 In vivo face detection method based on convolutional neural network
CN107545248A (en) * 2017-08-24 2018-01-05 北京小米移动软件有限公司 Biological characteristic biopsy method, device, equipment and storage medium
CN107392187A (en) * 2017-08-30 2017-11-24 西安建筑科技大学 A kind of human face in-vivo detection method based on gradient orientation histogram

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep Face Recognition: A Survey; Mei Wang et al.; arXiv; 2018-09-28; pp. 1-24 *
Squeeze-and-Excitation Networks; Jie Hu et al.; arXiv; 2018-10-25; pp. 1-14 *

Also Published As

Publication number Publication date
CN109409322A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
US10445562B2 (en) AU feature recognition method and device, and storage medium
CN109711243B (en) Static three-dimensional face in-vivo detection method based on deep learning
CN106557726B (en) Face identity authentication system with silent type living body detection and method thereof
KR102174595B1 (en) System and method for identifying faces in unconstrained media
US9767349B1 (en) Learning emotional states using personalized calibration tasks
CN106557723B (en) Face identity authentication system with interactive living body detection and method thereof
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
WO2016150240A1 (en) Identity authentication method and apparatus
US11580775B2 (en) Differentiating between live and spoof fingers in fingerprint analysis by machine learning
US9471831B2 (en) Apparatus and method for face recognition
US20120155718A1 (en) Face recognition apparatus and method
JP2015529365A5 (en)
US10489636B2 (en) Lip movement capturing method and device, and storage medium
KR102223478B1 (en) Eye state detection system and method of operating the same for utilizing a deep learning model to detect an eye state
WO2019033570A1 (en) Lip movement analysis method, apparatus and storage medium
JP6351243B2 (en) Image processing apparatus and image processing method
WO2020244071A1 (en) Neural network-based gesture recognition method and apparatus, storage medium, and device
CN109409322B (en) Living body detection method and device, face recognition method and face detection system
CN113298158A (en) Data detection method, device, equipment and storage medium
Olivares-Mercado et al. Face recognition system for smartphone based on lbp
Kumar et al. Palmprint Recognition in Eigen-space
JP7270304B2 (en) Method and mobile device for implementing the method for verifying the identity of a user by identifying an object in an image that has the user's biometric characteristics
JP7360217B2 (en) Method for obtaining data from an image of an object of a user having biometric characteristics of the user
JP2018092272A (en) Biometric authentication apparatus, biometric authentication method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant