CN114596599A - Face recognition living body detection method, device, equipment and computer storage medium - Google Patents


Info

Publication number
CN114596599A
Authority
CN
China
Prior art keywords
image
body detection
depth
face recognition
living body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011312447.4A
Other languages
Chinese (zh)
Inventor
徐珂
李万鹏
苏慧兰
王丹丹
林立祺
王懋
马莉
李莉
舒敏根
汪帆
李飞龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Information Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011312447.4A priority Critical patent/CN114596599A/en
Publication of CN114596599A publication Critical patent/CN114596599A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a face recognition living body detection method, device, equipment and computer storage medium. The method comprises the following steps: acquiring a face image and converting it into a sequence image; processing the sequence image to obtain a depth image; performing depth map classification and recognition on the depth image to obtain an image recognition result; and obtaining a living body detection result according to the image recognition result. In the embodiment of the application, the acquired face image is converted into a sequence image, the sequence image generates a depth image through depth information prediction, and the depth image is classified and detected to complete living body detection.

Description

Face recognition living body detection method, device, equipment and computer storage medium
Technical Field
The application belongs to the technical field of computer vision processing, and particularly relates to a face recognition living body detection method, a face recognition living body detection device, face recognition living body detection equipment and a computer storage medium.
Background
With the development of science and technology, face recognition has been widely used in daily life. During face recognition, attackers may infringe the interests of users by means such as image fraud, so living body detection must be employed during face recognition to distinguish real users from spoofs.
Currently, the main living body detection methods are: cooperative living body detection, silent living body detection and 3D living body detection. Cooperative living body detection requires the user to perform a series of specified facial actions; this unnatural verification procedure feels oppressive and easily annoys the user, and performing such facial actions in public places is particularly disturbing. Silent living body detection relies on image characteristics such as texture, but is easily defeated by video replay, image post-processing and similar attacks, so its anti-attack capability is weak. 3D living body detection uses a 3D lens to acquire image depth information and performs living body detection from that information, but 3D camera hardware is costly.
Therefore, how to provide a face recognition living body detection method that requires no user cooperation, has strong anti-attack capability and is low in cost is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
During living body detection, fraudulent images such as high-definition photos and videos lie at a constant distance, or a distance varying with a regular gradient, relative to the lens. After processing by a depth estimation network, the corresponding depth images are therefore composed of identical or regularly graded pixel values, which differ greatly from the irregular depth images produced by a real living body.
Based on this, the embodiment of the application provides a face recognition living body detection method, device, equipment and computer storage medium, which can realize face recognition living body detection without user cooperation, with strong anti-attack capability and at low cost.
In a first aspect, an embodiment of the present application provides a face recognition live body detection method, including:
acquiring a face image and converting it into a sequence image; processing the sequence image to obtain a depth image; performing depth map classification and recognition on the depth image to obtain an image recognition result; and obtaining a living body detection result according to the image recognition result.
Further, acquiring the face image and converting it into the sequence image includes: acquiring video images with a binocular lens and constructing binocular sequence images in time order.
Further, processing the sequence image to obtain a depth image includes: processing the sequence image with a self-coding network model to obtain the depth image.
Further, the self-coding network model comprises: a consistency loss function, a reconstruction function and an overall loss function;
the consistency loss function is constructed from the consistency error of the sequence images; the reconstruction function is constructed from the reconstruction error of the sequence images; the overall loss function is derived from the consistency loss function and the reconstruction function.
Further, the overall loss function is a cross-entropy function.
Further, the depth image is recognized according to the overall loss function to obtain an image recognition result.
Further, the depth image is recognized by a Resnet50 network;
the ReLU function serves as the hidden layer activation function of the Resnet50 network, and the Softmax function serves as the fully connected layer activation function of the Resnet50 network.
Further, obtaining a living body detection result according to the image recognition result comprises: obtaining a classification recognition result from the recognition of the depth image; if the classification recognition result meets a preset living body detection threshold, judging that living body detection passes; if it does not, judging that living body detection fails.
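The thresholding step in the claim above can be sketched as follows; the function name and the example scores and default threshold are illustrative, not taken from the patent:

```python
def liveness_decision(live_score: float, threshold: float = 0.5) -> bool:
    """Judge the classification recognition result against a preset
    living body detection threshold: meeting it means detection passes."""
    return live_score >= threshold

print(liveness_decision(0.91))  # → True  (passes)
print(liveness_decision(0.12))  # → False (fails)
```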
In a second aspect, an embodiment of the present application provides a face recognition live body detection apparatus, including:
the binocular camera is used for acquiring a face image and converting the face image into a sequence image;
the depth information prediction module is used for processing the sequence image to obtain a depth image;
the depth image classification algorithm module is used for performing classification detection on the depth images to obtain image classification results;
and the result judging module is used for obtaining a living body detection result according to the image classification result.
In a third aspect, an embodiment of the present application provides a face recognition live body detection apparatus, including:
a processor, and a memory storing computer program instructions; the processor reads and executes the computer program instructions to implement the face recognition live body detection method as described above.
In a fourth aspect, embodiments of the present application provide a computer storage medium having computer program instructions stored thereon, which when executed by a processor, implement the face recognition live body detection method as described above.
The face recognition living body detection method, device, equipment and computer storage medium of the present application convert an acquired face image into a sequence image, generate a depth image from the sequence image through depth information prediction, and classify and detect the depth image to complete living body detection.
Drawings
In order to illustrate the technical solutions of the embodiments of the application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a face recognition live body detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of obtaining a depth image in a face recognition live body detection method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a face recognition live body detection device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a face recognition live body detection device according to an embodiment of the present application.
Detailed Description
Features of various aspects and exemplary embodiments of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The technical solution provided by the embodiments of the application is mainly applied to living body detection in face recognition, for example in mobile phone unlocking, face recognition payment and similar fields. Cooperative living body detection requires the user to perform a series of specified facial actions; this unnatural verification procedure feels oppressive and easily annoys the user, and performing such facial actions in public places is particularly disturbing. Silent living body detection relies on image characteristics such as texture, but is easily defeated by video replay, image post-processing and similar attacks, so its anti-attack capability is weak. 3D living body detection uses a 3D lens to acquire image depth information and performs living body detection from that information.
During living body detection, fraudulent images such as high-definition photos and videos lie at a constant distance, or a distance varying with a regular gradient, relative to the lens. After processing by a depth estimation network, the corresponding depth images are therefore composed of identical or regularly graded pixel values, which differ greatly from the irregular depth images produced by a real living body.
Based on this, in order to solve the prior art problems, embodiments of the present application provide a face recognition live body detection method, apparatus, device, and computer storage medium.
First, a face recognition live body detection method provided by the embodiment of the present application is described below.
Fig. 1 is a flow chart illustrating a face recognition live body detection method according to an embodiment of the present application; fig. 2 is a schematic flow chart of obtaining a depth image in a face recognition live body detection method according to an embodiment of the present application. As shown in fig. 1, the method may include the steps of:
s1: acquiring a face image and converting the face image into a sequence image;
s2: processing the sequence image to obtain a depth image;
s3: carrying out depth map classification and identification on the depth image to obtain an image identification result;
s4: and obtaining a living body detection result according to the image identification result.
In this embodiment, the acquired face image is converted into a sequence image, the sequence image is processed by depth information prediction to generate a depth image, and the depth image is classified and detected to complete living body detection. Compared with existing detection methods, this method needs neither data annotation nor user cooperation, and has strong anti-attack capability at low cost.
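The S1–S4 flow can be sketched end to end with toy stand-ins; everything below (random frames in place of camera input, a grayscale-difference "depth" proxy, variance-based scoring) is illustrative only and not the patent's actual networks:

```python
import numpy as np

def acquire_sequence(num_frames=4, h=64, w=64, seed=0):
    """S1: stand-in for binocular capture -- returns synchronized left/right
    frame sequences (random data here in place of real camera frames)."""
    rng = np.random.default_rng(seed)
    left = rng.random((num_frames, h, w, 3))
    right = rng.random((num_frames, h, w, 3))
    return left, right

def predict_depth(left, right):
    """S2: stand-in for the self-coding depth network -- the absolute
    grayscale difference of the two views acts as a toy disparity proxy."""
    return np.abs(left.mean(axis=-1) - right.mean(axis=-1))

def classify_depth(depth):
    """S3: stand-in for the depth-map classifier -- a flat depth map (low
    variance) suggests a planar spoof; an irregular one, a real face."""
    return float(depth.var())

def liveness_result(score, threshold=1e-3):
    """S4: threshold the classification score into a pass/fail decision."""
    return score >= threshold

left, right = acquire_sequence()
depth = predict_depth(left, right)
print(liveness_result(classify_depth(depth)))  # True for this toy data
```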
In an embodiment of the application, acquiring a face image and converting it into a left sequence image and a right sequence image includes:
acquiring video images with a binocular lens and constructing binocular sequence images in time order;
the left lens of the binocular camera captures the left sequence image, the right lens captures the right sequence image, and the two sequences share a consistent time order;
processing the left and right sequence images with a self-coding network model to obtain a left depth image and a right depth image respectively;
the consistency loss function is constructed from the consistency error between the left and right sequence images; an initial depth map is generated by the constructed self-coding network, and the reconstruction error between the left (right) depth map and the right (left) image is calculated;
the reconstruction function is constructed from the reconstruction errors of the left and right sequence images;
the overall loss function is obtained from the consistency loss function and the reconstruction function;
and the depth image is input into a depth map classification algorithm to classify the depth image, thereby completing living body detection.
In this embodiment, the video acquired by the binocular camera is organized into multi-frame binocular RGB images in time order, and a parallel unsupervised self-coding model is constructed to estimate face depth information. From the constructed left and right sequence images, initial depth maps can be generated by the constructed self-coding network, which facilitates the subsequent calculation of the self-coding network's reconstruction error and consistency loss error.
Specifically, the above embodiment can be divided into three major parts: data preprocessing, the network structure, and loss function construction.
(1) A data preprocessing part:
the binocular sequence images are constructed through videos collected by the binocular lenses, the left sequence images are shot by the left lens, the right sequence images are shot by the right lens, and the time sequence of the left sequence images is consistent with that of the right sequence images.
(2) The network structure part:
the network structure is a self-coding network structure, comprising: an encoding network and a decoding network. The coding network adopts an LSTM network and comprises a convolution layer, a pooling layer and the like; the network framework adopted by the decoding network comprises an deconvolution layer, an upsampling layer and the like, and the whole network input layer and the whole network output layer are equal in a certain dimension. The decoding layer adopts a cascade mode to obtain more characteristics, such as a network structure of a 3-layer decoding layer: the third layer coding layer and the first layer decoding layer are cascaded to form a second layer decoding layer, the second layer coding layer and the second layer decoding layer are cascaded to form a third layer coding layer, and the decoding layer can be cascaded with coding layers and decoding layers of different layers.
(3) The loss function constructing part:
the loss function is divided into two parts: left and right consistency loss functions, reconstruction loss functions.
Left-right consistency loss function: the left and right sequence images respectively generate corresponding depth maps DL and DR through the self-coding network. DL(i, j) denotes the depth value generated from the left image by the self-coding model, DR(i, j) denotes the depth value generated from the right image, and (i, j) are the coordinates of a pixel. The left-right consistency loss function is given by equation (1):
$$C_{LR} = \frac{1}{N} \sum_{i,j} \bigl| D_L(i,j) - D_R(i,j) \bigr| \qquad (1)$$
Reconstruction loss function: through Spatial Transformer Networks, the left image IL and the right image IR generate reconstructed images ÎL and ÎR corresponding to the left and right sequence images respectively. The reconstruction loss function is given by equation (2):
$$C_{REC} = \frac{1}{N} \sum_{i,j} \Bigl( \bigl| I_L(i,j) - \hat{I}_L(i,j) \bigr| + \bigl| I_R(i,j) - \hat{I}_R(i,j) \bigr| \Bigr) \qquad (2)$$
The overall loss function is equation (3):

$$C = \alpha C_{LR} + \beta C_{REC} \qquad (3)$$

where α and β are weighting coefficients for the consistency and reconstruction terms.
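A NumPy sketch of equations (1)–(3), assuming the mean-absolute-error forms implied by the definitions of DL, DR, IL and IR above; the weights α, β and the toy arrays are illustrative:

```python
import numpy as np

def consistency_loss(d_left, d_right):
    """Equation (1), C_LR: mean absolute difference between the depth
    maps predicted from the left and right views."""
    return np.mean(np.abs(d_left - d_right))

def reconstruction_loss(i_left, i_right, i_left_hat, i_right_hat):
    """Equation (2), C_REC: mean absolute error between each input view
    and its reconstruction warped from the opposite view."""
    return (np.mean(np.abs(i_left - i_left_hat))
            + np.mean(np.abs(i_right - i_right_hat)))

def overall_loss(c_lr, c_rec, alpha=1.0, beta=1.0):
    """Equation (3): C = alpha * C_LR + beta * C_REC."""
    return alpha * c_lr + beta * c_rec

dl = np.ones((4, 4)); dr = np.full((4, 4), 1.5)
il = ir = np.zeros((4, 4)); il_hat = ir_hat = np.full((4, 4), 0.25)
c = overall_loss(consistency_loss(dl, dr),
                 reconstruction_loss(il, ir, il_hat, ir_hat))
print(c)  # 0.5 + (0.25 + 0.25) = 1.0
```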
In one embodiment of the application, the left depth image and the right depth image are recognized by a Resnet50 network;
the ReLU function serves as the hidden layer activation function of the Resnet50 network, and the Softmax function serves as the fully connected layer activation function of the Resnet50 network.
Because the depth-image pixel values that the binocular depth information prediction module generates for fraudulent inputs — ultra-high-definition flat images, high-definition replayed videos and the like — change only gradually or not at all, their pixel value differences are small. The depth image generated for a real living body, by contrast, carries obvious living body characteristic information. These distinct characteristics of flat images versus living bodies make it possible to judge liveness effectively, so a binary classification algorithm over living body depth images is constructed.
The left margin of the image captured by the left camera and the right margin captured by the right camera are occluded (not visible to the other camera), so the calculated depth image is inaccurate in those regions. The inaccurate margin is cropped out before the depth image is used as the input image of the classification algorithm, and the depth image corresponding to either the left or the right camera is selected as that input.
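The margin cropping described above might look as follows; the 10% margin fraction is a hypothetical parameter, not specified by the patent:

```python
import numpy as np

def crop_occluded_margin(depth, margin_frac=0.1, side="left"):
    """Cut off the image border the other camera cannot see (left margin
    of the left view, right margin of the right view), since the
    estimated depth there is unreliable."""
    h, w = depth.shape
    m = int(w * margin_frac)
    return depth[:, m:] if side == "left" else depth[:, :w - m]

d = np.arange(100.0).reshape(10, 10)
print(crop_occluded_margin(d, side="left").shape)   # (10, 9)
print(crop_occluded_margin(d, side="right").shape)  # (10, 9)
```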
The classification network for the depth image can adopt a Resnet50 network, which includes convolutional layers, fully connected layers, and so on, with ReLU as the hidden layer activation function and Softmax as the fully connected layer activation function. The fully connected layer can be trained for classification with a cross-entropy loss. It should be noted that neural network components such as the Resnet50 network, the ReLU function and the Softmax function are exemplary choices of this embodiment, not the only possible ones.
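This is not the full Resnet50, but the activation and loss pieces the paragraph names — ReLU hidden activation, Softmax output, two-class cross-entropy — can be sketched in NumPy; the two-logit example values are illustrative:

```python
import numpy as np

def relu(x):
    """Hidden-layer activation used throughout the backbone."""
    return np.maximum(0.0, x)

def softmax(z):
    """Output activation: turns fc-layer logits into live/spoof probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, label):
    """Two-class cross-entropy loss used to train the classifier."""
    return -np.log(probs[label])

logits = np.array([2.0, -1.0])   # hypothetical fc output: [live, spoof]
probs = softmax(relu(logits))
print(probs[0] > probs[1])  # the live class wins for these logits
```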
In the embodiment of the application, the binocular depth prediction algorithm and the depth map classification algorithm are combined to realize living body detection. With the depth image generated by the depth estimation algorithm, high-definition images and other fraudulent inputs can be distinguished effectively, the accuracy of the living body detection algorithm is improved, and user identity verification and recognition become safer. Because the left and right input images of the depth estimation algorithm contain occlusions, the calculated depth image has corresponding occlusions as well; performing classification and recognition after the occluded region is processed effectively improves detection accuracy. Meanwhile, during detection the user does not need to cooperate by performing specific actions, and the binocular camera costs less than a 3D camera and the like.
Fig. 3 is a schematic structural diagram of a face recognition live body detection device according to an embodiment of the present application. As shown in fig. 3, the apparatus may include a binocular camera 210, a depth information prediction module 220, a depth map classification algorithm module 230, and a result judgment module 240.
The binocular camera 210 is used for acquiring a face image and converting the face image into a sequence image;
a depth information prediction module 220, configured to process the sequence image to obtain a depth image;
a depth map classification algorithm module 230, configured to perform classification detection on the depth image to obtain an image classification result;
and a result judgment module 240, configured to obtain a living body detection result according to the image classification result.
The human face living body detection device provided by the embodiment comprises a binocular camera, a depth information prediction module, a depth map classification algorithm module and a result judgment module.
First, images are captured by the binocular camera (whose two lenses are calibrated RGB lenses of the same model) and converted into left and right sequence images, which are input into the binocular depth information prediction module. The depth information prediction module computes a depth image, which is input into the depth map classification algorithm module to classify the depth image, thereby completing living body detection. Any pair of left and right images captured by the binocular camera can automatically produce a depth image through the trained depth prediction network; the depth image classification algorithm module then performs classification detection, and the result judgment module produces the final result, i.e. living body detection is realized.
Fig. 4 shows a hardware structure schematic diagram of a face recognition live body detection device provided in an embodiment of the present application.
The face recognition liveness detection device may comprise a processor 301 and a memory 302 in which computer program instructions are stored.
Specifically, the processor 301 may include a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may include a Hard Disk Drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. In one example, memory 302 can include removable or non-removable (or fixed) media, or memory 302 is non-volatile solid-state memory. The memory 302 may be internal or external to the face recognition living body detection device.
In one example, the memory 302 may be a Read-Only Memory (ROM). In one example, the ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The memory 302 may include Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., a memory device) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the method according to an aspect of the disclosure.
The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement the method steps S1 to S4 in the embodiment shown in fig. 1, and achieve the corresponding technical effect achieved by the embodiment shown in fig. 1 executing the method steps, which is not described herein again for brevity.
In one example, the face recognition living body detection device may also include a communication interface 303 and a bus 310. As shown in fig. 4, the processor 301, the memory 302, and the communication interface 303 are connected via the bus 310 and communicate with one another.
The communication interface 303 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiment of the present application.
Bus 310 includes hardware, software, or both to couple the components of the face recognition living body detection device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated.
In addition, in combination with the face recognition live body detection method in the foregoing embodiment, the embodiment of the present application may provide a computer storage medium to implement. The computer storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the face recognition liveness detection methods in the above embodiments.
It is to be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, a block may be, for example, an electronic circuit, an Application-Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, and so on. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments can be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above are only specific embodiments of the present application. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. It should be understood that the scope of the present application is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope of the present application, and such modifications or substitutions shall be covered by the scope of the present application.

Claims (11)

1. A face recognition living body detection method, characterized by comprising the following steps:
acquiring a face image and converting the face image into a sequence image;
processing the sequence image to obtain a depth image;
performing depth map classification and recognition on the depth image to obtain an image recognition result; and
obtaining a living body detection result according to the image recognition result.
2. The face recognition living body detection method according to claim 1, wherein acquiring the face image and converting the face image into the sequence image comprises:
collecting video images with a binocular camera, and constructing binocular sequence images in chronological order.
3. The face recognition living body detection method according to claim 1, wherein processing the sequence image to obtain a depth image comprises:
processing the sequence image with a self-encoding network model to obtain the depth image.
4. The face recognition living body detection method according to claim 3, wherein the self-encoding network model comprises a consistency loss function, a reconstruction function, and an overall loss function;
the consistency loss function is constructed from the consistency error of the sequence images;
the reconstruction function is constructed from the reconstruction error of the sequence images; and
the overall loss function is obtained from the consistency loss function and the reconstruction function.
5. The face recognition living body detection method according to claim 4, wherein the overall loss function is a cross-entropy function.
6. The face recognition living body detection method according to claim 1 or 4, comprising:
recognizing the depth image according to the overall loss function to obtain the image recognition result.
7. The face recognition living body detection method according to claim 1 or 6, wherein recognizing the depth image comprises:
recognizing the depth image with a ResNet50 network;
wherein a ReLU function is used as the hidden-layer activation function of the ResNet50 network, and a Softmax function is used as the fully connected layer activation function of the ResNet50 network.
8. The face recognition living body detection method according to claim 1 or 7, wherein obtaining the living body detection result according to the image recognition result comprises:
obtaining a classification recognition result from the recognition of the depth image;
if the classification recognition result meets a preset living body detection threshold, determining that the living body detection passes; and
if the classification recognition result does not meet the preset living body detection threshold, determining that the living body detection fails.
9. A face recognition living body detection apparatus, characterized in that the apparatus comprises:
a binocular camera configured to acquire a face image and convert the face image into a sequence image;
a depth information prediction module configured to process the sequence image to obtain a depth image;
a depth map classification algorithm module configured to perform classification detection on the depth image to obtain an image classification result; and
a result judgment module configured to obtain a living body detection result according to the image classification result.
10. A face recognition living body detection device, characterized in that the device comprises: a processor, and a memory storing computer program instructions; wherein the processor reads and executes the computer program instructions to implement the face recognition living body detection method according to any one of claims 1 to 9.
11. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the face recognition living body detection method according to any one of claims 1 to 9.
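Claim 2's time-ordered binocular sequence can be sketched as follows. This is illustrative only and not part of the claims: the `StereoFrame` container and `build_binocular_sequence` function are hypothetical names, and the patent does not specify how frames are represented; the only behavior taken from the claim is pairing left/right images and ordering them chronologically.

```python
from dataclasses import dataclass

@dataclass
class StereoFrame:
    """One synchronized left/right capture from the binocular camera."""
    timestamp: float  # capture time in seconds
    left: object      # left-lens image (e.g. a numpy array)
    right: object     # right-lens image

def build_binocular_sequence(frames):
    """Arrange captured stereo frames in chronological order,
    yielding the time-ordered binocular sequence of claim 2."""
    return sorted(frames, key=lambda f: f.timestamp)
```

A caller would collect `StereoFrame`s from the video stream and pass the list to `build_binocular_sequence` before depth prediction.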
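Claim 4's loss construction for the self-encoding network can be sketched numerically. The claims do not specify the error metric or how the two terms are combined, so the mean-squared errors and the weighted sum below are assumptions, and all names (`consistency_loss`, `reconstruction_loss`, `overall_loss`, `weight`) are hypothetical.

```python
import numpy as np

def consistency_loss(depth_left, depth_right):
    # Consistency error between depth maps predicted from the two
    # lenses of the binocular sequence (assumed mean-squared error).
    return float(np.mean((depth_left - depth_right) ** 2))

def reconstruction_loss(frame, reconstructed):
    # Reconstruction error of the self-encoding network
    # (assumed mean-squared error).
    return float(np.mean((frame - reconstructed) ** 2))

def overall_loss(depth_left, depth_right, frame, reconstructed, weight=1.0):
    # Claim 4: the overall loss is obtained from the consistency loss
    # and the reconstruction function; the additive combination and
    # weighting here are assumptions, not stated in the claims.
    return (consistency_loss(depth_left, depth_right)
            + weight * reconstruction_loss(frame, reconstructed))
```

When both predicted depth maps agree and the input frame is reconstructed exactly, this overall loss is zero, which matches the intuition that training drives both errors down together.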
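Claim 7's activation choices can be illustrated with a toy classification head. The ReLU and Softmax definitions below are the standard ones; the two-class fully connected layer is a hypothetical stand-in for the final layer of a ResNet50 (e.g. live vs. spoof), not the patent's actual network, and `classify` is an illustrative name.

```python
import numpy as np

def relu(x):
    # Hidden-layer activation of claim 7: max(0, x) element-wise.
    return np.maximum(x, 0.0)

def softmax(logits):
    # Fully-connected-layer activation of claim 7; subtracting the
    # max is a standard numerical-stability trick.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def classify(features, weights, bias):
    """Toy stand-in for the ResNet50 head: ReLU on hidden features,
    then a fully connected layer followed by Softmax."""
    hidden = relu(features)
    logits = hidden @ weights + bias
    return softmax(logits)
```

The Softmax output is a probability distribution over the classes, which is what the threshold comparison in claim 8 consumes.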
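Claim 8's decision rule reduces to a single threshold comparison on the classification score. The sketch below assumes the score is the predicted live-class probability and that meeting the threshold means "greater than or equal to"; the 0.5 default and the `liveness_decision` name are illustrative, as the patent leaves the preset threshold unspecified.

```python
def liveness_decision(live_probability, threshold=0.5):
    """Claim 8: pass liveness detection if the classification
    recognition result meets the preset threshold, fail otherwise.
    The default threshold of 0.5 is an assumption."""
    return "pass" if live_probability >= threshold else "fail"
```

In practice the threshold would be tuned on validation data to trade off false accepts (spoofs passing) against false rejects (live faces failing).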
CN202011312447.4A 2020-11-20 2020-11-20 Face recognition living body detection method, device, equipment and computer storage medium Pending CN114596599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011312447.4A CN114596599A (en) 2020-11-20 2020-11-20 Face recognition living body detection method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011312447.4A CN114596599A (en) 2020-11-20 2020-11-20 Face recognition living body detection method, device, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN114596599A true CN114596599A (en) 2022-06-07

Family

ID=81802718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011312447.4A Pending CN114596599A (en) 2020-11-20 2020-11-20 Face recognition living body detection method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN114596599A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105023010A (en) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face living body detection method and system
CN106897675A (en) * 2017-01-24 2017-06-27 上海交通大学 The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features
CN108764091A (en) * 2018-05-18 2018-11-06 北京市商汤科技开发有限公司 Biopsy method and device, electronic equipment and storage medium
US20190034702A1 (en) * 2017-07-26 2019-01-31 Baidu Online Network Technology (Beijing) Co., Ltd. Living body detecting method and apparatus, device and storage medium
CN109712228A (en) * 2018-11-19 2019-05-03 中国科学院深圳先进技术研究院 Establish method, apparatus, electronic equipment and the storage medium of Three-dimension Reconstruction Model
CN110674759A (en) * 2019-09-26 2020-01-10 深圳市捷顺科技实业股份有限公司 Monocular face in-vivo detection method, device and equipment based on depth map
CN111382607A (en) * 2018-12-28 2020-07-07 北京三星通信技术研究有限公司 Living body detection method and device and face authentication system

Similar Documents

Publication Publication Date Title
CN109118470B (en) Image quality evaluation method and device, terminal and server
CN111225611B (en) Systems and methods for facilitating analysis of wounds in a target object
CN111488756A (en) Face recognition-based living body detection method, electronic device, and storage medium
CN104680510A (en) RADAR parallax image optimization method and stereo matching parallax image optimization method and system
CN110674800B (en) Face living body detection method and device, electronic equipment and storage medium
CN112767279B (en) Underwater image enhancement method for generating countermeasure network based on discrete wavelet integration
CN111597933B (en) Face recognition method and device
CN113936302B (en) Training method and device for pedestrian re-recognition model, computing equipment and storage medium
CN108875907B (en) Fingerprint identification method and device based on deep learning
CN110532746B (en) Face checking method, device, server and readable storage medium
CN111914762A (en) Gait information-based identity recognition method and device
CN111067522A (en) Brain addiction structural map assessment method and device
CN110121109A (en) Towards the real-time source tracing method of monitoring system digital video, city video monitoring system
CN114612987A (en) Expression recognition method and device
CN114677722A (en) Multi-supervision human face in-vivo detection method integrating multi-scale features
CN111639545A (en) Face recognition method, device, equipment and medium
KR20140074905A (en) Identification by iris recognition
CN114596599A (en) Face recognition living body detection method, device, equipment and computer storage medium
CN116524609A (en) Living body detection method and system
CN116012875A (en) Human body posture estimation method and related device
CN115546909A (en) Living body detection method and device, access control system, equipment and storage medium
CN111597896B (en) Abnormal face recognition method, recognition device, recognition apparatus, and storage medium
CN113780492A (en) Two-dimensional code binarization method, device and equipment and readable storage medium
CN113989870A (en) Living body detection method, door lock system and electronic equipment
CN114004974A (en) Method and device for optimizing images shot in low-light environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination