CN112580395A - Depth information-based 3D face living body recognition method, system, device and medium - Google Patents


Info

Publication number
CN112580395A
CN112580395A (application CN201910931633.7A)
Authority
CN
China
Prior art keywords
face
depth information
living body
information
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910931633.7A
Other languages
Chinese (zh)
Inventor
陈荡荡
段兴
朱力
吕方璐
汪博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Guangjian Technology Co Ltd
Original Assignee
Shenzhen Guangjian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Guangjian Technology Co Ltd filed Critical Shenzhen Guangjian Technology Co Ltd
Priority to CN201910931633.7A
Publication of CN112580395A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a depth information-based 3D face living body recognition method, system, device, and medium, comprising the following steps: collecting multiple pieces of face image information, each comprising face depth information and face frame coordinate information; inputting the face depth information and the face frame coordinate information into a preset neural network model to train a face living body recognition model; and recognizing an object to be recognized with the face living body recognition model, generating a recognition result, and displaying the recognition result. According to the invention, the face depth information and the face frame coordinate information are simultaneously input into the input layer of the neural network model, and face region extraction and data preprocessing are performed on the depth information in the input layer to train the face living body recognition model, which achieves accurate recognition of live faces with a recognition accuracy of 99.6%.

Description

Depth information-based 3D face living body recognition method, system, device and medium
Technical Field
The present invention relates to living human face recognition, and in particular, to a 3D living human face recognition method, system, device, and medium based on depth information.
Background
Face living body recognition means using a computer to identify whether an input face is real or forged in some way. Common face living body recognition methods include those based on motion instructions, facial texture analysis, infrared information, and depth information. The motion-instruction approach is effective but offers a poor user experience. The facial-texture approach is fast and convenient but has a low accuracy rate. The infrared- and depth-based approaches require the assistance of an infrared camera and a depth camera, respectively, but achieve very high accuracy. The depth-information-based approach is therefore a research hotspot of current face living body recognition technology.
A Convolutional Neural Network (CNN) is a class of feed-forward neural networks with convolution operations and a deep structure, and is one of the representative algorithms of deep learning. The input layer of a conventional CNN converts the training image data into matrix form before feeding it into the network. This retains the original data to the greatest extent, but the data volume is large and training is slow. In face living body recognition, if the whole image is fed into the neural network for training, depth information from non-face regions such as the background and clutter will affect the recognition result, so such interference data should be reduced as much as possible.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a depth information-based 3D face living body recognition method, system, device, and medium.
The 3D face living body identification method based on the depth information provided by the invention comprises the following steps:
step S1: collecting a plurality of pieces of face image information, wherein each piece of face image information comprises face depth information and face frame coordinate information;
step S2: inputting the face depth information and the face frame coordinate information into a preset neural network model to train a face living body recognition model;
step S3: and identifying the object to be identified through the face living body identification model, generating an identification result, and displaying the identification result.
Preferably, the step S1 includes the steps of:
step S101: collecting RGB images and face depth information of each face in multiple directions, and converting the face depth information into visual depth images;
step S102: acquiring a first face frame position in the RGB image, and acquiring a second face frame position in the depth image by aligning the depth image with the corresponding RGB image;
step S103: and determining the coordinate information of the face frame according to the position of the second face frame.
Preferably, the step S2 includes the steps of:
step S201: inputting the face depth information and the face frame coordinate information into an input layer of a preset neural network model;
step S202: the input layer extracts face region depth information through face depth information and face frame coordinate information, and then sequentially carries out random processing and normalization processing on the face region depth information to generate target depth information;
step S203: and inputting the target depth information into a convolution layer of the neural network model, thereby realizing the training of the human face living body recognition model.
Preferably, the step S3 includes the steps of:
step S301: collecting a video stream for the object to be identified;
step S302: extracting a single-frame image from the video stream, and acquiring position information of a face region from the single-frame image;
step S303: identifying the face region through a face living body identification model, judging the authenticity of the face region, and marking a face identification frame and an identification result on the single-frame image;
step S304: and displaying the single-frame image on a front-end display page in a video stream mode.
Preferably, the random processing comprises any one or more of the following operations:
-left and right rotation;
-mirror flipping;
-gaussian blur;
-edge cropping.
Preferably, the plurality of directions include any of a front side, a left side, a right side, an upper side, a lower side, an upper left side, an upper right side, a lower left side, and a lower right side.
Preferably, the step S203 includes the steps of:
step S2031: the first convolutional layer convolves the 64×64 input target depth information into 8×31×31 data, and the first pooling layer reduces the 8×31×31 data to 8×16×16 data;
step S2032: the second convolutional layer and the second pooling layer further convolve and downsample the 8×16×16 data to generate 128×4×4 data;
step S2033: the first fully connected layer maps the 128×4×4 data to a 128-dimensional vector, and the second fully connected layer computes a 2-dimensional vector from it for classification, i.e., determining whether the input is a real face.
The invention provides a depth information-based 3D face living body recognition system, which is used to implement the above depth information-based 3D face living body recognition method and comprises:
the image acquisition module is used for acquiring a plurality of pieces of face image information, and each piece of face image information comprises face depth information and face frame coordinate information;
the model training module is used for inputting the face depth information and the face frame coordinate information into a preset neural network model to train a face living body recognition model;
and the face recognition module is used for recognizing the object to be recognized through the face living body recognition model and generating a recognition result, and then displaying the recognition result.
The invention provides a 3D face living body recognition device based on depth information, which comprises:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to perform the steps of the depth information-based 3D face living body recognition method via execution of the executable instructions.
The invention provides a computer readable storage medium for storing a program, which when executed implements the steps of the depth information based 3D face living body recognition method.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the face depth information and the face frame coordinate information are simultaneously input into the input layer of the neural network model, and the face depth information is subjected to face region extraction and data preprocessing in the input layer, so that the training of the face living body recognition model is realized, the preparation recognition of the face living body can be realized, and the recognition accuracy rate reaches 99.6%.
The face living body recognition module can be applied to the aspects of high requirements on the face recognition safety, such as face gates, human certificate verification, face check-in and the like, and the safety and the accuracy of a face recognition system are ensured by carrying out living body recognition on the face.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below show only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort. Other features, objects, and advantages of the invention will become more apparent from the detailed description of non-limiting embodiments, read with reference to the following drawings:
FIG. 1 is a flowchart illustrating steps of a 3D face living body recognition method based on depth information according to an embodiment of the present invention;
FIG. 2 is a flowchart of the steps for generating face frame coordinate information according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of training a face living body recognition model according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the steps of training a face recognition model using a convolutional neural network according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the process of face recognition by a face living body recognition model according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a 3D face living body recognition system based on depth information according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a 3D face living body recognition device based on depth information in an embodiment of the present invention; and
fig. 8 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention; all such variations and modifications fall within the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The invention provides a 3D face living body identification method based on depth information, and aims to solve the problems in the prior art.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of steps of a depth information-based 3D face living body recognition method in an embodiment of the present invention, and as shown in fig. 1, the depth information-based 3D face living body recognition method provided by the present invention includes the following steps:
step S1: and acquiring a plurality of pieces of face image information, wherein each piece of face image information comprises face depth information and face frame coordinate information.
In the embodiment of the invention, the face image information is acquired by adopting a depth camera, and the depth camera can acquire an RGB image, an Infrared (IR) image and depth information at the same time, wherein the RGB image and the depth information are data required in the embodiment of the invention.
Fig. 2 is a flowchart of a step of generating face frame coordinate information according to an embodiment of the present invention, and as shown in fig. 2, the step S1 includes the following steps:
step S101: collecting RGB images and face depth information of each face in multiple directions, and converting the face depth information into visual depth images;
step S102: acquiring a first face frame position in the RGB image, and acquiring a second face frame position in the depth image by aligning the depth image with the corresponding RGB image;
step S103: and determining the coordinate information of the face frame according to the position of the second face frame.
In an embodiment of the present invention, the plurality of directions includes any of the front, left, right, upper, lower, upper-left, upper-right, lower-left, and lower-right sides. Face depth information reflects the spatial features of the face, so complete depth information must be captured during data acquisition. Collecting depth information from the front of the face captures most facial features such as the eyes, nose, and mouth, but due to the limitations of the depth camera, holes can appear in the depth information at the edges of the face, so the collected depth information may not reflect the real situation of the face. The ears, jaw, forehead, and similar regions also contain a large amount of depth information, so depth information is additionally collected from the 8 directions besides the front. When multiple faces are present in the RGB image, the face with the largest area is selected for output when obtaining the first face frame position.
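The face-frame selection and clamping logic described above can be sketched as follows. The function names, box representation, and clamping behavior are illustrative assumptions, not part of the patent; a real implementation would obtain the candidate boxes from a face detector run on the RGB image and reuse them on the aligned depth image.

```python
# Hypothetical sketch of steps S102-S103: keep the largest detected face box
# from the RGB image and clamp it to the bounds of the aligned depth image.
# Boxes are (x, y, w, h) tuples; all names here are illustrative.

def largest_face_box(boxes):
    """Return the (x, y, w, h) box with the largest area, or None if empty."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])

def clamp_box(box, width, height):
    """Clamp a box so it lies entirely within a width x height image."""
    x, y, w, h = box
    x = max(0, min(x, width - 1))
    y = max(0, min(y, height - 1))
    w = min(w, width - x)
    h = min(h, height - y)
    return (x, y, w, h)

# Example: three detected faces; the largest is kept and clamped to 640x480.
boxes = [(10, 10, 40, 50), (100, 80, 120, 150), (300, 200, 60, 70)]
face = clamp_box(largest_face_box(boxes), 640, 480)
```

The clamped box then serves as the face frame coordinate information fed to the model in step S2.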
In the embodiment of the invention, 300 pieces of face image information are used as training samples, each containing 9 poses. In addition, negative samples are collected by printing pictures and displaying images on an iPad screen; 300 negative samples are collected, mainly divided into black-and-white photo prints, color photo prints, and black-and-white and color photos shown on an iPad.
Step S2: inputting the face depth information and the face frame coordinate information into a preset neural network model to train a face living body recognition model;
fig. 3 is a flowchart of steps of training a living human face recognition model according to an embodiment of the present invention, and as shown in fig. 3, the step S2 includes the following steps:
step S201: inputting the face depth information and the face frame coordinate information into an input layer of a preset neural network model;
step S202: the input layer extracts face region depth information through face depth information and face frame coordinate information, and then sequentially carries out random processing and normalization processing on the face region depth information to generate target depth information;
step S203: and inputting the target depth information into a convolution layer of the neural network model, thereby realizing the training of the human face living body recognition model.
In the embodiment of the present invention, the face region in the depth information is about 100 pixels across; to retain more feature information while ensuring the training and recognition speed of the model, the depth information is normalized to a size of 64×64.
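A minimal sketch of the input-layer preprocessing just described: crop the face region out of the depth map using the face-frame coordinates, resize it to 64×64, and scale the depth values into [0, 1]. The nearest-neighbour resize and the min-max value scaling are assumptions; the patent only states the 64×64 target size.

```python
# Hedged sketch of step S202's extraction and normalization. Depth maps are
# plain nested lists here; boxes are (x, y, w, h). All helper names are ours.

def crop(depth, box):
    """Extract the face region given by a (x, y, w, h) face frame."""
    x, y, w, h = box
    return [row[x:x + w] for row in depth[y:y + h]]

def resize_nn(img, size=64):
    """Nearest-neighbour resize to size x size (assumed method)."""
    h, w = len(img), len(img[0])
    return [[img[i * h // size][j * w // size] for j in range(size)]
            for i in range(size)]

def normalize(img):
    """Min-max scale all depth values into [0, 1] (assumed scheme)."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1
    return [[(v - lo) / span for v in row] for row in img]

# Synthetic 120x120 depth patch; a ~100-pixel face region becomes the
# 64x64 normalized target depth information fed to the convolutional layers.
depth = [[float(i + j) for j in range(120)] for i in range(120)]
target = normalize(resize_nn(crop(depth, (10, 10, 100, 100)), 64))
```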
In an embodiment of the present invention, the random processing includes any one or more of the following operations:
-left and right rotation;
-mirror flipping;
-gaussian blur;
-edge cropping.
In the embodiment of the invention, the face region depth information in the training samples is randomly processed in these various ways, which greatly increases the complexity of the training samples and improves the robustness of the model, so that the face living body recognition model can extract and classify features even in complex scenes and poses.
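The random processing above can be sketched roughly as follows. Mirror flipping and edge cropping follow the listed operations directly; a box blur stands in for Gaussian blur, left/right rotation is omitted for brevity, and the 0.5 per-operation probability is our assumption, not from the patent.

```python
import random

def mirror_flip(img):
    """Horizontal mirror of a nested-list image."""
    return [row[::-1] for row in img]

def edge_crop(img, margin=2):
    """Trim a small border; the margin value is an assumption."""
    return [row[margin:-margin] for row in img[margin:-margin]]

def box_blur(img):
    """Crude 3x3 mean filter, standing in for Gaussian blur."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[x][y]
                    for x in range(max(0, i - 1), min(h, i + 2))
                    for y in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def random_augment(img, rng):
    """Apply each operation independently with probability 0.5 (assumed)."""
    for op in (mirror_flip, edge_crop, box_blur):
        if rng.random() < 0.5:
            img = op(img)
    return img

sample = [[float(i * 8 + j) for j in range(8)] for i in range(8)]
augmented = random_augment(sample, random.Random(0))
```

After augmentation the patch would be resized back to 64×64 and normalized as in step S202.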
Fig. 4 is a flowchart of the steps of training a face recognition model through a convolutional neural network in the embodiment of the present invention, and as shown in fig. 4, the step S203 includes the following steps:
step S2031: the first convolutional layer convolves the 64×64 input target depth information into 8×31×31 data, and the first pooling layer reduces the 8×31×31 data to 8×16×16 data;
step S2032: the second convolutional layer and the second pooling layer further convolve and downsample the 8×16×16 data to generate 128×4×4 data;
step S2033: the first fully connected layer maps the 128×4×4 data to a 128-dimensional vector, and the second fully connected layer computes a 2-dimensional vector from it for classification, i.e., determining whether the input is a real face.
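The spatial dimensions stated in steps S2031-S2033 can be reproduced by tracing output sizes through the layers. The patent gives only the resulting shapes, so the kernel size of 4, stride of 2, and ceil-mode 2×2 pooling below are assumptions chosen to match them (64 → 31 → 16 → 7 → 4).

```python
# Shape trace for the described CNN; kernel/stride values are assumptions.
import math

def conv_out(size, kernel, stride):
    # valid convolution: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    # ceil-mode pooling, needed for 31 -> 16
    return math.ceil((size - kernel) / stride) + 1

s1 = conv_out(64, kernel=4, stride=2)  # first conv:  64 -> 31
p1 = pool_out(s1)                      # first pool:  31 -> 16
s2 = conv_out(p1, kernel=4, stride=2)  # second conv: 16 -> 7
p2 = pool_out(s2)                      # second pool:  7 -> 4
flat = 128 * p2 * p2                   # 128x4x4 = 2048 features into FC-128
```

The flattened 128×4×4 tensor then feeds the 128-unit fully connected layer, whose output the final 2-unit layer classifies as real or spoof.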
Step S3: and identifying the object to be identified through the face living body identification model, generating an identification result, and displaying the identification result.
Fig. 5 is a flowchart of steps of performing face recognition by using a living face recognition model in the embodiment of the present invention, and as shown in fig. 5, the step S3 includes the following steps:
step S301: collecting a video stream for the object to be identified;
step S302: extracting a single-frame image from the video stream, and acquiring position information of a face region from the single-frame image;
step S303: identifying the face region through a face living body identification model, judging the authenticity of the face region, and marking a face identification frame and an identification result on the single-frame image;
step S304: and displaying the single-frame image on a front-end display page in a video stream mode.
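The recognition loop of steps S301-S304 can be sketched as below. The detector and classifier are stand-in stubs, and the frame/annotation representation is ours; a real system would pull frames from a depth camera SDK and call the trained CNN.

```python
# Hedged sketch of per-frame liveness recognition (steps S301-S304).
# detect_face and liveness_model are placeholder stubs, not the patent's code.

def detect_face(frame):
    """Stub detector: pretend the face occupies a fixed central box."""
    return (16, 16, 32, 32)

def liveness_model(face_patch):
    """Stub classifier: any non-empty patch is labelled 'real' here."""
    return "real" if face_patch else "spoof"

def annotate(frame, box, label):
    """Record the recognition frame and result for display (step S304)."""
    return {"frame": frame, "box": box, "label": label}

def process_stream(frames):
    results = []
    for frame in frames:                     # S302: extract single frames
        box = detect_face(frame)             # S302: face region position
        x, y, w, h = box
        patch = [row[x:x + w] for row in frame[y:y + h]]
        results.append(annotate(frame, box, liveness_model(patch)))  # S303
    return results

# Three synthetic 64x64 "depth frames" standing in for a video stream.
frames = [[[0] * 64 for _ in range(64)] for _ in range(3)]
annotated = process_stream(frames)
```

The annotated frames would then be re-encoded as a video stream for the front-end display page.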
In the embodiment of the invention, three different machine learning models were trained and compared. Logistic regression and a linear SVM, as traditional machine learning methods, train quickly and have relatively simple processing flows, but their final results are weaker than deep learning based on the convolutional neural network; overall, the linear SVM performs better than logistic regression. The face living body recognition model in this embodiment was obtained through 10,000 iterations of training. By repeatedly increasing the complexity of the face samples, e.g., horizontal rotation, mirror flipping, edge cropping, and changing the distance between the face region and the camera, the resulting face living body recognition model achieves a recognition accuracy of 0.996875.
Fig. 6 is a schematic block diagram of a depth information-based 3D face living body recognition system in an embodiment of the present invention, and as shown in fig. 6, the depth information-based 3D face living body recognition system provided in the present invention is configured to implement the depth information-based 3D face living body recognition method, and includes:
the image acquisition module is used for acquiring a plurality of pieces of face image information, and each piece of face image information comprises face depth information and face frame coordinate information;
the model training module is used for inputting the face depth information and the face frame coordinate information into a preset neural network model to train a face living body recognition model;
and the face recognition module is used for recognizing the object to be recognized through the face living body recognition model and generating a recognition result, and then displaying the recognition result.
The embodiment of the invention also provides a depth information-based 3D face living body recognition device, comprising a processor and a memory storing executable instructions of the processor, wherein the processor is configured to perform the steps of the depth information-based 3D face living body recognition method via execution of the executable instructions.
As described above, this embodiment simultaneously inputs the face depth information and the face frame coordinate information into the input layer of the neural network model, and performs face region extraction and data preprocessing on the depth information in the input layer to train the face living body recognition model, thereby achieving accurate recognition of live faces with a recognition accuracy of 99.6%.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "platform."
Fig. 7 is a schematic structural diagram of a 3D living human face recognition device based on depth information in an embodiment of the present invention. An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 7. The electronic device 600 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 630 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, etc.
Wherein the storage unit stores program code which can be executed by the processing unit 610 to cause the processing unit 610 to perform the steps according to various exemplary embodiments of the present invention described in the above section of the depth information based 3D living human face recognition method of the present specification. For example, processing unit 610 may perform the steps as shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in FIG. 7, other hardware and/or software modules may be used in conjunction with electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
The embodiment of the invention also provides a computer-readable storage medium for storing a program, and the program, when executed, implements the steps of the depth information-based 3D face living body recognition method. In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to carry out the steps described above for the depth information-based 3D face living body recognition method according to various exemplary embodiments of the present invention.
As shown above, when the program of the computer-readable storage medium of this embodiment is executed, the face depth information and the face frame coordinate information are simultaneously input to the input layer of the neural network model, and face region extraction and data preprocessing are performed on the depth information in the input layer to train the face living body recognition model, thereby achieving accurate recognition of live faces with a recognition accuracy of 99.6%.
Fig. 8 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present invention. Referring to fig. 8, a program product 800 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic or optical forms, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, it may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In the embodiment of the invention, the face depth information and the face frame coordinate information are input simultaneously to the input layer of the neural network model, where the face depth information undergoes face region extraction and data preprocessing to train the face living body recognition model, so that accurate recognition of living faces can be achieved, with a recognition accuracy of 99.6%. The face living body recognition module can be applied in scenarios with high face recognition security requirements, such as face-recognition gates, identity document verification, and face-based check-in, and ensures the security and accuracy of a face recognition system by performing living body recognition on faces.
The embodiments in the present description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (10)

1. A 3D face living body recognition method based on depth information, characterized by comprising the following steps:
step S1: collecting a plurality of pieces of face image information, wherein each piece of face image information comprises face depth information and face frame coordinate information;
step S2: inputting the face depth information and the face frame coordinate information into a preset neural network model to train a face living body recognition model;
step S3: recognizing the object to be recognized through the face living body recognition model, generating a recognition result, and displaying the recognition result.
2. The depth information based 3D face living body recognition method according to claim 1, wherein the step S1 comprises the steps of:
step S101: collecting an RGB image and face depth information of each face in multiple directions, and converting the face depth information into a visualized depth image;
step S102: acquiring a first face frame position in the RGB image, and acquiring a second face frame position in the depth image by aligning the depth image with the corresponding RGB image;
step S103: and determining the coordinate information of the face frame according to the position of the second face frame.
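Under the alignment assumption of step S102 (the depth image registered pixel-for-pixel to the RGB image), the face box found in the RGB image can be reused directly on the depth map. A minimal sketch with hypothetical function and variable names, not taken from the patent:

```python
import numpy as np

def face_box_to_depth(depth_map, rgb_face_box):
    """Crop the face region from a depth map using a box detected
    in the RGB image, assuming both images are pixel-aligned."""
    x, y, w, h = rgb_face_box
    hgt, wid = depth_map.shape
    # Clamp the box to the depth map bounds before cropping
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(wid, x + w), min(hgt, y + h)
    return depth_map[y0:y1, x0:x1]

# Example: a 480x640 depth map and a face box from the RGB detector
depth = np.zeros((480, 640), dtype=np.uint16)
face_depth = face_box_to_depth(depth, (100, 120, 64, 64))
```

When the depth camera is not factory-registered to the RGB sensor, an extra reprojection through the camera intrinsics/extrinsics would be needed before this crop.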
3. The depth information based 3D face living body recognition method according to claim 1, wherein the step S2 comprises the steps of:
step S201: inputting the face depth information and the face frame coordinate information into an input layer of a preset neural network model;
step S202: the input layer extracts face region depth information through face depth information and face frame coordinate information, and then sequentially carries out random processing and normalization processing on the face region depth information to generate target depth information;
step S203: and inputting the target depth information into a convolution layer of the neural network model, thereby realizing the training of the human face living body recognition model.
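A minimal sketch of the input-layer processing in steps S201 and S202, assuming a 64x64 target size (matching the network input of claim 7) and using a mirror flip as the single example of the random processing; all names are illustrative, not from the patent:

```python
import numpy as np

def preprocess_face_depth(depth_map, face_box, out_size=64, rng=None):
    """Extract the face region from a depth map, apply one random
    augmentation (a mirror flip), and normalize depth values to [0, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    x, y, w, h = face_box
    # Face region extraction from depth info plus face frame coordinates
    face = depth_map[y:y + h, x:x + w].astype(np.float32)
    # Random horizontal mirror, one of the augmentations listed in claim 5
    if rng.random() < 0.5:
        face = face[:, ::-1]
    # Nearest-neighbour resize to the fixed network input size
    ys = np.arange(out_size) * face.shape[0] // out_size
    xs = np.arange(out_size) * face.shape[1] // out_size
    face = face[np.ix_(ys, xs)]
    # Normalization processing: scale the depth range into [0, 1]
    span = face.max() - face.min()
    return (face - face.min()) / span if span > 0 else np.zeros_like(face)

# Example: a synthetic 200x200 depth map with a face at (50, 50, 80, 80)
demo = preprocess_face_depth(
    np.arange(40000, dtype=np.float32).reshape(200, 200), (50, 50, 80, 80))
```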
4. The depth information based 3D face living body recognition method according to claim 1, wherein the step S3 comprises the steps of:
step S301: collecting a video stream for the object to be identified;
step S302: extracting a single-frame image from the video stream, and acquiring position information of a face region from the single-frame image;
step S303: identifying the face region through a face living body identification model, judging the authenticity of the face region, and marking a face identification frame and an identification result on the single-frame image;
step S304: and displaying the single-frame image on a front-end display page in a video stream mode.
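The per-frame loop of steps S301 to S304 can be sketched as a generator; `detect_face` and `liveness_model` are hypothetical callables standing in for the face detector and the trained face living body recognition model:

```python
def annotate_stream(frames, detect_face, liveness_model):
    """Detect the face region in each frame, judge its authenticity
    with the liveness model, and yield (frame, box, verdict) tuples
    ready to be rendered back into the displayed video stream."""
    for frame in frames:
        box = detect_face(frame)  # face region position (step S302)
        if box is None:
            # No face found: pass the frame through unannotated
            yield frame, None, None
            continue
        # Judge the authenticity of the face region (step S303)
        verdict = "real" if liveness_model(frame, box) else "fake"
        yield frame, box, verdict
```

A front-end page would then draw `box` and `verdict` onto each frame before streaming it out.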
5. The 3D face living body recognition method based on the depth information as claimed in claim 3, wherein the random processing comprises any one or more of the following processing modes:
-left and right rotation;
-mirror flipping;
-gaussian blur;
-edge cropping.
6. The depth information-based 3D face living body recognition method according to claim 1, wherein the multiple directions include any of the front, left, right, upper, lower, upper-left, upper-right, lower-left, and lower-right directions.
7. The depth information-based 3D face living body recognition method according to claim 3, wherein the step S203 comprises the steps of:
step S2031: the first convolution layer convolves the input 64x64 target depth information into 8x31x31 data, and the first pooling layer reduces the 8x31x31 data to 8x16x16 data;
step S2032: the second convolution layer and the second pooling layer further convolve and reduce the dimension of the 8x16x16 data to generate 128x4x4 data;
step S2033: the first fully-connected layer outputs a 128-dimensional vector from the 128x4x4 data, and the second fully-connected layer performs classification on this vector to output a 2-dimensional vector, that is, it determines whether the input is a real face.
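The claimed tensor shapes are consistent with, for example, 3x3 valid convolutions of stride 2 followed by 2x2 pooling with ceil rounding; since the claim specifies only the output shapes, the kernel and stride values below are inferred assumptions used to check the arithmetic:

```python
import math

def conv_out(size, kernel=3, stride=2):
    # Output spatial size of a valid (no-padding) convolution
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    # Output spatial size of pooling with ceil rounding of the last window
    return math.ceil((size - kernel) / stride) + 1

# Walk the 64x64 input through the claimed stages
s = conv_out(64)    # first convolution layer  -> 31  (8x31x31)
s = pool_out(s)     # first pooling layer      -> 16  (8x16x16)
s = conv_out(s)     # second convolution layer -> 7
s = pool_out(s)     # second pooling layer     -> 4   (128x4x4)
flat = 128 * s * s  # flattened input to the first fully-connected layer
```

The 128x4x4 output flattens to 2048 values feeding the 128-dimensional fully-connected layer, whose output the final 2-way classifier consumes.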
8. A depth information-based 3D face living body recognition system for realizing the depth information-based 3D face living body recognition method according to any one of claims 1 to 7, comprising:
the image acquisition module is used for acquiring a plurality of pieces of face image information, and each piece of face image information comprises face depth information and face frame coordinate information;
the model training module is used for inputting the face depth information and the face frame coordinate information into a preset neural network model to train a face living body recognition model;
and the face recognition module is used for recognizing the object to be recognized through the face living body recognition model, generating a recognition result, and displaying the recognition result.
9. A 3D face living body recognition device based on depth information, comprising:
a processor;
a memory having stored therein executable instructions of the processor;
wherein the processor is configured to execute the steps of the depth information based 3D face live recognition method according to any one of claims 1 to 7 via execution of the executable instructions.
10. A computer-readable storage medium storing a program, wherein the program is configured to implement the steps of the depth information based 3D face live recognition method according to any one of claims 1 to 7 when executed.
CN201910931633.7A 2019-09-29 2019-09-29 Depth information-based 3D face living body recognition method, system, device and medium Pending CN112580395A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910931633.7A CN112580395A (en) 2019-09-29 2019-09-29 Depth information-based 3D face living body recognition method, system, device and medium

Publications (1)

Publication Number Publication Date
CN112580395A (en) 2021-03-30

Family

ID=75111051

Country Status (1)

Country Link
CN (1) CN112580395A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127170A (en) * 2016-07-01 2016-11-16 重庆中科云丛科技有限公司 A kind of merge the training method of key feature points, recognition methods and system
CN107871134A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN109753875A (en) * 2018-11-28 2019-05-14 北京的卢深视科技有限公司 Face identification method, device and electronic equipment based on face character perception loss
CN110060205A (en) * 2019-05-08 2019-07-26 北京迈格威科技有限公司 Image processing method and device, storage medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221767A (en) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 Method for training living body face recognition model and method for recognizing living body face and related device
CN113221767B (en) * 2021-05-18 2023-08-04 北京百度网讯科技有限公司 Method for training living body face recognition model and recognizing living body face and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination